This repo provides the inference script for Keypoint-Based Control of MOFA-Video, which supports long video generation via the proposed periodic sampling strategy.
```
git clone https://github.com/MyNiuuu/MOFA-Video.git
cd ./MOFA-Video
```
This script has been tested with CUDA 11.7.
```
cd ./MOFA-Video-Keypoint
conda create -n mofa_ldmk python==3.10
conda activate mofa_ldmk
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
cd ..
```
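After the environment is set up, a quick sanity check can confirm that PyTorch was installed with CUDA support and can see a GPU. This is a minimal sketch, not part of the repo; it assumes only that `torch` was installed by `requirements.txt` and degrades gracefully if it was not:

```python
def cuda_summary():
    """Report whether PyTorch is installed, built with CUDA, and can see a GPU."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if not torch.cuda.is_available():
        return f"torch {torch.__version__}: CUDA not available"
    # Name of the first visible GPU, e.g. "NVIDIA A100-SXM4-40GB".
    return f"torch {torch.__version__}: CUDA OK ({torch.cuda.get_device_name(0)})"

print(cuda_summary())
```

If this reports that CUDA is not available, re-check the driver/toolkit installation before proceeding; the script expects a CUDA 11.7 setup.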
- Download the checkpoint of CMP from here and put it into `./MOFA-Video-Keypoint/models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
- Download the `ckpts` folder from the huggingface repo, which contains the necessary pretrained checkpoints, and put it under `./MOFA-Video-Keypoint`. You may use `git lfs` to download the entire `ckpts` folder:
  - Download `git lfs` from https://git-lfs.github.com. It is commonly used for cloning repositories with large model checkpoints on HuggingFace. NOTE: If you encounter the error `git: 'lfs' is not a git command` on Linux, you can try this solution, which has worked well in my case.
  - Execute `git clone https://huggingface.co/MyNiuuu/MOFA-Video-Hybrid` to download the complete HuggingFace repository, which includes the `ckpts` folder.
  - Copy or move the `ckpts` folder to `./MOFA-Video-Keypoint`.
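Before launching inference, it can help to verify that the downloads above landed in the expected locations. The sketch below is my own convenience check, not part of the repo; it only checks that the two checkpoint directories exist and are non-empty (the exact file names inside them may vary by release):

```python
from pathlib import Path

# Directories the download steps above should have populated.
REQUIRED_DIRS = [
    "MOFA-Video-Keypoint/ckpts",
    "MOFA-Video-Keypoint/models/cmp/experiments/semiauto_annot/"
    "resnet50_vip+mpii_liteflow/checkpoints",
]

def missing_checkpoint_dirs(repo_root="."):
    """Return the required checkpoint directories that are absent or empty."""
    root = Path(repo_root)
    return [
        rel for rel in REQUIRED_DIRS
        if not (root / rel).is_dir() or not any((root / rel).iterdir())
    ]

if __name__ == "__main__":
    for rel in missing_checkpoint_dirs():
        print(f"missing or empty: {rel}")
```

Run it from the repository root (`./MOFA-Video`); no output means both checkpoint locations are populated.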
```
cd ./MOFA-Video-Keypoint
chmod 777 inference.sh
./inference.sh
```