SPIN - SMPL oPtimization IN the loop

Code repository for the paper:
Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop
Nikos Kolotouros*, Georgios Pavlakos*, Michael J. Black, Kostas Daniilidis
ICCV 2019
[paper] [project page]

[teaser figure]

Installation instructions

If you want to run the inference code and these instructions are not compatible with your setup, please check the cuda11_fix branch: there we have updated the installation procedure and inference code to be compatible with recent CUDA/PyTorch versions.

We suggest using the Docker image we provide, which has all dependencies compiled and preinstalled. Alternatively, you can create a python3 virtual environment and install all the relevant dependencies as follows:

virtualenv spin -p python3
source spin/bin/activate
pip install -U pip
pip install -r requirements.txt

If you choose to use a virtual environment, please look at the instructions for installing pyrender.

After finishing the installation, you can continue with running the demo/evaluation/training code. In case you want to evaluate our approach on Human3.6M, you also need to manually install the pycdf package of the spacepy library to process some of the original files. If you face difficulties with the installation, you can find more elaborate instructions here.
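For reference, once pycdf is installed, the raw Human3.6M pose files can be read like this (a minimal sketch; the file path is illustrative, and the 'Pose' key follows the layout of the official Human3.6M release):

from spacepy import pycdf

# Path and key reflect the Human3.6M release layout; adjust to your local copy.
cdf = pycdf.CDF('S1/MyPoseFeatures/D3_Positions/Directions.cdf')
poses = cdf['Pose'][:]   # raw 3D joint positions for the whole sequence
cdf.close()
print(poses.shape)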

Fetch data

We provide a script to fetch the necessary data for training and evaluation. You need to run:

./fetch_data.sh

The GMM prior is trained and provided by the original SMPLify work, while the implementation of the GMM prior function follows the SMPLify-X work. Please respect the license of the respective works.

Besides these files, you also need to download the SMPL model. You will need the neutral model for training and running the demo code, while the male and female models will be necessary for evaluation on the 3DPW dataset. Please go to the websites for the corresponding projects and register to get access to the downloads section. In case you need to convert the models to be compatible with python3, please follow the instructions here.
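For reference, the usual python2-to-python3 conversion amounts to stripping the chumpy objects from the original pickle; a minimal sketch under these assumptions (filenames are illustrative, and chumpy must be installed to unpickle the original model):

import pickle
import numpy as np

# Load the python2 pickle and replace chumpy arrays with plain numpy arrays.
with open('basicModel_neutral_lbs_10_207_0_v1.0.0.pkl', 'rb') as f:
    model = pickle.load(f, encoding='latin1')
for key, val in model.items():
    if 'chumpy' in str(type(val)):
        model[key] = np.array(val)
with open('smpl_neutral_py3.pkl', 'wb') as f:
    pickle.dump(model, f)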

Final fits

We also release the improved fits that our method produced at the end of SPIN training. You can download them from here. Each .npz file contains the pose and shape parameters of the SMPL model for the training examples, following the order of the training .npz files. For each example, a flag is also included, indicating whether the quality of the fit is acceptable for training (following an automatic heuristic based on the joints reprojection error).
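As a quick sanity check, you can inspect one of the downloaded files with numpy (the filename below is hypothetical; check your download for the actual names):

import numpy as np

fits = np.load('coco_fits.npz')   # hypothetical filename
print(fits.files)                 # names of the arrays stored in the file
for name in fits.files:
    print(name, fits[name].shape, fits[name].dtype)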

Run demo code

To run our method, you need a bounding box around the person. The person needs to be centered inside the bounding box and the bounding box should be relatively tight. You can either supply the bounding box directly or provide an OpenPose detection file. In the latter case we infer the bounding box from the detections.
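For reference, a minimal sketch of how a bounding box can be derived from an OpenPose detection file (the JSON keys follow the standard OpenPose output format; the confidence threshold and padding factor here are illustrative, not necessarily the values used in demo.py):

import json
import numpy as np

def bbox_from_openpose(json_file, conf_thresh=0.2, padding=1.2):
    # Keypoints come as flat (x, y, confidence) triplets for the first person.
    with open(json_file) as f:
        keypoints = json.load(f)['people'][0]['pose_keypoints_2d']
    kp = np.reshape(keypoints, (-1, 3))
    valid = kp[kp[:, 2] > conf_thresh, :2]    # keep confident joints only
    center = (valid.max(axis=0) + valid.min(axis=0)) / 2
    size = padding * (valid.max(axis=0) - valid.min(axis=0)).max()
    return center, size                       # box center and side length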

In summary, we provide 3 different ways to use our demo code and models:

  1. Provide only an input image (using --img), in which case it is assumed that it is already cropped with the person centered in the image.
  2. Provide an input image as before, together with the OpenPose detection .json (using --openpose). Our code will use the detections to compute the bounding box and crop the image.
  3. Provide an image and a bounding box (using --bbox). The expected format for the json file can be seen in examples/im1010_bbox.json.

Example with OpenPose detection .json

python3 demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg --openpose=examples/im1010_openpose.json

Example with predefined Bounding Box

python3 demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg --bbox=examples/im1010_bbox.json

Example with cropped and centered image

python3 demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg

Running the previous command will save the results in examples/im1010_{shape,shape_side}.png. The file im1010_shape.png shows the reconstruction of the model overlaid on the image. We also render a side view, saved in im1010_shape_side.png.

Run evaluation code

Besides the demo code, we also provide code to evaluate our models on the datasets we employ for our empirical evaluation. Before continuing, please make sure that you follow the details for data preprocessing.

Example usage:

python3 eval.py --checkpoint=data/model_checkpoint.pt --dataset=h36m-p1 --log_freq=20

Running the above command will compute the MPJPE and Reconstruction Error on the Human3.6M dataset (Protocol 1). The --dataset option can take different values based on the type of evaluation you want to perform:

  1. Human3.6M Protocol 1 --dataset=h36m-p1
  2. Human3.6M Protocol 2 --dataset=h36m-p2
  3. 3DPW --dataset=3dpw
  4. LSP --dataset=lsp
  5. MPI-INF-3DHP --dataset=mpi-inf-3dhp

You can also save the results (predicted SMPL parameters, camera and 3D pose) in a .npz file using --result=out.npz.

For the MPI-INF-3DHP dataset specifically, we include evaluation code only for MPJPE (before and after alignment). If you want to evaluate on all metrics reported in the paper you should use the official MATLAB test code provided with the dataset together with the saved detections.
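For reference, a minimal numpy sketch of the two standard metrics (MPJPE, and the reconstruction error, i.e. MPJPE after a Procrustes similarity alignment); this illustrates the usual definitions rather than the exact implementation in eval.py:

import numpy as np

def mpjpe(pred, gt):
    # Mean Euclidean distance per joint; pred and gt are (num_joints, 3).
    return np.linalg.norm(pred - gt, axis=-1).mean()

def reconstruction_error(pred, gt):
    # MPJPE after aligning pred to gt with a similarity transform (Procrustes).
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)         # SVD of the cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    return mpjpe(scale * p @ R.T + mu_g, gt)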

Run training code

Due to license limitations, we cannot provide the SMPL parameters for Human3.6M (recovered using MoSh). Even if you do not have access to these parameters, you can still use our training code with data from the other datasets. Again, make sure that you follow the details for data preprocessing.

Example usage:

python3 train.py --name train_example --pretrained_checkpoint=data/model_checkpoint.pt --run_smplify

You can view the full list of command line options by running python3 train.py --help. The default values are the ones used to train the models in the paper. Running the above command will start the training process. It will also create the folders logs and logs/train_example, which are used to save model checkpoints and TensorBoard logs. If you start a TensorBoard instance pointing at the logs directory, you can monitor the logs stored during training.
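For example:

tensorboard --logdir logs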

Citing

If you find this code useful for your research or use data generated by our method, please consider citing the following paper:

@Inproceedings{kolotouros2019spin,
  Title          = {Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop},
  Author         = {Kolotouros, Nikos and Pavlakos, Georgios and Black, Michael J and Daniilidis, Kostas},
  Booktitle      = {ICCV},
  Year           = {2019}
}
