Environment

  • freeglut (sudo apt-get install freeglut3-dev)
  • (optional) EGL, used for headless rendering (apt install libgl1-mesa-dri libegl1-mesa libgbm1)

⚠️ For EGL headless rendering (without a screen, e.g., on clusters), export PYOPENGL_PLATFORM=egl before running these scripts; otherwise, unset PYOPENGL_PLATFORM.
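For example (commands only; the variable is read by PyOpenGL when it is imported):

# headless machine / cluster node: render through EGL
export PYOPENGL_PLATFORM=egl
# machine with a display: use the default platform
unset PYOPENGL_PLATFORM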

⚠️ If the program runs very slowly and gets stuck in mesh.ray.intersects_any, uninstall and reinstall pyembree and trimesh; more details in issue #62.
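A reinstall sketch, assuming the packages were installed with pip inside the icon conda environment and that pyembree comes from conda-forge (your channels and versions may differ):

conda activate icon
pip uninstall -y pyembree trimesh        # remove the existing installs
conda install -c conda-forge pyembree    # reinstall the embree bindings
pip install trimesh                      # reinstall trimesh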

THuman2.0

Please refer to THuman2.0-Dataset to download the original scans into data/thuman2/scans. Then generate all.txt, which contains all the subject names (0000~0525), by running ls > ../all.txt inside data/thuman2/scans, as shown below.
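For example (assuming the raw scans have already been extracted into data/thuman2/scans):

cd data/thuman2/scans
ls > ../all.txt    # writes data/thuman2/all.txt, one subject name (0000~0525) per line
cd -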

The SMPL and SMPL-X fits can be downloaded as follows:

wget https://download.is.tue.mpg.de/icon/SMPL-X.zip --no-check-certificate -O ./data/thuman2/SMPL-X.zip
unzip ./data/thuman2/SMPL-X.zip -d ./data/thuman2/
rm ./data/thuman2/SMPL-X.zip

👀 ./sample_data contains one THuman2.0 example that shows the data folder structure. Note that PaMIR only supports SMPL; if you want to use SMPL-X instead, please refer to ./scripts/tetrahedronize_scripits to generate the data needed for voxelization.

Debug Mode

conda activate icon
python -m scripts.render_batch -debug -headless
python -m scripts.visibility_batch -debug

Then you will get the rendered samples and visibility results under debug/.

Generate Mode

1. Rendering phase: RGB images, normal images, and calibration arrays. If you also need depth maps, update render_batch.py as follows:

render_types = ["light", "normal"]
# change to
render_types = ["light", "normal", "depth"]

Then run render_batch.py, which takes about 20 minutes for THuman2.0:

conda activate icon
python -m scripts.render_batch -headless -out_dir data/

2. Visibility phase: SMPL-X-based visibility computation

python -m scripts.visibility_batch -out_dir data/

✅ NOW, you have the full synthetic dataset under data/thuman2_{num_views}views, which will be used for training.
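As a quick sanity check (the folder name depends on num_views, so the paths below are illustrative):

ls -d data/thuman2_*views         # the generated dataset folder
ls data/thuman2_*views | head     # first few per-subject entries, e.g. 0000, 0001, ...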

➡️ NEXT, please jump to Training Instruction for more details.

Examples

[Example renders for one subject: RGB image; Normal (Front/Back); Normal (SMPL-X, Front/Back); Visibility; Depth (Front/Back); Depth (SMPL-X, Front/Back)]

Citation

If you use this dataset for your research, please consider citing:

@InProceedings{tao2021function4d,
  title={Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors},
  author={Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Liu, Pengpeng and Dai, Qionghai and Liu, Yebin},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR2021)},
  month={June},
  year={2021},
}

This PyTorch dataloader borrows heavily from MonoPortDataset, so please also consider citing:

@inproceedings{li2020monoport,
  title={Monocular Real-Time Volumetric Performance Capture},
  author={Li, Ruilong and Xiu, Yuliang and Saito, Shunsuke and Huang, Zeng and Olszewski, Kyle and Li, Hao},
  booktitle={European Conference on Computer Vision},
  pages={49--67},
  year={2020},
  organization={Springer}
}
  
@incollection{li2020monoportRTL,
  title={Volumetric human teleportation},
  author={Li, Ruilong and Olszewski, Kyle and Xiu, Yuliang and Saito, Shunsuke and Huang, Zeng and Li, Hao},
  booktitle={ACM SIGGRAPH 2020 Real-Time Live},
  pages={1--1},
  year={2020}
}