Under development.
Code repository for the paper: Reconstructing Hands in 3D with Transformers
Georgios Pavlakos, Dandan Shan, Ilija Radosavovic, Angjoo Kanazawa, David Fouhey, Jitendra Malik
First you need to clone the repo:
git clone --recursive git@github.com:geopavlakos/hamer.git
cd hamer
We recommend creating a virtual environment for HaMeR. You can use venv:
python3.10 -m venv .hamer
source .hamer/bin/activate
or alternatively conda:
conda create --name hamer python=3.10
conda activate hamer
Then, you can install the rest of the dependencies. The commands below install PyTorch for CUDA 11.7; adapt the index URL to your CUDA version:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu117
pip install -e .[all]
pip install -v -e third-party/ViTPose
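As a quick sanity check of the installation (a minimal check, not part of the original instructions), you can verify that the installed PyTorch build can see your GPU:
# should print the torch version followed by True on a machine with a working CUDA setup
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"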
You also need to download the trained models:
bash fetch_demo_data.sh
Besides these files, you also need to download the MANO model. Please visit the MANO website and register to get access to the downloads section. We only require the right hand model. You need to put MANO_RIGHT.pkl under the _DATA/data/mano folder.
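As a rough sketch of that step (the source path below is a placeholder for wherever you unpacked your MANO download):
# /path/to/mano_download is a placeholder; adjust it to your local MANO download location
mkdir -p _DATA/data/mano
cp /path/to/mano_download/MANO_RIGHT.pkl _DATA/data/mano/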
After downloading the trained models and the MANO file, you can run the demo on the provided example images:
python demo.py \
--img_folder example_data --out_folder demo_out \
--batch_size=48 --side_view --save_mesh --full_frame
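To try the demo on your own images, the same flags apply; a hypothetical invocation (the folder names below are placeholders for your own directories):
# --img_folder and --out_folder below are placeholders; point them at your own directories
python demo.py \
--img_folder /path/to/your_images --out_folder your_demo_out \
--batch_size=16 --full_frame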
First, download the training data to ./hamer_training_data/ by running:
bash fetch_training_data.sh
Then you can start training using the following command:
python train.py exp_name=hamer data=mix_all experiment=hamer_vit_transformer trainer=gpu launcher=local
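If you need to restrict training to particular GPUs, the standard CUDA_VISIBLE_DEVICES environment variable works for any PyTorch job; for example (GPU index 0 is just an illustration):
# restrict the run to GPU 0; this is a generic CUDA/PyTorch mechanism, not a HaMeR-specific option
CUDA_VISIBLE_DEVICES=0 python train.py exp_name=hamer data=mix_all experiment=hamer_vit_transformer trainer=gpu launcher=local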
Checkpoints and logs will be saved to ./logs/.
Parts of the code are taken or adapted from the following repos:
Additionally, we thank StabilityAI for a generous compute grant that enabled this work.
If you find this code useful for your research, please consider citing the following paper:
@inproceedings{pavlakos2023reconstructing,
title={Reconstructing Hands in 3{D} with Transformers},
author={Pavlakos, Georgios and Shan, Dandan and Radosavovic, Ilija and Kanazawa, Angjoo and Fouhey, David and Malik, Jitendra},
booktitle={arxiv},
year={2023}
}