FastPose is a small, fast multi-person pose estimator that uses middle points to group keypoints. It is 46% smaller and 47% faster (forward time) than OpenPose, without using existing model compression and acceleration methods such as MobileNet or quantization. For more details, please refer to the technical report.
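As a rough illustration of middle-point grouping (a hedged sketch, not the repository's actual implementation; the function name, greedy matching strategy, and pixel tolerance below are all assumptions): candidate keypoints of two joint types are paired when their geometric midpoint lies close to a predicted middle point, which keeps keypoints of different people from being joined.

```python
import math

def group_by_midpoint(joints_a, joints_b, midpoints, tol=10.0):
    """Greedily pair joint-A and joint-B candidates whose geometric
    midpoint lies within `tol` pixels of a predicted middle point.
    (Illustrative sketch only; FastPose's real grouping may differ.)"""
    pairs = []
    used_b = set()
    for a in joints_a:
        best = None  # (distance, index into joints_b)
        for j, b in enumerate(joints_b):
            if j in used_b:
                continue
            mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            # Distance from the geometric midpoint of the candidate pair
            # to the nearest predicted middle point.
            d = min(math.hypot(mid[0] - m[0], mid[1] - m[1]) for m in midpoints)
            if d <= tol and (best is None or d < best[0]):
                best = (d, j)
        if best is not None:
            pairs.append((a, joints_b[best[1]]))
            used_b.add(best[1])
    return pairs

# Two people: shoulder/elbow candidates plus a predicted middle point
# between each true shoulder-elbow pair.
shoulders = [(10, 10), (100, 10)]
elbows = [(20, 40), (110, 40)]
mids = [(15, 25), (105, 25)]
print(group_by_midpoint(shoulders, elbows, mids))
```

Because the midpoint of a cross-person pair falls far from every predicted middle point, such pairs are rejected and each person's keypoints stay grouped together.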
- Get the code.
git clone https://github.com/ZexinChen/FastPose.git
- Install PyTorch 0.4.0 and the other dependencies.
pip install -r requirements.txt
- Download the models manually: fastpose.pth (Google Drive | Baidu pan), and place it into ./network/weights.
You can run the code in ./picture_demo.ipynb to see a demo on your own image by changing the test_image path.
- Prepare COCO dataset:
a. Download COCO.json (Google Drive | Baidu pan | Dropbox). Place it into ./data/coco/.
b. Download mask.tar.gz (Google Drive | Baidu pan). Untar it into ./data/coco/.
c. Download the COCO dataset (2014):
bash ./training/getData.sh
The data folder should be organized as follows:
-data
  -coco
    -COCO.json
    -mask
    -annotations
    -images
      -train2014
      -val2014
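To check that the downloads landed in the right places, the tree above can be verified with a small standard-library helper (a convenience sketch; the expected paths simply mirror the layout shown, and `missing_entries` is not part of the repository):

```python
from pathlib import Path

# Relative paths expected under ./data after the steps above.
EXPECTED = [
    "coco/COCO.json",
    "coco/mask",
    "coco/annotations",
    "coco/images/train2014",
    "coco/images/val2014",
]

def missing_entries(data_root="data"):
    """Return the expected files/folders that are absent under data_root."""
    root = Path(data_root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_entries()
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("dataset layout looks complete")
```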
- Run the training script. The default should work fine.
CUDA_VISIBLE_DEVICES=0,1 python3 train.py
FastPose is developed and maintained by Zexin Chen and Yuliang Xiu. Some code is borrowed from the PyTorch version of OpenPose. Thanks to the original authors.
FastPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please contact Cewu Lu.