Realtime Multi-Person Pose Estimation

By Zhe Cao, Tomas Simon, Shih-En Wei, Yaser Sheikh.

Introduction

Code repo for the winning entry of the 2016 MSCOCO Keypoints Challenge and the ECCV Best Demo Award.

Watch our [video result](https://www.youtube.com/watch?v=pW6nZXeWlGM&t=77s) on YouTube.

We present a bottom-up approach for multi-person pose estimation, without using any person detector. For more details, refer to our arXiv paper and our presentation slides from the ILSVRC and COCO workshop 2016.

This project is licensed under the terms of the GPL v3 license.

Contact: Zhe Cao (zhecao@cmu.edu)

Results

Contents

  1. Testing
  2. Training
  3. Citation

Testing

C++ (realtime version)

  • Use our modified caffe: caffe_rtpose. Follow the instructions in that repo.
  • Three input options: images, video, webcam

Matlab (slower)

  • Compatible with general Caffe. Compile matcaffe.
  • Run cd testing; get_model.sh to retrieve our latest MSCOCO model from our web server.
  • Set the caffe path in config.m and run demo.m for example usage.

Python

  • cd testing/python
  • ipython notebook
  • Open demo.ipynb and execute the code (a standalone sketch of the same forward pass is shown below)
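
If you prefer to run outside the notebook, below is a minimal sketch of a single-image forward pass with pycaffe. The model paths, the input blob name, the sample image, and the 368x368 / (x/256 - 0.5) preprocessing are assumptions for illustration; check the deploy prototxt and weights downloaded by get_model.sh and adjust accordingly.

```python
# Minimal single-image forward pass with pycaffe.
# Paths, the 'data' blob name, and the preprocessing are assumptions; check
# the deploy prototxt and model downloaded by get_model.sh and adjust.
import cv2
import numpy as np
import caffe

PROTOTXT = 'model/coco/pose_deploy.prototxt'         # assumed path
WEIGHTS = 'model/coco/pose_iter_440000.caffemodel'   # assumed path

caffe.set_mode_cpu()  # or: caffe.set_mode_gpu(); caffe.set_device(0)
net = caffe.Net(PROTOTXT, WEIGHTS, caffe.TEST)

img = cv2.imread('sample_image/ski.jpg')                             # BGR image (assumed sample)
inp = cv2.resize(img, (368, 368)).astype(np.float32) / 256.0 - 0.5   # assumed normalization
inp = inp.transpose((2, 0, 1))[np.newaxis, ...]                      # -> 1 x 3 x 368 x 368

net.blobs['data'].reshape(*inp.shape)
net.blobs['data'].data[...] = inp
out = net.forward()

# The outputs are part-confidence heatmaps and Part Affinity Fields;
# the exact output blob names depend on the deploy prototxt.
for name, blob in out.items():
    print(name, blob.shape)
```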

Training

Network Architecture

(Figure: network architecture)
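
As described in the paper, the first 10 layers of VGG-19 produce a feature map F; each stage then has two branches, one predicting part-confidence heatmaps S and one predicting Part Affinity Fields L, and later stages take the concatenation [S, L, F] as input. The snippet below is only numpy shape bookkeeping to illustrate that data flow for the COCO model (19 heatmap channels, 38 PAF channels, 6 stages); it is not the actual network definition.

```python
# Shape-only sketch of the two-branch, multi-stage data flow (not a real CNN).
import numpy as np

H = W = 46            # feature-map resolution for a 368x368 input (stride 8)
N_HEATMAPS = 19       # 18 COCO keypoints + 1 background channel
N_PAFS = 38           # 2 channels (x, y) for each of 19 limb connections

def vgg_features(image):
    """Stand-in for the first 10 VGG-19 layers: image -> feature map F."""
    return np.zeros((128, H, W), dtype=np.float32)

def branch(inputs, n_out):
    """Stand-in for one branch of one stage: inputs -> n_out channel maps."""
    return np.zeros((n_out, H, W), dtype=np.float32)

image = np.zeros((3, 368, 368), dtype=np.float32)
F = vgg_features(image)

S = branch(F, N_HEATMAPS)             # stage 1: confidence maps
L = branch(F, N_PAFS)                 # stage 1: part affinity fields
for _ in range(5):                    # stages 2..6 refine the predictions
    x = np.concatenate([S, L, F], axis=0)
    S = branch(x, N_HEATMAPS)
    L = branch(x, N_PAFS)

print(S.shape, L.shape)               # (19, 46, 46) (38, 46, 46)
```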

Training Steps

  • Run cd training; bash getData.sh to obtain the COCO images in dataset/COCO/images/, keypoints annotations in dataset/COCO/annotations/ and COCO official toolbox in dataset/COCO/coco/.
  • Run getANNO.m in MATLAB to convert the annotation format from JSON to .mat files in dataset/COCO/mat/.
  • Run genCOCOMask.m in MATLAB to obtain the mask images for unlabeled persons. You can use parfor in MATLAB to speed up the code.
  • Run genJSON('COCO') to generate a JSON file in the dataset/COCO/json/ folder. The JSON file contains the raw information needed for training.
  • Run python genLMDB.py to generate your LMDB. (You can also download our LMDB for the COCO dataset (189GB file) by: bash get_lmdb.sh.) A quick readability check for the generated LMDB is sketched after this list.
  • Download our modified caffe: caffe_train. Compile pycaffe. It will be merged with caffe_rtpose (for testing) soon.
  • Run python setLayers.py --exp 1 to generate the prototxt and shell file for training.
  • Download the VGG-19 model; we use it to initialize the first 10 layers for training.
  • Run bash train_pose.sh 0,1 (generated by setLayers.py) to start training with two GPUs.
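
Once genLMDB.py (or get_lmdb.sh) has finished, one way to confirm the LMDB is readable is to open it with the lmdb Python bindings and decode a single entry as a Caffe Datum. The LMDB path below is an assumed output location; point it at wherever your LMDB was actually written.

```python
# Sanity-check the training LMDB: count entries and decode the first Datum.
import lmdb
from caffe.proto import caffe_pb2

LMDB_PATH = 'dataset/COCO/lmdb'   # assumed output location of genLMDB.py

env = lmdb.open(LMDB_PATH, readonly=True, lock=False)
with env.begin() as txn:
    print('entries:', txn.stat()['entries'])
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)
    print('first key:', key)
    print('channels x height x width:',
          datum.channels, datum.height, datum.width)
```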

Related repository

CVPR'16, Convolutional Pose Machines

Citation

Please cite the paper in your publications if it helps your research:

@article{cao2016realtime,
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  journal = {arXiv preprint arXiv:1611.08050},
  year = {2016}
}

@inproceedings{wei2016cpm,
  title = {Convolutional pose machines},
  author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
  booktitle = {CVPR},
  year = {2016}
}
