Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, Jitendra Malik. In CVPR, 2017. Project Page
Please check out the interactive notebook, which shows reconstructions using the learned models. To run it, you'll need to:
- Install working implementations of Torch and iTorch.
- Download the pre-trained models for PASCAL3D (490MB) and ShapeNet (250MB), and extract them to 'cachedir/snapshots/{pascal,shapenet}/' (a minimal extraction sketch follows this list).
- Edit the path to the Blender executable in the demo script.
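The extraction step might look like the following. This is only a sketch: it assumes the downloaded archives are gzipped tarballs named `pascal.tar.gz` and `shapenet.tar.gz` that unpack into `pascal/` and `shapenet/` directories; adjust the names to match the files you actually downloaded.

```bash
# Hypothetical archive names; adjust to match the downloaded files.
mkdir -p cachedir/snapshots
tar -xzf pascal.tar.gz   -C cachedir/snapshots/   # expected to yield cachedir/snapshots/pascal/
tar -xzf shapenet.tar.gz -C cachedir/snapshots/   # expected to yield cachedir/snapshots/shapenet/
```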
To use our proposed loss function for training, you'll first need to compile its C implementation so it can be used from Torch:
cd drcLoss
luarocks make rpsem-alpha-1.rockspec
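As an optional sanity check, you can ask luarocks to list the installed rock afterwards. The filter string below is guessed from the rockspec filename and may not match the name the rock is actually registered under:

```bash
# Filter string guessed from the rockspec filename; adjust if the rock uses another name.
luarocks list rpsem
```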
To train and evaluate your own models, or to reproduce the main experiments in the paper, please see the detailed README files for PASCAL3D and ShapeNet.
You'll need to install some additional dependencies (json and matio):
sudo apt-get install libmatio2
luarocks install matio
luarocks install json
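As a quick, optional check that these dependencies are visible from Torch, you can try requiring them from the command line; this assumes the `th` REPL is on your PATH and that both rocks are loaded via `require` under the names shown:

```bash
# Loads both rocks and prints a confirmation if neither require fails.
th -e "require 'matio'; require 'json'; print('matio and json loaded')"
```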
If you use this code for your research, please consider citing:
@inProceedings{drcTulsiani17,
  title={Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency},
  author={Shubham Tulsiani and Tinghui Zhou and Alexei A. Efros and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}