Weakly Supervised 3D Object Detection from Point Clouds (VS3D)

Created by Zengyi Qin, Jinglu Wang and Yan Lu. This repository contains an implementation of the ACM MM 2020 paper. Readers are strongly recommended to create and activate a virtual environment with Python 3.6 before running the code.
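For example, using Python's built-in venv module (the environment name vs3d_env is just a placeholder):

python3.6 -m venv vs3d_env
source vs3d_env/bin/activate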

Quick Demo with Jupyter Notebook

Clone this repository:

git clone https://github.com/Zengyi-Qin/Weakly-Supervised-3D-Object-Detection.git

Enter the main folder and run installation:

pip install -r requirements.txt

Download the demo data to the main folder and run unzip vs3d_demo.zip. Readers can then try out the quick demo with Jupyter Notebook:

cd core
jupyter notebook demo.ipynb

Training

Download the KITTI Object Detection Dataset (image, calib and label) and place them in data/kitti. Download the ground planes and front-view XYZ maps from here and run unzip vs3d_train.zip. Download the pretrained teacher network from here and run unzip vs3d_pretrained.zip. The data folder should have the following structure:

├── data
│   ├── demo
│   ├── kitti
│   │   ├── training
│   │   │   ├── calib
│   │   │   ├── image_2
│   │   │   ├── label_2
│   │   │   ├── sphere
│   │   │   ├── planes
│   │   │   └── velodyne
│   │   ├── train.txt
│   │   └── val.txt
│   └── pretrained
│       ├── student
│       └── teacher
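Before training, the layout can be sanity-checked with a few lines of Python. This helper is not part of the repository; run it from the main folder:

import os

# Paths expected by the structure above (relative to the main folder).
REQUIRED = [
    'data/kitti/training/calib',
    'data/kitti/training/image_2',
    'data/kitti/training/label_2',
    'data/kitti/training/sphere',
    'data/kitti/training/planes',
    'data/kitti/training/velodyne',
    'data/kitti/train.txt',
    'data/kitti/val.txt',
    'data/pretrained/student',
    'data/pretrained/teacher',
]

missing = [p for p in REQUIRED if not os.path.exists(p)]
print('Data layout OK' if not missing else 'Missing: ' + ', '.join(missing))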

The sphere folder contains the front-view XYZ maps converted from the velodyne point clouds using the script ./preprocess/sphere_map.py; a sketch of this projection is given below.
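The conversion follows a standard spherical (range-image) projection. The following is a minimal sketch of the idea, not the repository's implementation: the function name, the 64x512 grid size and the field-of-view limits are assumptions chosen for the HDL-64E sensor, while the authoritative parameters live in ./preprocess/sphere_map.py.

import numpy as np

def velodyne_to_sphere(bin_path, height=64, width=512,
                       fov_up=2.0, fov_down=-24.8, fov_h=45.0):
    """Project a KITTI velodyne scan onto a front-view XYZ map (sketch)."""
    # KITTI .bin files store float32 rows of (x, y, z, reflectance).
    pts = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)[:, :3]
    r = np.linalg.norm(pts, axis=1) + 1e-8
    azimuth = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))   # left/right angle
    elevation = np.degrees(np.arcsin(pts[:, 2] / r))         # up/down angle

    # Keep only points inside the assumed front-facing field of view.
    keep = (np.abs(azimuth) < fov_h) & (elevation < fov_up) & (elevation > fov_down)
    pts, azimuth, elevation, r = pts[keep], azimuth[keep], elevation[keep], r[keep]

    # Map angles to pixel indices.
    cols = ((azimuth + fov_h) / (2 * fov_h) * (width - 1)).astype(np.int32)
    rows = ((fov_up - elevation) / (fov_up - fov_down) * (height - 1)).astype(np.int32)

    # Write points farthest-first so the nearest point wins each cell.
    order = np.argsort(-r)
    sphere = np.zeros((height, width, 3), dtype=np.float32)
    sphere[rows[order], cols[order]] = pts[order]
    return sphere

After data preparation, readers can train VS3D from scratch by running: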

cd core
python main.py --mode train --gpu GPU_ID

The models are saved in ./core/runs/weights during training. Readers can refer to ./core/main.py for other training options.

Inference

Readers can run inference on the KITTI validation set with:

cd core
python main.py --mode evaluate --gpu GPU_ID --student_model SAVED_MODEL

Readers can also use the pretrained model directly for inference by passing --student_model ../data/pretrained/student/model_lidar_158000. Predicted 3D bounding boxes are saved in ./output/bbox in KITTI format, which is summarized below.
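Each line of a KITTI-format file describes one object with 15 whitespace-separated fields, followed by a confidence score in prediction files. A minimal, self-contained reader (the example file name is hypothetical):

def read_kitti_detections(txt_path):
    """Parse a KITTI-format label or prediction file into a list of dicts."""
    detections = []
    with open(txt_path) as f:
        for line in f:
            v = line.split()
            detections.append({
                'type': v[0],                                # e.g. 'Car'
                'truncated': float(v[1]),
                'occluded': int(float(v[2])),
                'alpha': float(v[3]),                        # observation angle
                'bbox_2d': [float(x) for x in v[4:8]],       # left, top, right, bottom (px)
                'dimensions': [float(x) for x in v[8:11]],   # height, width, length (m)
                'location': [float(x) for x in v[11:14]],    # x, y, z in camera coords (m)
                'rotation_y': float(v[14]),
                # Predictions append a confidence score; ground-truth labels do not.
                'score': float(v[15]) if len(v) > 15 else None,
            })
    return detections

# Example (hypothetical file name):
# dets = read_kitti_detections('../output/bbox/000001.txt')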
