shinke-li/reproduce-campus3d
Reproducibility of "Campus3D: A Photogrammetry Point Cloud Benchmark for Outdoor Scene Hierarchical Understanding"

Introduction

This repository contains a re-implementation of this ACM MM 2020 paper, based on the original authors' repository. It also presents reproduced results for the paper, with trained models available in the MODEL ZOO. The reduced version of the Campus3D dataset can be downloaded from the official website or from the alternative source.

Installation

The whole package can be downloaded with the following command.

git clone https://github.com/Yuqing-Liao/reproduce-campus3d.git

Dependencies can be installed using the provided script.

cd reproduce-campus3d
pip install -r requirements.txt

The compressed Campus3D dataset file campus3d-reduce.zip can be downloaded from the official website. Put it into data/ and unzip it with the script below.

cd reproduce-campus3d/data
unzip campus3d-reduce.zip
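
If the unzip utility is unavailable, the same extraction can be done with Python's standard zipfile module. A minimal sketch; the helper name is ours, and the archive path is the one used above:

```python
import zipfile
from pathlib import Path

def extract_dataset(archive_path):
    """Extract a dataset archive into its containing directory
    (e.g. data/campus3d-reduce.zip unpacks into data/) and
    return the list of extracted member names."""
    archive = Path(archive_path)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)
        return zf.namelist()

# Usage, from the repository root:
# extract_dataset("data/campus3d-reduce.zip")
```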

Training & Evaluation

Train from scratch

To train the model from scratch, first check the configuration files in config/. In particular, set the value of IS_PRETRAINED to false, then run the experiment, e.g.:

cd reproduce-campus3d
python run.py --model 'pointnet2' --mc_level -1 --exp_name 'EXP_NAME'

The 'EXP_NAME' is a user-defined experiment name. The models will be saved in checkpoints/EXP_NAME/models, and other output files will be saved in checkpoints/EXP_NAME.
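
For scripted sweeps over models or levels, the command above can be assembled programmatically. A small sketch; the helper is ours, but the flags (--model, --mc_level, --exp_name, --eval) are the ones run.py accepts per this README:

```python
def build_run_command(model, mc_level, exp_name, evaluate=False):
    """Assemble a run.py invocation as an argument list
    (suitable for subprocess.run)."""
    cmd = ["python", "run.py"]
    if evaluate:
        cmd += ["--eval", "true"]
    cmd += ["--model", model,
            "--mc_level", str(mc_level),
            "--exp_name", exp_name]
    return cmd

# The training command shown above:
print(" ".join(build_run_command("pointnet2", -1, "EXP_NAME")))
# → python run.py --model pointnet2 --mc_level -1 --exp_name EXP_NAME
```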

Train from pretrained model

Pretrained models are available on Google Drive and can be downloaded through the links in the table below. You can train either from the downloaded models or from your own pretrained models. To train from a pretrained model, first check the configuration files in config/. In particular, set IS_PRETRAINED to true and PRETRAINED_MODEL_PATH to the path of the model to continue training from, then run the experiment, e.g.:

cd reproduce-campus3d
python run.py --model 'pointnet2' --mc_level -1 --exp_name 'EXP_NAME'

In this way, the models will be saved in checkpoints/EXP_NAME/models, and other output files will be saved in checkpoints/EXP_NAME.

Evaluation

To evaluate the model on the test set, first check the configuration files in config/. In particular, set PRETRAINED_MODEL_PATH to the path of the model to evaluate, then run the experiment, e.g.:

cd reproduce-campus3d
python run.py --eval true --model 'pointnet2' --mc_level -1 --exp_name 'EXP_NAME'

In this way, the output files will be saved in checkpoints/EXP_NAME.

Experiments

Hierarchical Learning (HL) Experiments

The hierarchical learning experiments were designed to demonstrate the effectiveness of the Multi-Task and Hierarchical Ensemble (MT+HE) method. Multi-classifiers (MC) at each level were also trained for comparison. For training, the argument --mc_level can be set to 0-4 for MC experiments at levels 0-4, or to -1 for MT+HE experiments across all levels. In addition, MT training has two stages: Multi-task Learning without consistency loss (MTnc) and Multi-task Learning with consistency loss (MT), where the MT model is trained from the pretrained MTnc model.

To run the evaluation, the argument --mc_level in the run script can be set to any value from 0-4 to test all 5 MC models together for the MC and MC+HE results, or to -1 to test the target model for the MTnc or MT(+HE) results. Note that for MC evaluation, PRETRAINED_MODEL_PATH must be set to a list of the 5 model paths in the config file.
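
The two evaluation modes can be summarized in a short sketch. The flag values come from this README; the checkpoint paths are illustrative assumptions (the file names follow the MODEL ZOO table below):

```python
# Hypothetical checkpoint locations for the five per-level MC models;
# in the config file, PRETRAINED_MODEL_PATH should list all five.
MC_MODEL_PATHS = [f"checkpoints/hl/models/MC{lvl}.t7" for lvl in range(5)]

def eval_command(mc_level, exp_name):
    return (f"python run.py --eval true --model 'pointnet2' "
            f"--mc_level {mc_level} --exp_name '{exp_name}'")

# MC / MC+HE: any mc_level from 0-4 evaluates all five MC models together.
print(eval_command(0, "eval_mc"))

# MTnc / MT(+HE): mc_level -1 evaluates the single multi-task model.
print(eval_command(-1, "eval_mt"))
```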

Benchmark Experiments

The semantic segmentation benchmarks were built with three models: PointNet++, PointCNN, and DGCNN. All were trained via the MT+HE method for hierarchical learning on the Campus3D dataset. To run different models, change the --model argument to the desired model. The following are the reference repositories for the PyTorch implementations of the 3D deep models.

PointNet++ GitHub Link

PointCNN GitHub Link

DGCNN GitHub Link

MODEL ZOO

Models

| No. | Model | Name | Method | MC Level | Training Process | Scheduler | Download Link |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | PointNet++ | 'pointnet2' | MC | 0 | 50 epochs (lr=0.01) | cos | MC0.t7 |
| 1 | PointNet++ | 'pointnet2' | MC | 1 | 50 epochs (lr=0.01) | cos | MC1.t7 |
| 2 | PointNet++ | 'pointnet2' | MC | 2 | 50 epochs (lr=0.01) | cos | MC2.t7 |
| 3 | PointNet++ | 'pointnet2' | MC | 3 | 50 epochs (lr=0.01) | cos | MC3.t7 |
| 4 | PointNet++ | 'pointnet2' | MC | 4 | 50 epochs (lr=0.01) | cos | MC4.t7 |
| 5 | PointNet++ | 'pointnet2' | MTnc | -1 | 50 epochs (lr=0.01) | cos | pointnet2_MTnc.t7 |
| 6 | PointCNN | 'pointcnn' | MTnc | -1 | 50 epochs (lr=0.01) | cos | pointcnn_MTnc.t7 |
| 7 | DGCNN | 'dgcnn' | MTnc | -1 | 50 epochs (lr=0.01) | cos | dgcnn_MTnc.t7 |
| 8 | PointNet++ | 'pointnet2' | MT | -1 | 50 epochs (lr=0.01) + 20 epochs with consistency loss (lr=0.01) | cos | pointnet2_MT.t7 |
| 9 | PointCNN | 'pointcnn' | MT | -1 | 50 epochs (lr=0.01) + 30 epochs with consistency loss (lr=0.01) | cos | pointcnn_MT.t7 |
| 10 | DGCNN | 'dgcnn' | MT | -1 | 50 epochs (lr=0.01) + 20 epochs with consistency loss (lr=0.01) | cos | dgcnn_MT.t7 |

Benchmark Experiments Results

Semantic segmentation benchmarks (mIoU% and OA%) for models with MT+HE method

| Benchmark | Model | C1 | C2 | C3 | C4 | C5 |
| --- | --- | --- | --- | --- | --- | --- |
| OA% | PointNet++ | 91.4 | 87.5 | 86.7 | 85.0 | 75.1 |
| OA% | PointCNN | 88.9 | 79.3 | 78.7 | 76.8 | 63.8 |
| OA% | DGCNN | 94.7 | 90.6 | 89.1 | 87.2 | 81.5 |
| mIoU% | PointNet++ | 83.8 | 74.3 | 58.0 | 37.1 | 22.3 |
| mIoU% | PointCNN | 79.7 | 61.5 | 42.8 | 26.3 | 15.0 |
| mIoU% | DGCNN | 89.6 | 80.1 | 63.3 | 43.1 | 28.4 |

These results are produced by models No. 8, No. 9 and No. 10.

Hierarchical Learning Experiments Results

Semantic segmentation benchmarks (OA%) for different HL methods with model PointNet++

| Method | C1 | C2 | C3 | C4 | C5 |
| --- | --- | --- | --- | --- | --- |
| MC | 90.8 | 86.2 | 84.4 | 83.6 | 73.6 |
| MC+HE | 91.4 | 87.4 | 86.5 | 84.8 | 74.9 |
| MTnc | 90.6 | 86.0 | 85.0 | 83.1 | 73.3 |
| MT | 91.4 | 87.4 | 86.7 | 84.9 | 75.2 |
| MT+HE | 91.4 | 87.5 | 86.7 | 85.0 | 75.1 |

These results are produced by models No. 0-4, No. 5 and No. 8. They demonstrate the effectiveness of the MT+HE method for the HL problem. Results with detailed per-class IoU are displayed below.

Semantic segmentation benchmarks (IoU%) for different HL methods with model PointNet++

| Granularity Level | Class | MC | MC+HE | MTnc | MT | MT+HE |
| --- | --- | --- | --- | --- | --- | --- |
| C1 | ground | 85.4 | 86.4 | 85.3 | 86.1 | 86.1 |
| C1 | construction | 79.9 | 80.8 | 79.4 | 81.4 | 81.5 |
| C2 | natural | 81.1 | 82.4 | 80.8 | 82.9 | 82.9 |
| C2 | man_made | 58.5 | 60.9 | 58.7 | 58.1 | 58.5 |
| C2 | construction | 78.8 | 80.8 | 78.5 | 81.3 | 81.5 |
| C3 | natural | 79.2 | 82.4 | 80.8 | 82.9 | 82.9 |
| C3 | play_field | 62.9 | 65.9 | 56.1 | 66.5 | 67.3 |
| C3 | path&stair | 8.7 | 8.2 | 8.7 | 0.0 | 0.0 |
| C3 | driving_road | 58.4 | 60.6 | 57.7 | 58.4 | 58.5 |
| C3 | construction | 76.6 | 80.8 | 78.2 | 81.4 | 81.5 |
| C4 | natural | 81.0 | 82.4 | 80.4 | 82.9 | 82.9 |
| C4 | play_field | 57.3 | 65.9 | 54.0 | 68.2 | 67.3 |
| C4 | path&stair | 9.3 | 8.2 | 8.7 | 0.0 | 0.0 |
| C4 | vehicle | 16.6 | 19.4 | 16.7 | 9.4 | 9.9 |
| C4 | not vehicle | 57.9 | 59.9 | 57.2 | 57.8 | 57.9 |
| C4 | building | 76.5 | 78.2 | 75.4 | 78.8 | 78.8 |
| C4 | link | 0.0 | 0.1 | 0.2 | 0.0 | 0.0 |
| C4 | facility | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| C5 | natural | 80.1 | 82.4 | 80.5 | 83.0 | 82.9 |
| C5 | play_field | 52.7 | 65.9 | 53.5 | 67.0 | 67.3 |
| C5 | sheltered | 10.6 | 7.9 | 9.0 | 0.0 | 0.0 |
| C5 | unsheltered | 7.6 | 7.9 | 8.3 | 0.0 | 0.0 |
| C5 | bus_stop | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 |
| C5 | car | 18.1 | 19.5 | 16.7 | 10.6 | 9.9 |
| C5 | bus | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| C5 | not vehicle | 58.1 | 59.9 | 57.4 | 57.7 | 57.9 |
| C5 | wall | 46.5 | 47.3 | 45.8 | 47.3 | 47.1 |
| C5 | roof | 43.1 | 44.2 | 41.7 | 47.4 | 47.4 |
| C5 | link | 0.2 | 0.1 | 0.4 | 0.0 | 0.0 |
| C5 | artificial_landscape | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| C5 | lamp | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| C5 | others | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
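
The mIoU figures in the benchmark table above are simply the per-level means of these per-class IoU scores. A quick consistency check for the PointNet++ MT+HE column, with all values copied from the two tables:

```python
# Per-class IoU for PointNet++ with MT+HE, grouped by granularity level
# (values from the table above, in table order).
MT_HE_IOU = {
    "C1": [86.1, 81.5],
    "C2": [82.9, 58.5, 81.5],
    "C3": [82.9, 67.3, 0.0, 58.5, 81.5],
    "C4": [82.9, 67.3, 0.0, 9.9, 57.9, 78.8, 0.0, 0.0],
    "C5": [82.9, 67.3, 0.0, 0.0, 0.0, 9.9, 0.0, 57.9,
           47.1, 47.4, 0.0, 0.0, 0.0, 0.0],
}

# mIoU reported in the benchmark table (PointNet++, MT+HE row).
EXPECTED_MIOU = {"C1": 83.8, "C2": 74.3, "C3": 58.0, "C4": 37.1, "C5": 22.3}

for level, ious in MT_HE_IOU.items():
    miou = round(sum(ious) / len(ious), 1)
    assert miou == EXPECTED_MIOU[level], (level, miou)
    print(f"{level}: mIoU = {miou}")
```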
