This is a non-official implementation of CPGNet: just a simple try and a simple reproduction based on SMVF. 😂😂😂
- Here is the official repo.
- The transformation consistency loss is removed due to its large training burden (~2× GPU memory consumption and ~2× training time); a sketch of the dropped loss is given after this list.
- CutMix data augmentation is now available.
- CosineAnnealingWarmUpRestarts works better than StepLR.
- With these changes, there is not much performance gap between WCE and OHEM.
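For reference, the dropped transformation consistency loss needs a second forward pass on a randomly transformed copy of the scan, which is where the extra memory and time come from. Below is a minimal sketch of the idea, assuming a hypothetical `model` that maps an (N, C) point tensor to per-point logits; this is not the official implementation.

```python
import math
import random

import torch
import torch.nn.functional as F


def transformation_consistency_loss(model, points):
    """Sketch of a transformation consistency loss: predictions on the original
    scan and on a randomly yaw-rotated copy should agree. The second forward
    pass is what roughly doubles GPU memory and training time."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    rot = torch.tensor([[math.cos(theta), -math.sin(theta), 0.0],
                        [math.sin(theta),  math.cos(theta), 0.0],
                        [0.0,              0.0,             1.0]],
                       device=points.device, dtype=points.dtype)

    logits_orig = model(points)                # (N, num_classes), assumed interface
    points_rot = points.clone()
    points_rot[:, :3] = points[:, :3] @ rot.T  # rotate xyz, keep remaining features
    logits_rot = model(points_rot)             # second forward pass

    # symmetric KL divergence between the two per-point class distributions
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_rot, dim=-1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))
```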
Please refer to the SMVF repo for installation. Note: make sure the deep_point extension is installed.
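A quick sanity check that the extension built correctly (the module name `deep_point` is taken from the note above):

```python
# Verify that the deep_point CUDA extension from SMVF can be imported.
import deep_point
print("deep_point loaded from:", deep_point.__file__)
```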
Download SemanticKITTI from the official website. Download the Object_Bank from SMVF for CutMix.
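For reference, object-bank CutMix pastes sampled object instances from the bank into the current scan. A minimal sketch is shown below; the `(points, labels)` bank format, the paste count, and the placement ranges are assumptions, and the real SMVF implementation additionally handles overlap checks.

```python
import random

import numpy as np


def cutmix_paste(scan_points, scan_labels, object_bank, num_paste=5):
    """Sketch of object-bank CutMix for LiDAR scans: sample object instances
    (points + labels) from the bank and append them to the current scan after
    a random yaw rotation and translation."""
    for _ in range(num_paste):
        obj_points, obj_labels = random.choice(object_bank)      # (M, 4), (M,) assumed
        obj_points = obj_points.copy()
        theta = random.uniform(0.0, 2.0 * np.pi)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        obj_points[:, :2] = obj_points[:, :2] @ rot.T            # random yaw
        obj_points[:, :2] += np.random.uniform(-30.0, 30.0, 2)   # random xy placement
        scan_points = np.concatenate([scan_points, obj_points], axis=0)
        scan_labels = np.concatenate([scan_labels, obj_labels], axis=0)
    return scan_points, scan_labels
```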
```bash
### Multi-gpus ###
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py --config config/wce.py

### Single-gpu ###
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 train.py --config config/wce.py
```
```bash
### Multi-gpus ###
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 evaluate.py --config config/wce.py --start_epoch 0 --end_epoch 49

### Single-gpu ###
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 evaluate.py --config config/wce.py --start_epoch 0 --end_epoch 49
```
```bash
python find_best_metric.py --name wce
```
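`find_best_metric.py` selects the best checkpoint among the evaluated epochs. A minimal sketch of that kind of selection is below, assuming a hypothetical per-epoch result file containing a line like `mIoU: 62.4`; the real script's paths and log format may differ.

```python
import argparse
import glob
import re

# Hypothetical sketch: scan per-epoch evaluation logs and report the best mIoU.
parser = argparse.ArgumentParser()
parser.add_argument("--name", default="wce")
args = parser.parse_args()

best_path, best_miou = None, -1.0
for path in sorted(glob.glob(f"experiments/{args.name}/results/epoch_*.txt")):
    match = re.search(r"mIoU:\s*([0-9.]+)", open(path).read())
    if match and float(match.group(1)) > best_miou:
        best_miou, best_path = float(match.group(1)), path

print(f"best checkpoint: {best_path} (mIoU {best_miou:.1f})")
```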
Models have been uploaded to this Google Drive folder.
| CPGNet | Loss | Batch_Size * GPUs | mIoU |
|---|---|---|---|
| Our Reproduced | WCE (stage=1, w/o CutMix) | 6 * 2 (FP16 on 3090) | 58.6 |
| Our Reproduced | WCE (stage=1) | 6 * 2 (FP16 on 3090) | 62.4 |
| Our Reproduced | WCE (stage=2) | 6 * 2 (FP16 on 3090) | 64.9 |
| Paper Reported | WCE (stage=1) | - | 62.5 |
| Paper Reported | WCE (stage=2) | - | 65.9 |
Note:
- Our models are trained using only 1/4 of the training data; see here in the code.
- We did not use TTA, which actually improves performance slightly (by about 1.0%). If you want to enable it, comment here and here; a sketch of the idea is given below.
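For reference, a minimal sketch of the kind of TTA meant here: average per-point logits over the original scan and flipped copies. The `model` interface and the set of flips are assumptions, not this repo's exact code.

```python
import torch


@torch.no_grad()
def tta_predict(model, points):
    """Sketch of flip-based test-time augmentation: run the (assumed)
    points -> (N, num_classes) model on x/y-flipped copies of the scan
    and average the logits before taking the argmax."""
    flips = [(1.0, 1.0), (-1.0, 1.0), (1.0, -1.0), (-1.0, -1.0)]
    logits_sum = 0.0
    for sx, sy in flips:
        aug = points.clone()
        aug[:, 0] *= sx   # flip along x
        aug[:, 1] *= sy   # flip along y
        logits_sum = logits_sum + model(aug)
    return (logits_sum / len(flips)).argmax(dim=-1)
```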
Known issues:
- Over-fitting to the validation set: the model that reaches 64.9 mIoU on the validation set achieves only 61.8 mIoU on the CodaLab online test set (probably because we only use 1/4 of the data for training).
If you find this work useful, please consider citing:
```
@inproceedings{li2022cpgnet,
  title={CPGNet: Cascade Point-Grid Fusion Network for Real-Time LiDAR Semantic Segmentation},
  author={Li, Xiaoyan and Zhang, Gang and Pan, Hongyu and Wang, Zhenhua},
  booktitle={2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2022},
  organization={IEEE}
}
```