Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion



[Figure: model architecture overview]

[Figure: radar illustration]


Table of Contents

- About
- Getting Started
  - Installation
  - Data Preparation
  - Training
  - Test
- Citation
- License
- Acknowledgement


About

In this work, we propose REDFormer, a novel transformer-based 3D object detection model that tackles low-visibility conditions through a practical and cost-effective bird's-eye-view camera-radar fusion design. Using the nuScenes dataset with multi-radar point clouds, weather information, and time-of-day data, our model outperforms state-of-the-art (SOTA) models in classification and detection accuracy. We also provide extensive ablation studies that quantify the contribution of each model component to addressing these challenges. In particular, our experiments show that the model significantly outperforms the baseline in low-visibility scenarios, with a 31.31% improvement in rainy scenes and a 46.99% improvement in nighttime scenes.


Getting Started

Installation

Please refer to our installation guide for details.
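
A minimal setup sketch, assuming the environment is created from the environment.yml at the repository root (the environment name "redformer" is an assumption; please follow the installation guide for the authoritative steps):

# create the conda environment described by environment.yml
conda env create -f environment.yml
# activate it (the environment name is an assumption)
conda activate redformer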

Data Preparation

Download the nuScenes full dataset

Please download the nuScenes v1.0 full dataset and the CAN bus expansion from the nuScenes official website.
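
A minimal placement sketch, assuming the downloaded archives are in the current directory (the archive names below are assumptions); the extracted layout must match the folder structure shown further down:

# create the expected dataset root
mkdir -p data/nuscenes/full
# extract the nuScenes archives into it (archive name is an assumption)
tar -xzf v1.0-trainval_meta.tgz -C data/nuscenes/full
# extract the CAN bus expansion so that data/nuscenes/full/can_bus/ exists
unzip can_bus.zip -d data/nuscenes/full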

Generating annotation files

bash scripts/create_data.sh
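
The script should generate the extended annotation files listed in the folder structure below; a quick sanity check:

# the *_infos_ext_*.pkl files should now exist under the dataset root
ls data/nuscenes/full/nuscenes_infos_ext_*.pkl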

Download the checkpoint files

Please place 'bevformer_raw.pth' in 'ckpts/raw_model/' and the R101-DCN checkpoint ('r101_dcn_fcos3d_pretrain.pth') in 'ckpts/' (see the sketch after the download list below).

Backbone Download

- R101-DCN model download
- bevformer_raw model download

Model Download

- Our REDFormer model download
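
A minimal placement sketch, assuming the three checkpoint files have been downloaded to the current directory (target paths follow the folder structure below):

# create the checkpoint directories expected by the configs
mkdir -p ckpts/raw_model
# move the pretrained and final weights into place
mv bevformer_raw.pth ckpts/raw_model/
mv r101_dcn_fcos3d_pretrain.pth ckpts/
mv redformer.pth ckpts/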

Folder structure

REDFormer
├── ckpts        # folder for checkpoints
│   ├── raw_model/
│   │   └── bevformer_raw.pth
│   ├── r101_dcn_fcos3d_pretrain.pth
│   └── redformer.pth
├── data         # folder for nuScenes dataset
│   ├── nuscenes/
│   │   ├── full/
│   │   │   ├── can_bus/
│   │   │   ├── maps/
│   │   │   ├── samples/
│   │   │   ├── sweeps/
│   │   │   ├── v1.0-test/
│   │   │   ├── v1.0-trainval/
│   │   │   ├── nuscenes_infos_ext_train.pkl
│   │   │   ├── nuscenes_infos_ext_val.pkl
│   │   │   ├── nuscenes_infos_ext_rain_val.pkl
│   │   │   └── nuscenes_infos_ext_night_val.pkl
├── projects/
├── scripts/
├── tools/
├── environment.yml
├── LICENSE
├── README.md
└── setup.py

Training

bash scripts/train.sh

Test

bash scripts/test.sh

To evaluate performance on the rain or night scenes, open the config file projects/configs/redformer/redformer.py and modify the value of environment_test_subset, as sketched below.
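
A minimal sketch, assuming environment_test_subset takes subset names matching the generated annotation files (the accepted values, e.g. 'rain' or 'night', are an assumption; please check the config):

# hypothetical edit: switch evaluation to the rain subset, then rerun the test script
sed -i "s/^environment_test_subset = .*/environment_test_subset = 'rain'/" projects/configs/redformer/redformer.py
bash scripts/test.sh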


Citation

If you find REDFormer useful, you are highly encouraged to cite our paper:

@inproceedings{cui_radar_2023,
	title = {Radar {Enlighten} the {Dark}: {Enhancing} {Low}-{Visibility} {Perception} for {Automated} {Vehicles} with {Camera}-{Radar} {Fusion}},
	shorttitle = {{REDFormer}},
	doi = {10.48550/arXiv.2305.17318},
	booktitle = {IEEE International Conference on Intelligent Transportation Systems (ITSC)},
	author = {Cui, Can and Ma, Yunsheng and Lu, Juanwu and Wang, Ziran},
	year = {2023},
}

License

Distributed under the MIT License. See LICENSE for more information.


Acknowledgement

Our work builds upon the following inspiring open-source projects:
