# Faraway-Frustum: Dealing with LiDAR Sparsity for 3D Object Detection using Fusion

This work improves the detection of faraway objects using a frustum-based LiDAR-camera fusion strategy.

- Paper on arXiv: https://arxiv.org/abs/2011.01404 (accepted by ITSC 2021)
- Official KITTI results: http://www.cvlibs.net/datasets/kitti/eval_object_detail.php?&result=48cc1c0c27874e2cc19cbcc76654e9a01c5403a0
## Requirements

There are two scripts for running this program, and each requires a different version of TensorFlow; check the Python script files for version details. We recommend using Anaconda to manage the TensorFlow environment. You can configure your environment with:

```
conda create -n {your environment name} tensorflow-gpu={a specific version} python=3.7
```

Anaconda will then resolve the dependencies automatically for you. (Make sure you have successfully installed the NVIDIA driver.) You also need to install the additional standard Python packages listed in `requirements.txt`.
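For example, a complete setup might look like the sketch below. The environment name and TensorFlow version are placeholders; use the version required by the script you intend to run:

```bash
# Placeholder name/version: check each script for the TensorFlow version it needs.
conda create -n faraway-frustum tensorflow-gpu=1.15 python=3.7
conda activate faraway-frustum
# Install the additional standard Python packages.
pip install -r requirements.txt
```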
Note: there are additional instructions inside the Python scripts for step 1 and step 2. Do check them.
## Usage

- Download the pre-trained Mask-RCNN model (link) and put it into the folder `detectors/mask_rcnn/`.
- Prepare the KITTI dataset: download it and arrange it as follows. In KITTI, `training` has 7481 samples and `testing` has 7518 samples.

  ```
  ├── testing
  │   ├── calib
  │   ├── image_2
  │   └── velodyne
  └── training
      ├── calib
      ├── image_2
      ├── label_2
      └── velodyne
  ```
- Run stage one (2D detection) and save the results: execute the script `step1_save_2d_results.py` to obtain the 2D detection results (boxes, masks, labels, and scores), which are saved as pickle files. You need to specify the path to the KITTI dataset with `--path_kitti` and the path for storing the 2D detection results with `--path_result` (see the example run after this list).
- Download the trained NN models for faraway pedestrian/car position detection/refinement in the frustum point cloud. The models can be downloaded here: NN models - Google Drive.
- Run stage two (frustum projection and 3D box estimation): execute the script `step2_get_kitti_results.py` to obtain the final results in KITTI txt format. It reads the pickle files saved in the previous step and generates the final results in the same directory. Again, you need to specify the path to the KITTI dataset with `--path_kitti` and the path where the 2D detection results are stored with `--path_result`; these should be the same as in the previous step. You also need to specify the path to the trained NN models; see the additional instructions in `step2_get_kitti_results.py`.
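For reference, running both stages might look like the sketch below. The dataset and result paths are placeholders, and the way to point stage two at the trained NN models is described inside the script itself:

```bash
# Stage 1: 2D detection (boxes, masks, labels, scores), saved as pickle files.
python step1_save_2d_results.py --path_kitti /path/to/kitti --path_result /path/to/results

# Stage 2: frustum projection and 3D box estimation; reads the pickle files
# above and writes final results in KITTI txt format to the same directory.
# (Set the path to the trained NN models as instructed inside the script.)
python step2_get_kitti_results.py --path_kitti /path/to/kitti --path_result /path/to/results
```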
## Evaluation

We provide the code used to compute the detection results reported in the paper; it can all be found in the `evaluation` folder. Note that the evaluation code is a mix of C++ and Python.

We also provide the TXT files of both the ground-truth labels and our method's detection results on the validation set. You can use the files in the `result` folder for an evaluation test:
- Files in `result\label\val` are the ground-truth labels for the validation subset of the KITTI dataset.
- Files in `result\ours_pedestrian` are our method's pedestrian detection results on the validation subset (`box\val` and `mask\val` contain the results of our model using the box-frustum and the mask-frustum, respectively; the same layout applies to the car results below).
- Files in `result\ours_car` are our method's car detection results on the validation subset.
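All of these TXT files follow the standard KITTI object detection format, one object per line. For reference, a single detection line looks like this (the values are purely illustrative; the last field is the detection score, which is absent in label files):

```
Pedestrian 0.00 0 -0.20 712.40 143.00 810.73 307.92 1.89 0.48 1.20 1.84 1.47 8.41 0.01 0.95
```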
Three types of evaluation results can be obtained, as explained below.
### Average IoU

- Open `average_iou.py` and do the following:
  - Give the correct PATH (e.g. `\result\ours_pedestrian\mask\val`) to your detection result files and the PATH (e.g. `\result\label\val`) to the corresponding label files in lines 157-160.
  - Define the class (`1` for pedestrian, `0` for car) in line 171.
- Run `average_iou.py` and get the results.
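With those edits in place, the run itself is a single command:

```bash
# Computes the average IoU for the class selected in line 171,
# using the paths set in lines 157-160.
python average_iou.py
```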
### mAP for faraway objects

- Open `data_process.py` to process the raw detection result files and the corresponding KITTI label files:
  - Give the PATH (e.g. `\result\ours_pedestrian\mask\val`) to the detection result files (e.g. 000000.txt, ...) and the PATH (e.g. `\result\label\val`) to the corresponding KITTI label files (e.g. 000000.txt, ...) in lines 455-456 and lines 464-465.
  - Set `fuction='eval_sub'` in line 404 and then run the code to extract the sequential detection result files (and sequential label files) for faraway objects.
- Open `mAP_toolkit/cpp/evaluate_object.cpp` and revise the following lines:
  - Change the number (e.g. 3756) in line 35 to the maximum index of the sequential result files (e.g. if your result/label files are 000000.txt, ..., 000057.txt, change the number to 57).
  - Use lines 44-46 and comment out lines 48-50.
  - Use line 61 (adjusting the IoU threshold there) and comment out line 60.
  - Change lines 783-784 to your own root PATH.
- Compile `mAP_toolkit/cpp/evaluate_object.cpp`:
  - Use `g++ -O3 -DNDEBUG -o test evaluate_object.cpp`, or use CMake with the provided `CMakeLists.txt`.
- Provide the files for evaluation:
  - Copy your sequential label files to `.../cpp/label_2/`.
  - Copy your sequential detection result files to `.../cpp/results/dt/data/`.
- Run the compiled C++ binary:
  - Open a terminal in `/cpp` and run `./test dt`.
- Calculate the mAP for faraway objects:
  - Run `.../cpp/calculate_mAP_faraway.py` to print the final mAP for 3D/BEV faraway object detection.
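Putting the toolkit steps together, the faraway evaluation might run as follows (the source paths are placeholders, and the edits to `evaluate_object.cpp` described above are assumed to have been made):

```bash
cd mAP_toolkit/cpp
# Compile the evaluator (alternatively, build with the provided CMakeLists.txt).
g++ -O3 -DNDEBUG -o test evaluate_object.cpp
# Provide the sequential files produced by data_process.py (placeholder paths).
cp /path/to/sequential/labels/*.txt label_2/
cp /path/to/sequential/results/*.txt results/dt/data/
# Run the evaluation, then print the final faraway 3D/BEV mAP.
./test dt
python calculate_mAP_faraway.py
```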
### Overall mAP

- Open `data_process.py` to process the raw detection result files and the corresponding KITTI label files. (Note: if you want to evaluate the detection results provided in the `result` folder, you can skip the steps marked (*), since the provided results of our method in `result` are already fused with a state-of-the-art detector (PV-RCNN). If you want to use the raw results generated by `step2_get_kitti_results.py` and combine them with another detector, do the (*) steps.)
  - (*) Give the PATH to our raw detection result files (e.g. 000000.txt, ...) and the PATH to the state-of-the-art detector's results (e.g. 000000.txt, ...) in lines 417-418.
  - (*) Set `fuction='fuse_result'` in line 404 and then run the code to generate our detection result files by fusing our faraway-object results with the state-of-the-art detector's results.
  - Give the PATH (e.g. `\result\ours_pedestrian\mask\val`) to our detection result files (or the PATH (e.g. `\result\label\val`) to the corresponding KITTI label files) in lines 435-436.
  - Set `fuction='eval_val'` in line 404 and then run the code to convert the detection result files (or label files) into sequential result files (or sequential label files).
- Open `mAP_toolkit/cpp/evaluate_object.cpp` and revise the following lines:
  - Change the number (e.g. 3756) in line 35 to the maximum index of the sequential result files (e.g. if your result/label files are 000000.txt, ..., 000057.txt, change the number to 57).
  - Use lines 48-50 and comment out lines 44-46.
  - Use line 60 and comment out line 61.
  - Change lines 783-784 to your own root PATH.
- Compile `mAP_toolkit/cpp/evaluate_object.cpp`:
  - Use `g++ -O3 -DNDEBUG -o test evaluate_object.cpp`, or use CMake with the provided `CMakeLists.txt`.
- Provide the files for evaluation:
  - Copy your sequential label files to `.../cpp/label_2/`.
  - Copy your sequential detection result files to `.../cpp/results/dt/data/`.
- Run the compiled C++ binary:
  - Open a terminal in `/cpp` and run `./test dt`.
- Calculate the mAP:
  - Run `.../cpp/calculate_mAP.py` to print the final mAP (easy, moderate, hard) for 3D/BEV object detection.
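The overall evaluation follows the same compile-copy-run sequence as the faraway case, but with the alternative line selections in `evaluate_object.cpp` described above; only the final step differs:

```bash
cd mAP_toolkit/cpp
./test dt
# Prints the final mAP (easy, moderate, hard) for 3D/BEV object detection.
python calculate_mAP.py
```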
## Acknowledgement

The Mask-RCNN part of our work is based on the implementation at https://github.com/matterport/Mask_RCNN.

## Contact

- Dongfang Yang: yang.3455@osu.edu
- Haolin Zhang: zhang.10749@osu.edu
- Ekim Yurtsever: yurtsever.2@osu.edu