In this part, you will:
- download the OpenLane and Waymo data
- compile the Waymo CLI and unpack the Waymo tfrecords
- put them together
- generate .pkl and .bin files for training and evaluation
You can download the OpenLane-V1 dataset from here, or with the OpenDataLab CLI at opendatalab (you will need about 100~200 GB of storage).
Waymo Open Dataset v1.4.2 can be downloaded here; you can use the gcloud CLI to fetch it (you will need about 1 TB of storage).
You can put them wherever you like; we will create the symbolic links later.
Just download and unzip the data; there is nothing you need to take special care of.
The OpenLane directory structure should look like:
openlane
├── images # OpenLane only contains the front view, so there is no suffix
├── lane3d_1000
└── ...
We have precompiled a Waymo metrics computation CLI in RepVF/tools/waymo_utils; you may want to test it with:
RepVF/tools/waymo_utils/compute_detection_metrics_main
If it doesn't work, please refer to this guide for compiling one yourself.
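A common reason the precompiled binary "doesn't work" is simply a lost executable bit after unpacking an archive. Below is a small sketch that checks for this before you fall back to recompiling; the helper name `check_cli` is ours, not part of RepVF:

```python
import os
import stat
import sys

def check_cli(path: str) -> bool:
    """Return True if `path` exists and carries an executable bit.

    Archives sometimes drop the executable bit; `chmod +x` fixes that
    without needing a recompile.
    """
    if not os.path.isfile(path):
        return False
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

if __name__ == "__main__":
    cli = "RepVF/tools/waymo_utils/compute_detection_metrics_main"
    if not check_cli(cli):
        print(f"{cli} is missing or not executable; try `chmod +x` or recompile")
```

If the binary is present and executable but still crashes (e.g. glibc mismatch), recompiling per the linked guide is the way to go.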
Here we convert the Waymo v1.4.2 dataset, which consists of tfrecord files, into individual files. Make sure you have enough space left on your disk (roughly the same size as waymo_format).
Install some packages first (according to https://github.com/waymo-research/waymo-open-dataset/blob/master/tutorial/tutorial.ipynb):
sudo apt-get install openexr
sudo apt-get install libopenexr-dev
Then run the unpack script and wait:
python RepVF/tools/waymo_adapt_tools.py --load_dir path-to-waymo/waymo_format --save_dir path-to-save/openlane_format --workers 16 --verbose True
In our experience, you can use 80 workers or more.
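The high worker count works because each tfrecord segment is independent, so the conversion is embarrassingly parallel. A minimal sketch of that pattern is below; `convert_segment` is a placeholder for the real per-segment work done inside waymo_adapt_tools.py (the real script uses process workers, threads just keep this sketch portable):

```python
from multiprocessing.pool import ThreadPool

def convert_segment(tfrecord_name: str) -> str:
    # Stand-in for the real work: decode one tfrecord and write out
    # its images, point clouds, and labels.
    return tfrecord_name.replace(".tfrecord", "")

def unpack_all(segments, workers=16):
    # One task per segment; more workers help until disk I/O saturates.
    with ThreadPool(workers) as pool:
        return pool.map(convert_segment, segments)
```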
Directory structure before conversion:
waymo
└── waymo_format
├── training
│ ├── segment-xxx.tfrecord
│ └── ...
└── validation
├── segment-xxx.tfrecord
└── ...
After conversion:
waymo
└── openlane_format
    ├── images_i # (i=0,1,2,3,4), Waymo has 5 camera views instead of 6
    │   ├── training
    │   │   ├── segment-xxx
    │   │   │   ├── (timestamp).jpg
    │   │   │   └── ...
    │   │   ├── segment-xxx
    │   │   └── ...
    │   └── validation
    ├── velodyne # (timestamp).bin files
    │   ├── training
    │   └── validation
    └── detection3d_1000
        ├── training
        └── validation
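Given this layout, one frame is identified by a (split, segment, timestamp) triple. The sketch below assembles the files belonging to one frame; the five camera views in images_0 .. images_4 follow the tree above, while the exact nesting under velodyne/ and the label file extension are our assumptions, so check them against your unpacked output:

```python
from pathlib import Path

def sample_paths(root: str, split: str, segment: str, timestamp: str) -> dict:
    """Collect the files that make up one frame of the unpacked data.

    images_0..images_4 hold the five camera views; the velodyne
    sub-layout and the .json label extension are assumptions here.
    """
    root = Path(root)
    return {
        "images": [
            root / f"images_{i}" / split / segment / f"{timestamp}.jpg"
            for i in range(5)
        ],
        "lidar": root / "velodyne" / split / segment / f"{timestamp}.bin",
        "label": root / "detection3d_1000" / split / segment / f"{timestamp}.json",
    }
```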
We suggest symlinking all the data into the following layout to make specifying the data root easier:
repvf_workspace
├── mmdetection3d
├── RepVF
└── data
    └── waymo
        ├── waymo_format
        └── openlane_format # link
            ├── images_i
            ├── detection3d_1000
            ├── lane3d_1000 # link
            ├── lane3d_300 # link
            └── ... # other pkl or bin files that we will generate
To achieve this, you need to:
- copy or symlink the openlane-format Waymo data as data/waymo/openlane_format
- copy or symlink lane3d_1000 and lane3d_300 (from OpenLane) under data/waymo/openlane_format/
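The two linking steps above can be sketched as follows. The `link` helper below is ours (it mirrors `ln -sfn`); the commented calls use the same placeholder paths as the rest of this guide, so substitute your actual locations:

```python
import os

def link(src: str, dst: str) -> None:
    """Create or refresh a symlink at dst pointing to src (like `ln -sfn`)."""
    parent = os.path.dirname(dst)
    if parent:
        os.makedirs(parent, exist_ok=True)
    if os.path.islink(dst):
        os.remove(dst)  # replace a stale link
    os.symlink(src, dst)

# Placeholder paths, matching the ones used earlier in this guide:
# link("path-to-save/openlane_format", "data/waymo/openlane_format")
# link("path-to-openlane/lane3d_1000", "data/waymo/openlane_format/lane3d_1000")
# link("path-to-openlane/lane3d_300",  "data/waymo/openlane_format/lane3d_300")
```

Note that the lane3d links are created inside openlane_format, so if openlane_format is itself a link, they end up in the linked-to directory; that is the intended layout.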
For the complete set, generate the .pkl and .bin files while filtering empty samples; by default this combines detection3d_1000 with lane3d_1000:
python RepVF/tools/waymo_adapt_tools/generate_pkl_waymo.py --workers 80 --filter_empty_gt --suffix filtered
python RepVF/tools/waymo_adapt_tools/create_waymo_gt_bin.py --pkl-path data/waymo/openlane_format/validation_filtered.pkl --bin-name cam_gt_filtered.bin
For the 30% subset, lane3d_300 is used:
python RepVF/tools/waymo_adapt_tools/generate_pkl_waymo.py --workers 80 --filter_empty_gt --suffix filtered_300 --lane3d lane3d_300
python RepVF/tools/waymo_adapt_tools/create_waymo_gt_bin.py --pkl-path data/waymo/openlane_format/validation_filtered_300.pkl --bin-name cam_gt_filtered_300.bin
If you do not need filtering:
python RepVF/tools/waymo_adapt_tools/generate_pkl_waymo.py --workers 80
python RepVF/tools/waymo_adapt_tools/create_waymo_gt_bin.py
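For intuition, --filter_empty_gt conceptually drops frames that carry no usable ground truth. The sketch below is our guess at that idea, not the actual implementation; the key names `gt_boxes` and `gt_lanes` are illustrative, and the real sample schema is defined by generate_pkl_waymo.py:

```python
def filter_empty(samples):
    """Keep only frames that have at least one 3D box or lane annotation.

    `gt_boxes` / `gt_lanes` are hypothetical key names used for
    illustration; consult generate_pkl_waymo.py for the real schema.
    """
    return [
        s for s in samples
        if len(s.get("gt_boxes", [])) > 0 or len(s.get("gt_lanes", [])) > 0
    ]
```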
The .pkl files are used by the mmdetection3d dataset, and the .bin file is required by the Waymo metrics computation.
repvf_workspace
├── mmdetection3d
├── RepVF
└── data
    └── waymo
        ├── waymo_format
        └── openlane_format
            ├── training_filtered.pkl
            ├── validation_filtered.pkl
            ├── cam_gt_filtered.bin
            ├── training_filtered_300.pkl
            ├── validation_filtered_300.pkl
            └── cam_gt_filtered_300.bin
You can download our demo data here; it contains a single segment for debugging purposes.