OpenCDA is an open co-simulation-based research/engineering framework integrated with prototype cooperative driving automation (CDA; see SAE J3216) pipelines as well as regular automated driving components (e.g., perception, localization, planning, control). It not only enables CDA evaluation in a CARLA + SUMO co-simulation environment but also provides a rich library of source code for CDA research pipelines.
In collaboration with U.S.DOT CDA Research and the FHWA CARMA Program, OpenCDA is designed and built as an open-source project to support early-stage fundamental research for CDA development. Through collaboration with CARMA Collaborative, this tool provides a unique capability to the CDA research community and will interface with the CARMA XiL tools being developed by U.S.DOT to support more advanced simulation testing of CDA features.
The key features of OpenCDA are:
- Research Pipeline: OpenCDA provides rich research pipelines (i.e., open-source code for basic and advanced CDA modules, such as platooning and cooperative perception).
- Integration: OpenCDA can utilize CARLA and SUMO separately, or integrate them for co-simulation.
- Full-stack Simulation: OpenCDA provides a simple prototype automated driving and cooperative driving platform, written entirely in Python, that contains perception, localization, planning, control, and V2X communication modules (see the sketch after this list).
- Modularity: OpenCDA is highly modularized.
- Benchmark: OpenCDA offers benchmark testing scenarios, benchmark baseline maps, state-of-the-art benchmark algorithms, and benchmark evaluation metrics.
- Connectivity and Cooperation: OpenCDA supports various levels and categories of cooperation between CAVs in simulation, which differentiates it from other single-vehicle simulation tools.
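As a rough illustration of how such a modular, cooperative pipeline fits together, the sketch below runs a toy perception → V2X sharing → planning loop for two vehicles. All class and function names here are hypothetical and purely illustrative; they are not OpenCDA's actual API.

```python
# Hypothetical sketch of a modular CDA pipeline; names are illustrative,
# not OpenCDA's actual API.
from dataclasses import dataclass, field


@dataclass
class V2XChannel:
    """Toy broadcast channel: every vehicle can read what the others shared."""
    shared_detections: dict = field(default_factory=dict)

    def broadcast(self, vehicle_id, detections):
        self.shared_detections[vehicle_id] = detections

    def receive(self, vehicle_id):
        # Return detections from all cooperating vehicles except ourselves.
        return [d for vid, msgs in self.shared_detections.items()
                if vid != vehicle_id for d in msgs]


class CooperativeVehicle:
    """One CAV running perception -> V2X fusion -> planning each step."""

    def __init__(self, vehicle_id, v2x):
        self.vehicle_id = vehicle_id
        self.v2x = v2x

    def perceive(self, sensor_frame):
        # Placeholder for an object-detection model.
        return sensor_frame.get("objects", [])

    def step(self, sensor_frame, ego_pose):
        local_objects = self.perceive(sensor_frame)
        self.v2x.broadcast(self.vehicle_id, local_objects)
        fused_objects = local_objects + self.v2x.receive(self.vehicle_id)
        # Planning and control would consume the fused object list here.
        return {"pose": ego_pose, "objects": fused_objects}


# Two cooperating vehicles sharing detections over the toy channel.
channel = V2XChannel()
cav1 = CooperativeVehicle("cav1", channel)
cav2 = CooperativeVehicle("cav2", channel)
cav1.step({"objects": ["pedestrian"]}, ego_pose=(0.0, 0.0))
print(cav2.step({"objects": ["cyclist"]}, ego_pose=(10.0, 0.0)))
```

The point of the modular layout is that any stage (e.g., the perception placeholder) can be swapped for a different algorithm without touching the rest of the loop.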
Users can refer to the OpenCDA documentation for more details.
- HD Map manager is online! It currently supports generating a rasterized map that includes road topology, traffic light information, and dynamic objects around each CAV in real time (a generic rasterization sketch follows this list). This can be used for RL planning, HD map learning, scene understanding, etc.
- Our paper OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication has been accepted by ICRA 2022! It utilizes the offline Cooperative Perception (data dumping) function in OpenCDA. Check out OpenCOOD, the benchmark codebase of OPV2V, if interested.
- CARLA 0.9.12 is now supported, along with different weather conditions.
- Better traffic management is supported: users can now set a customized range for background vehicles.
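To make the rasterized-map idea above concrete, here is a minimal NumPy sketch of painting lane-center points into an ego-centered bird's-eye-view grid. It is a generic example, not the HD Map manager's actual implementation; the 100 m window and 0.5 m/pixel resolution are assumptions for illustration only.

```python
import numpy as np

# Assumed rasterization settings: 100 m x 100 m window around the CAV at
# 0.5 m per pixel; these values are illustrative, not OpenCDA defaults.
RESOLUTION = 0.5
SIZE = int(100 / RESOLUTION)


def rasterize_lane(lane_points, ego_xy, grid=None):
    """Paint lane-center points (world coordinates) into an ego-centered grid."""
    grid = np.zeros((SIZE, SIZE), dtype=np.uint8) if grid is None else grid
    for x, y in lane_points:
        # Shift into the ego frame and convert metres to pixel indices.
        col = int((x - ego_xy[0]) / RESOLUTION + SIZE / 2)
        row = int((y - ego_xy[1]) / RESOLUTION + SIZE / 2)
        if 0 <= row < SIZE and 0 <= col < SIZE:
            grid[row, col] = 1
    return grid


# Example: a straight lane running 20 m ahead of a CAV at the origin.
lane = [(x, 0.0) for x in np.arange(0.0, 20.0, RESOLUTION)]
bev = rasterize_lane(lane, ego_xy=(0.0, 0.0))
print(bev.sum(), "lane pixels rasterized")
```

The same grid could carry additional channels (traffic-light state, dynamic objects) for downstream RL planning or map-learning tasks.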
OpenCDA consists of four major components: Cooperative Driving System, Co-Simulation Tools, Data Manager and Repository, and Scenario Manager.
Check the OpenCDA Introduction for more details.
Note: We continuously improve the performance of OpenCDA. Currently, it is mainly tested on our customized maps and the CARLA Town06 map; therefore, we DO NOT guarantee the same level of robustness on other maps.
We welcome your contributions.
- Please report bugs and improvements by submitting issues.
- Submit your contributions using pull requests. Please use this template for your pull requests.
If you are using our OpenCDA framework or code for your development, please cite the following paper:
@inproceedings{xu2021opencda,
title={OpenCDA: an open cooperative driving automation framework integrated with co-simulation},
author={Xu, Runsheng and Guo, Yi and Han, Xu and Xia, Xin and Xiang, Hao and Ma, Jiaqi},
booktitle={2021 IEEE International Intelligent Transportation Systems Conference (ITSC)},
pages={1155--1162},
year={2021},
organization={IEEE}
}
The arXiv link to the paper: https://arxiv.org/abs/2107.06260
Also, under this LICENSE, OpenCDA is for non-commercial research only. Researchers may modify the source code for their own research only. Contracted work that generates corporate revenue and other general commercial use are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.
OpenCDA is supported by the UCLA Mobility Lab.
- Dr. Jiaqi Ma (linkedin, UCLA Samueli)