Repository to store the conditional-imitation-learning-based AI that runs on CARLA. The trained model is the one used in the paper "CARLA: An Open Urban Driving Simulator".
tensorflow_gpu 1.1 or later
numpy
scipy
carla 0.8.2
PIL
Basically, run:
$ python run_CIL.py
Note that you must have a CARLA server running.
To check the other options, run:
$ python run_CIL.py --help
The dataset (24 GB) can be downloaded here.
The data is stored in HDF5 files.
Each HDF5 file contains 200 data points.
Each HDF5 file contains two "datasets":
'images_center':
The RGB images, stored at 200x88 resolution
'targets':
All the controls and measurements collected.
They are stored in this vector, in the following order:
- Steer, float
- Gas, float
- Brake, float
- Hand Brake, boolean
- Reverse Gear, boolean
- Steer Noise, float
- Gas Noise, float
- Brake Noise, float
- Position X, float
- Position Y, float
- Speed, float
- Collision Other, float
- Collision Pedestrian, float
- Collision Car, float
- Opposite Lane Inter, float
- Sidewalk Intersect, float
- Acceleration X, float
- Acceleration Y, float
- Acceleration Z, float
- Platform time, float
- Game Time, float
- Orientation X, float
- Orientation Y, float
- Orientation Z, float
- High level command, int (2 = Follow lane, 3 = Left, 4 = Right, 5 = Straight)
- Noise, boolean (whether the noise/perturbation was activated; not used)
- Camera (which camera was used)
- Angle (the yaw angle for this camera)
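As an illustration, the layout above can be read with h5py. The file name below is hypothetical, and the exact array layout (here assumed to be 88 rows x 200 columns per image, with one 28-value row per data point in "targets", indexed in the order of the list above) should be checked against the downloaded files:

```python
import h5py
import numpy as np

# Hypothetical file name; real dataset files have their own naming scheme.
FILENAME = "sample_episode.h5"

# Write a small synthetic file with the layout described above:
# 200 data points, 200x88 RGB images, and one 28-value target row each.
with h5py.File(FILENAME, "w") as f:
    f.create_dataset("images_center",
                     data=np.zeros((200, 88, 200, 3), dtype=np.uint8))
    f.create_dataset("targets",
                     data=np.zeros((200, 28), dtype=np.float32))

# Read it back and pull out a few fields by their position in the list.
with h5py.File(FILENAME, "r") as f:
    images = f["images_center"][:]   # shape (200, 88, 200, 3)
    targets = f["targets"][:]        # shape (200, 28)

steer = targets[:, 0]      # column 0: Steer
speed = targets[:, 10]     # column 10: Speed
command = targets[:, 24]   # column 24: High level command (2-5)
```

Counting from the list above, "Steer" is column 0 and "High level command" is column 24; a training loop would typically use the command column to select the active branch of the conditional network.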
If you use the conditional imitation learning model, please cite our ICRA 2018 paper:
End-to-end Driving via Conditional Imitation Learning.
Codevilla, Felipe and Müller, Matthias and López, Antonio and Koltun, Vladlen and Dosovitskiy, Alexey. ICRA 2018.
[PDF]
@inproceedings{Codevilla2018,
title={End-to-end Driving via Conditional Imitation Learning},
author={Codevilla, Felipe and M{\"u}ller, Matthias and L{\'o}pez,
Antonio and Koltun, Vladlen and Dosovitskiy, Alexey},
booktitle={International Conference on Robotics and Automation (ICRA)},
year={2018},
}