This repository highlights the integration of neural network-based control with PID and MPC approaches in the AirSim simulator to enhance steering inputs for autonomous vehicles. Employing imitation learning and a hybrid neural network architecture, the project aims to create a robust and unbiased model for improved autonomous vehicle control.


Adaptive-Neural-Network-Based-Control-in-AirSim-Simulator

(Image: the AirSim simulation environment)

Overview

This project focuses on enhancing autonomous vehicle control using Neural Network-Based Control in the AirSim simulator. The primary goal is to improve steering inputs for the vehicle by augmenting conventional PID and Model Predictive Controller (MPC) approaches with adaptive multi-layered neural networks. The project showcases a method that imitates the behavior of PID or MPC controllers rather than human inputs, resulting in a more robust and bias-free model.

Abstract

The project employs a systematic approach to integrate imitation learning enhanced by a Model Predictive Controller for autonomous driving in a simulated environment. The hybrid neural network architecture combines convolutional and dense layers, demonstrating efficacy in synthesizing visual and contextual information. Techniques such as dropout, data augmentation, and specific Region of Interest (ROI) selection address challenges inherent in real-world applications and diverse dataset characteristics.
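The hybrid architecture described above can be sketched with the Keras functional API. This is a minimal illustration, not the repository's exact model: the layer sizes, dropout rate, and the three-element state vector are assumptions; only the general shape (a convolutional branch for the camera image fused with a dense branch for numeric state, trained with Nadam on MSE) follows the text.

```python
# Hedged sketch of a hybrid CNN + dense fusion network for steering prediction.
# Layer sizes are illustrative; input shape (59, 255, 3) is an ROI-cropped frame.
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(59, 255, 3), name="image")   # front-camera ROI
x = layers.Conv2D(16, (3, 3), activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, (3, 3), activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.3)(x)                                  # dropout for regularization

state_in = layers.Input(shape=(3,), name="state")           # e.g. speed, throttle, brake
merged = layers.concatenate([x, state_in])
merged = layers.Dense(64, activation="relu")(merged)
steering_out = layers.Dense(1, name="steering")(merged)     # single steering command

model = Model(inputs=[image_in, state_in], outputs=steering_out)
model.compile(optimizer="nadam", loss="mse")                # Nadam + MSE, per this README
```

The fusion of a flattened convolutional branch with raw numeric inputs is what lets the network weigh visual cues against the vehicle's current dynamics when predicting steering.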

Key Features

  • Simulation Environment: Utilized the AirSim simulator for autonomous vehicle control simulation.

  • Data Collection: Gathered sensor data from the simulator, including front-facing camera images and state variables like steering, throttle, speed, and brake.

  • Neural Network Training: Implemented a multi-layered neural network architecture suited to control tasks, producing smooth, jitter-free steering outputs.

  • Evaluation Metrics: Assessed model performance using Mean Squared Error (MSE) and qualitative indicators for real-world scenarios.
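The Mean Squared Error metric listed above is straightforward; a minimal plain-Python sketch (the helper name is hypothetical, not from the repository):

```python
# Mean squared error between predicted and target steering angles (both in [-1, 1]).
def mse(predictions, targets):
    """Average of squared differences between two equal-length sequences."""
    assert len(predictions) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

error = mse([0.10, 0.20, -0.05], [0.10, 0.40, -0.05])  # only the middle sample is off
```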

Methodology

The project employs a systematic approach to augment conventional PID and MPC controllers with adaptive multi-layered neural networks. The neural network architecture combines convolutional layers for image analysis with additional numerical data, resulting in improved steering inputs. The method imitates the behavior of controllers rather than human inputs, ensuring unbiased data distribution and jitter-free outputs.
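The controller being imitated starts from a conventional PID loop. A minimal sketch is below; the gains, timestep, and the use of cross-track error as the input are illustrative assumptions, not the repository's tuned values:

```python
# Minimal PID steering controller. The error would typically be the cross-track
# error to the reference path; the output is clamped to AirSim's steering range.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, out))   # clamp to [-1, 1]

controller = PID(kp=0.4, ki=0.1, kd=0.02, dt=0.1)
steer = controller.step(0.5)  # respond to a 0.5 cross-track error
```

Because the network imitates this controller's output rather than a human driver, the training labels are deterministic and free of human bias or hand tremor.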

Experiments

  • Data Collection Strategies: Explored data collection using LIDAR-based obstacle detection and waypoint-driven scenarios with PID and MPC controllers.

    Data Collected: https://drive.google.com/drive/folders/1_nsHW8zgRbLLXc5W6bpPYwH0OgZFlNOx

  • Fine-Tuning and Optimization: Experimentation with dropout values, ROI selection, and optimization using the Nadam optimizer to enhance model performance.
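The ROI selection and augmentation experiments above can be sketched in plain Python. The row indices and the list-of-rows image layout are assumptions for illustration; the horizontal flip also negates the steering label so left and right turns stay balanced:

```python
# Two preprocessing steps: crop a Region of Interest (drop sky/hood rows) and
# horizontal-flip augmentation, which mirrors the frame and negates steering.
def crop_roi(image, top, bottom):
    """Keep only rows [top, bottom) of an image stored as a list of rows."""
    return image[top:bottom]

def flip_example(image, steering):
    """Mirror the image left-right and negate the steering angle."""
    flipped = [row[::-1] for row in image]
    return flipped, -steering

frame = [[1, 2, 3],   # sky row (cropped away by the ROI)
         [4, 5, 6],
         [7, 8, 9]]
roi = crop_roi(frame, 1, 3)
aug, aug_steer = flip_example(roi, 0.25)
```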

Limitations

  • Simulation-to-reality gap and environmental variability.
  • Real-time adaptability constraints.
  • Dependency on vision-based perception.

Conclusion

This project successfully integrates imitation learning with Model Predictive Control, demonstrating a comprehensive approach to neural network architecture, training, and deployment. Challenges identified provide valuable insights for future exploration. The robustness and adaptability of the model, as well as its real-world limitations, contribute to ongoing research in deep learning for autonomous vehicles.

Testing Results

Demo videos (linked in the repository):

  • Gradual Turns
  • Sharp Turns
  • MPC Output - Scene 1
  • MPC Output - Scene 2

How to Use

Environment Setup

  1. Clone the repository.
  2. Install Anaconda with Python 3.5 or higher.
  3. Install CNTK or TensorFlow.
  4. Install h5py.
  5. Install Keras and configure the Keras backend to work with TensorFlow (default) or CNTK.
  6. Install AzCopy. Be sure to add the location for the AzCopy executable to your system path.
  7. Install the other dependencies. From your Anaconda environment, run "InstallPackages.py" as root or administrator. This installs the following packages into your environment:
    • jupyter
    • matplotlib v. 2.1.2
    • image
    • keras_tqdm
    • opencv
    • msgpack-rpc-python
    • pandas
    • numpy
    • scipy

Running the package

  1. To generate waypoints for the Model Predictive Controller, run python waypoints.py and drive the car manually along both complex and straight paths.
  2. Run python client_controller.py to run the Model Predictive Controller. Start recording while this command runs to collect the dataset.
  3. Open DataExplorationAndPreparation.ipynb and run all the cells.
  4. Open TrainModel.ipynb and run all the cells to train the model. Modify the Region of Interest and various hyperparameters according to the dataset size and needs.
  5. Run drive_model.py to test the model.
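Step 3's data preparation consumes the recording collected in step 2. A hedged sketch of loading such a log into (image, state, label) examples; the column names and CSV layout here are assumptions and should be adjusted to match the actual AirSim recording:

```python
# Parse a driving log into training examples: (image filename, state vector,
# steering label). Column names and separator are assumptions for illustration.
import csv
import io

SAMPLE_LOG = """image_file,speed,throttle,brake,steering
img_0.png,4.2,0.5,0.0,-0.03
img_1.png,4.6,0.5,0.0,0.10
"""

def load_records(text):
    """Return a list of (image_file, [speed, throttle, brake], steering)."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        state = [float(row["speed"]), float(row["throttle"]), float(row["brake"])]
        records.append((row["image_file"], state, float(row["steering"])))
    return records

records = load_records(SAMPLE_LOG)
```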

References

  1. Airsim: https://github.com/microsoft/AutonomousDrivingCookbook/tree/master/AirSimE2EDeepLearning
  2. Model predictive controller: https://github.com/asap-report/carla/tree/racetrack/PythonClient/racetrack
