Self-Driving Car that implements line detection, obstacle avoidance, and road sign recognition features.
Keywords: Self-Driven, RaspberryPi, Arduino, C/C++, OpenCV, Computer Vision, IoT
- Description
- Car Parts
- Programming
- Scheme of System and Connections
- Line-Detection
- Obstacle Avoidance and Road Sign Recognition
In this repository, I will share my journey of building a minimal self-driving car. This project was inspired by my dream of building a full-scale self-driving car that uses only cameras, to make the technology accessible to more people in the future. To that end, only an 8 MP Raspberry Pi camera was used for real-time image processing; no other sensors were used for navigation or data gathering. Features of the car:
- Line Detection
- Obstacle Avoidance
- Road Sign Recognition
Here is the list of parts and their links:
- Raspberry Pi 4 Model B
- Arduino UNO R3
- H-Bridge Motor Drive Controller
- Raspberry Pi Camera Module V2 8Mp
- Solar Power Bank
- Robot Smart Car Chassis Kits
- Mini HDMI to HDMI cable
- USB C Cable
- Pi Camera Cable
- Ribbon Cables
The main programming languages of the project are C++ and Arduino (C). For image capturing and processing, as well as the Computer Vision algorithms, the open-source OpenCV C++ library was used.
Firstly, the Arduino was programmed and tested without any master device (in this project, the Raspberry Pi). The motion of the car is controlled by changing PWM signal values (0 to 255, where 255 corresponds to the highest voltage). First, the Arduino pins are declared and the speed of the car for each instruction is defined.
//Left side of Motors
const int EnL = 5; //Pin numbers on the Arduino
const int HighL = 7;
const int LowL = 11;
//Right side of Motors
const int EnR = 6;
const int HighR = 9;
const int LowR = 8;
//Forward Motion
void Forward(){
digitalWrite(HighL, HIGH);
digitalWrite(LowL, LOW);
digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW);
analogWrite(EnL, 250);
analogWrite(EnR, 250);
}
Since the car has no steering mechanism, turning is achieved by driving the motors on each side with opposite voltages. Example:
//Steering to the right, softly
void Right_soft(){
//Phase 1: both sides reversed - back up briefly
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, LOW);
digitalWrite(LowR, HIGH);
analogWrite(EnR, 255);
analogWrite(EnL, 255);
delay(1000);
//Phase 2: left side reversed, right side forward - pivot to turn
digitalWrite(HighL, LOW);
digitalWrite(LowL, HIGH);
digitalWrite(HighR, HIGH);
digitalWrite(LowR, LOW);
analogWrite(EnR, 255);
analogWrite(EnL, 255);
delay(200);
}
After defining the motion, the Raspberry Pi was connected, booted, and configured (command list) for SSH access, the C++ IDE setup, and the camera setup. For all image capturing and processing, OpenCV was the primary library, and for data manipulation, containers from the C++ Standard Template Library (STL) were used.
Example Code for Camera Setup and Region of Interest:
//Camera setup: resolution, image properties, and FPS
void Setup(int argc, char **argv, RaspiCam_Cv &Camera){
Camera.set(CAP_PROP_FRAME_WIDTH, 400);
Camera.set(CAP_PROP_FRAME_HEIGHT, 240);
Camera.set(CAP_PROP_BRIGHTNESS, 50);
Camera.set(CAP_PROP_CONTRAST, 50);
Camera.set(CAP_PROP_SATURATION, 50);
Camera.set(CAP_PROP_GAIN, 50);
Camera.set(CAP_PROP_FPS, 0);
}

void RegionofInterest(){
//Draw the region of interest on the frame
line(Frame, Source[0], Source[1], Scalar(0,0,255), 1);
line(Frame, Source[1], Source[3], Scalar(0,0,255), 1);
line(Frame, Source[3], Source[2], Scalar(0,0,255), 1);
line(Frame, Source[2], Source[0], Scalar(0,0,255), 1);
//Perspective transformation (bird's-eye view) of the region of interest
Matrix = getPerspectiveTransform(Source, Destination);
warpPerspective(Frame, FramePerspective, Matrix, Size(400,240));
}
As can be seen from the circuit schematic below, the connection between the motors and the Arduino is made through an H-Bridge in order to control the left and right motors separately. The H-Bridge is connected to the power supply, the Arduino, and the motors. The Raspberry Pi is used as the master device and the Arduino as the slave device, so the Raspberry Pi is connected to the power supply and to the Arduino, which sends the changing signals through the declared and connected pins.
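The H-bridge direction logic can be modeled in plain C++ (a minimal sketch for illustration only: the struct and function names are assumptions, not code from this project, and real behavior depends on your H-bridge wiring). Each motor side has two direction inputs; setting one HIGH and the other LOW selects forward or reverse, while equal inputs stop the motor:

```cpp
const bool HIGH_ = true, LOW_ = false;  //stand-ins for Arduino's HIGH/LOW

//The two direction inputs of one H-bridge side
struct BridgeSide { bool high; bool low; };

//Direction seen by one motor given its two H-bridge inputs:
//+1 = forward, -1 = reverse, 0 = stopped (both inputs equal)
int MotorDirection(BridgeSide s) {
    if (s.high == s.low) return 0;   //both HIGH or both LOW: no rotation
    return s.high ? 1 : -1;
}
```

This is why `Forward()` above sets `HighL`/`HighR` HIGH and `LowL`/`LowR` LOW on both sides, while the steering routines flip the inputs on one side.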
The lane detection pipeline steps:
- Pre-processing images using grayscale
- Applying Canny edge detection to the image
- Applying masking region to the image
- Applying Hough transform to the image
- Extrapolating the lines found by the Hough transform to construct the left and right lane lines
- Adding the extrapolated lines to the input image
- Finding the center of lines for navigation
The final output of this process detects the lane lines and centers the car between them. Since the algorithm measures the density of white pixels along each detected line, and this density peaks where a line ends, the line ends can be detected. The car then navigates and centers itself based on the sign of the difference between the lane center and the frame center.
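The centering logic described above can be sketched in plain C++ (a minimal illustration; the function name and the deadband threshold are assumptions, not the project's actual code):

```cpp
#include <string>

//Difference between the detected lane center and the frame center:
//positive -> lane center is to the right, negative -> to the left.
//A small deadband keeps the car driving straight when nearly centered.
std::string SteerFromOffset(int laneCenterX, int frameCenterX, int deadband = 10) {
    int diff = laneCenterX - frameCenterX;
    if (diff > deadband)  return "right";   //lane center is to the right: steer right
    if (diff < -deadband) return "left";    //lane center is to the left: steer left
    return "forward";                       //close enough to the center
}
```

In the real car, the chosen direction would be sent to the Arduino, which then calls the corresponding motion function such as `Forward()` or `Right_soft()`.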
- Computer Vision: For obstacle avoidance and road sign recognition, the cascade classifier machine learning algorithm was used.
- Cascade Classifier: Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper, “Rapid Object Detection using a Boosted Cascade of Simple Features” in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images. For more information
The model was trained with 50 positive images (taken from different angles) and 300 negative images of the route. The sample size was chosen to be 42x32 pixels, and the OpenCV library was used for training.
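Training such a cascade with OpenCV's command-line tools might look roughly like this (a hedged sketch: the file names `positives.txt`, `negatives.txt`, `samples.vec`, and the stage count are assumptions, not the project's actual values; the sample counts and 42x32 size match the numbers above):

```shell
# Pack the 50 annotated positive images into a .vec sample file
# (positives.txt lists one "image.jpg 1 x y w h" annotation per line)
opencv_createsamples -info positives.txt -num 50 -w 42 -h 32 -vec samples.vec

# Train the Haar cascade against the 300 negatives
# (negatives.txt lists one background image path per line;
# -numPos is set slightly below the total because opencv_traincascade
# consumes a few extra samples per stage)
opencv_traincascade -data cascade/ -vec samples.vec -bg negatives.txt \
    -numPos 45 -numNeg 300 -numStages 12 -w 42 -h 32
```

The resulting `cascade/cascade.xml` can then be loaded at runtime with OpenCV's `CascadeClassifier` and applied to each frame with `detectMultiScale`.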
Positive Sample | Negative Sample
---|---
Overall, it was a great experience to start from scratch and build everything by myself. I learned and practiced a lot. One possible improvement to this project would be to try different Computer Vision models on more capable devices, because, although the Raspberry Pi is a great mini-computer, it has its limitations with computationally intensive processes.