I've always found image processing and computer vision applications fascinating, and I also love watching drone flight videos on YouTube. As my interest grew, I had to follow through and implement this project, enhancing my theoretical and working knowledge at the same time. For a comprehensive overview of the project, I suggest you check out the following article:
Let's start with the exciting part: the outcome of the project looks something like this:
> Note: unlike my other projects, this one was implemented on a Linux (Ubuntu 22.04) system
- ROS (Robot Operating System)
  Hosts the components
- MAVROS
  Serves as the communication bridge between ROS and the flight controller
- ArduPilot
  Controls the drone
- Gazebo
  Simulation environment
- darknet_ros package
  YOLOv3 model for object detection
They are structured in the following manner:
Finally, the underlying communication between the nodes is described in the following diagram:
The main components and foundations used in this project were derived from the Intelligent Quads repos; my work was figuring out how to integrate each component and put them together to get the desired outcome.
The documentation for this project can be found here.
You can either explore the Intelligent Quads repos to customize your own implementation, or reproduce the exact implementation in this project: download the components mentioned earlier from the files provided in this repo (any missing files can be derived from the source) and follow the steps below.
- Create the CMakeLists.txt and package.xml files for your ROS package, or derive and modify the ones from the sources.
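As a rough guide, a minimal package.xml might look like the sketch below; the package name, maintainer details, and dependency list are assumptions here and should be adapted to your own setup:

```xml
<?xml version="1.0"?>
<package format="2">
  <!-- Hypothetical package name; replace with your own -->
  <name>drone_sim</name>
  <version>0.1.0</version>
  <description>Gazebo drone simulation with object detection</description>
  <maintainer email="you@example.com">Your Name</maintainer>
  <license>MIT</license>

  <buildtool_depend>catkin</buildtool_depend>
  <!-- Assumed runtime dependencies for this project -->
  <depend>roscpp</depend>
  <depend>mavros</depend>
  <depend>gazebo_ros</depend>
</package>
```

After creating both files, build the workspace (e.g. with catkin_make) and source the resulting setup file so ROS can find the package.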
- Run the runway.launch launch file; this should load the world in Gazebo

  roslaunch "relative directory" runway.launch
- Import the drone_with_camera model into the Gazebo world simulation

  You should be able to see the drone present on the runway
- Initialize ArduPilot, wait a few seconds for it to load completely, and then set the mode to GUIDED

  roslaunch "relative directory" apm.launch
You can now send flight commands to the drone through ArduPilot, watch it move around, and view the camera feed.
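As an illustration, a typical test flight entered in the MAVProxy console attached to the ArduPilot SITL instance might look like the following (the takeoff altitude of 10 m is an arbitrary choice):

```
mode GUIDED
arm throttle
takeoff 10
```

The first command switches the flight mode, the second arms the motors, and the third commands a climb to the given altitude; from there, position targets can be sent through MAVROS.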
- You can now run the darknet_ros package

  roslaunch darknet_ros darknet_ros.launch
The simulation is now complete; you should see objects being detected over the camera feed.
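On top of the raw detections, you may want to post-process the results, for example keeping only confident detections and computing box centers for tracking. A minimal sketch, assuming each detection is represented as a plain dict mirroring the fields of darknet_ros's BoundingBox message (Class, probability, xmin, ymin, xmax, ymax):

```python
def filter_and_center(detections, min_prob=0.5):
    """Keep detections above min_prob and attach the box center.

    Each detection is a dict with the keys Class, probability,
    xmin, ymin, xmax, ymax (pixel coordinates).
    """
    results = []
    for det in detections:
        if det["probability"] < min_prob:
            continue
        # Center of the bounding box in pixel coordinates
        cx = (det["xmin"] + det["xmax"]) / 2.0
        cy = (det["ymin"] + det["ymax"]) / 2.0
        results.append({**det, "center": (cx, cy)})
    return results

# Example: one confident detection, one weak one
boxes = [
    {"Class": "person", "probability": 0.92,
     "xmin": 100, "ymin": 50, "xmax": 200, "ymax": 250},
    {"Class": "car", "probability": 0.30,
     "xmin": 0, "ymin": 0, "xmax": 50, "ymax": 50},
]
kept = filter_and_center(boxes)
# kept contains only the "person" box, with center (150.0, 150.0)
```

In a live system, the same logic would sit inside a ROS subscriber callback on the darknet_ros bounding-box topic; here it is shown on plain dicts so it can be run standalone.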
The documentation also covers multi-drone swarm coordination, but that part is not completely implemented.