3D Robotic Mapping Platform
This wiki serves as a guide to building the "3D Robotic Mapping Platform" (3D-ROMAP). The sections below discuss the design and implementation of the robot and characterize the system's electrical and software requirements.
The goal of this project was to produce an unmanned ground vehicle (UGV) capable of capturing an environment in three-dimensional space. The mapping requirement was met using an RGB-D camera and a high-speed two-dimensional LiDAR unit. Processing and control of the UGV were handled by the NVIDIA Jetson TX1 high-performance embedded platform and the Texas Instruments Tiva C ARM Cortex-M4 microcontroller unit (MCU).
An RGB-D (red, green, blue, and depth) camera is an electro-optical sensor capable of capturing both color and depth. This can be accomplished with various technologies, but it is most commonly associated with stereo vision cameras. Stereo vision uses two cameras to triangulate the differences between their frames and extract depth from that information. To illustrate, consider human eyes: using both eyes, we can easily judge where an object is placed and determine how far our arm must reach to grasp it accurately. If you cover one eye, your brain has only half the information necessary to make an accurate measurement. This is the exact purpose of stereo vision, and the technique is realized in several ways.
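The triangulation described above reduces, for a rectified stereo pair, to the classic relation Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the pixel disparity between matched points. The sketch below illustrates the computation; the focal length, baseline, and disparity values are illustrative, not taken from the 3D-ROMAP hardware.

```python
# Sketch of stereo triangulation: recovering depth from the pixel
# disparity between matched points in the left and right frames.
# Assumes a rectified stereo pair; all numbers are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 600 px focal length, 7 cm baseline, 30 px disparity
z = depth_from_disparity(600.0, 0.07, 30.0)
print(round(z, 3))  # depth in metres
```

Note the inverse relationship: nearby objects produce large disparities, so depth resolution degrades with distance.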
Stereo vision can be divided into two main technologies: passive and active stereo vision. Passive stereo vision is analogous to human eyes, since it relies only on the ambient light entering the cameras. As light excites each camera sensor's focal plane array (FPA), the frames of both cameras are sampled and differenced, and the matched data points are triangulated to produce a depth map. Active stereo vision comes in several forms, but for the purposes of this project we focus only on our chosen technology: structured-light active IR (infrared) stereo vision.
Structured-light stereo vision combines an IR pattern projector with two passive infrared cameras. Triangulation is accomplished by measuring the distortion of the projected pattern and mathematically extracting an approximate depth by comparing that distortion against a calibrated reference.
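The pattern-distortion idea above can be sketched numerically: the shift of each projected IR dot away from its calibrated reference position acts like a disparity, so the same triangulation relation Z = f * B / d applies, with B now the projector-to-camera baseline. All values below (focal length, baseline, dot positions) are hypothetical, not calibration data from 3D-ROMAP.

```python
import numpy as np

# Sketch of structured-light depth recovery.  The distortion (column
# shift) of each projected dot relative to its calibrated reference
# position is treated as a disparity and triangulated into depth.
# All numbers are illustrative assumptions.

focal_px = 580.0     # assumed IR camera focal length (pixels)
baseline_m = 0.05    # assumed projector-to-camera baseline (metres)

# Calibrated reference columns of three projected dots, and the
# columns where the camera observed them in the current frame.
expected_cols = np.array([100.0, 200.0, 300.0])
observed_cols = np.array([120.0, 229.0, 310.0])

shift_px = observed_cols - expected_cols    # measured pattern distortion
depth_m = focal_px * baseline_m / shift_px  # per-dot depth estimate

print(depth_m)  # larger shift -> closer surface
```

Because the projector supplies its own texture, this approach works on featureless surfaces (blank walls, uniform objects) where passive stereo matching would fail.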