3D-mapping-and-object-segmentation

Indoor navigation has become a crucial capability for robots deployed in indoor environments. Mapping the environment allows an autonomous robot to detect and avoid obstacles. In this paper, we present a novel approach that uses RGB images to segment indoor environments. The method first reconstructs a 3D mesh from RGB images with Multi-View Stereo (MVS) and then converts the mesh into a point cloud for segmentation of the environment, producing a map that can be used for robot navigation. We carry out experiments to establish a baseline for our method, present our findings, and outline avenues for future work.
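As a rough illustration of the mesh-to-point-cloud and segmentation steps described above, the sketch below shows one way they could be prototyped with the Open3D library. This is not the implementation used in the report: the file name, sampling density, and RANSAC/DBSCAN parameters are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of: MVS mesh -> point cloud -> segmentation.
import open3d as o3d

# Load a 3D mesh reconstructed from Multi-View Stereo (hypothetical file name).
mesh = o3d.io.read_triangle_mesh("mvs_reconstruction.ply")
mesh.compute_vertex_normals()

# Convert the mesh into a point cloud by uniformly sampling its surface.
pcd = mesh.sample_points_uniformly(number_of_points=100_000)

# Segment the dominant plane (e.g., the floor) with RANSAC, then cluster the
# remaining points into candidate obstacles with DBSCAN.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
floor = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)
labels = objects.cluster_dbscan(eps=0.05, min_points=50)

num_clusters = max(labels) + 1 if len(labels) > 0 else 0
print(f"Detected {num_clusters} object clusters above the floor plane.")
```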

Pipeline

The pipeline diagram is included in this repository as Pipeline.png; download it to view.

Paper

The full paper is included in this repository as Report.pdf; download it to view.

Outputs

Example outputs are included in this repository as Output.png; download it to view.