
3D-mapping-and-object-segmentation

Indoor navigation has become a crucial capability for robots deployed in indoor environments: mapping the environment allows an autonomous robot to detect and avoid obstacles. We present a novel approach that uses RGB images to segment indoor environments. The method first reconstructs a 3D mesh from Multi-View Stereo (MVS) RGB images and then converts this mesh into a point cloud for environmental segmentation, producing a map that can be used for robot navigation. We carry out experiments to establish a baseline for our method, present our findings, and outline avenues for future work.
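As a rough illustration of the mesh-to-point-cloud-to-segmentation stage of this pipeline, the sketch below uses Open3D to sample a reconstructed mesh into a point cloud and cluster it with DBSCAN. This is not the repository's implementation: the input file name (mesh.ply), the sampling density, and the eps/min_points values are illustrative assumptions, and the actual MVS reconstruction and segmentation steps may differ.

```python
# Minimal sketch (assumptions labeled): mesh -> point cloud -> cluster-based segmentation.
import numpy as np
import open3d as o3d

# Load a triangle mesh produced by an MVS reconstruction step (file name is hypothetical).
mesh = o3d.io.read_triangle_mesh("mesh.ply")
mesh.compute_vertex_normals()

# Convert the mesh into a point cloud by uniform surface sampling.
pcd = mesh.sample_points_uniformly(number_of_points=100_000)

# Cluster the point cloud; each label corresponds to one segmented region/object.
labels = np.array(pcd.cluster_dbscan(eps=0.05, min_points=20, print_progress=False))
num_clusters = labels.max() + 1
print(f"Segmented point cloud into {num_clusters} clusters")

# Color each cluster for inspection; noise points (label -1) are shown in black.
colors = np.random.rand(num_clusters + 1, 3)
colors[-1] = 0.0
pcd.colors = o3d.utility.Vector3dVector(colors[labels])
o3d.visualization.draw_geometries([pcd])
```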

Pipeline

The pipeline diagram is included in this repository as Pipeline.png; download it to view.

Paper

The full paper is included in this repository as Report.pdf; download it to view.

Outputs

Sample outputs are included in this repository as Output.png; download it to view.
