This project showcases real-time point cloud fusion from multiple RGB-D cameras using 2D marker-based calibration.
The algorithm achieved 90% reconstruction accuracy compared to manual scans.
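The calibration pipeline itself is not documented here, so the sketch below only illustrates one common way a 2D marker-based extrinsic calibration can work: each camera estimates its pose relative to a shared ArUco marker, and the relative extrinsic between two cameras follows from composing those poses. It assumes OpenCV 4.7+, known intrinsics, and a single marker of known size; all names and parameters are illustrative, not the project's actual code.

```python
# Minimal sketch of 2D marker-based extrinsic calibration.
# Assumptions: OpenCV >= 4.7, known intrinsics (K, dist), one ArUco
# marker of known edge length visible to every camera. Illustrative only.
import cv2
import numpy as np

MARKER_SIZE = 0.15  # marker edge length in metres (assumed)

# Marker corner positions in the marker's own frame, in the same order
# ArUco reports them: top-left, top-right, bottom-right, bottom-left.
MARKER_CORNERS_3D = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float64)

_detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def marker_pose_in_camera(gray, K, dist):
    """Return the 4x4 pose of the marker in this camera's frame."""
    corners, ids, _ = _detector.detectMarkers(gray)
    if ids is None:
        raise RuntimeError("marker not visible")
    _, rvec, tvec = cv2.solvePnP(
        MARKER_CORNERS_3D, corners[0].reshape(4, 2).astype(np.float64), K, dist)
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T

def relative_extrinsic(T_a_marker, T_b_marker):
    """Pose of camera B in camera A's frame, from the shared marker."""
    return T_a_marker @ np.linalg.inv(T_b_marker)
```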
PointCloudFusionDemo.mp4
This demo shows the algorithm's reconstruction accuracy when run on live video from two Intel RealSense L515 cameras. The application was developed in Python with Open3D rendering.
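As a rough illustration of the real-time fusion and rendering step (not the project's actual code), the sketch below assumes each camera's point cloud is already being captured (e.g. via pyrealsense2) and that its 4x4 world-from-camera extrinsic is known from the calibration. `grab_clouds` and `load_extrinsics` are hypothetical placeholders.

```python
# Hedged sketch of a fuse-and-render loop with Open3D.
import open3d as o3d

def fuse_frame(clouds, extrinsics, voxel_size=0.01):
    """Transform each camera's cloud into the world frame and merge."""
    fused = o3d.geometry.PointCloud()
    for pcd, T_world_cam in zip(clouds, extrinsics):
        fused += pcd.transform(T_world_cam)
    # Downsample so the merged cloud stays light enough for real time.
    return fused.voxel_down_sample(voxel_size)

extrinsics = load_extrinsics()  # hypothetical: 4x4 matrices from calibration

# Non-blocking Open3D visualizer so the window updates every frame.
vis = o3d.visualization.Visualizer()
vis.create_window("Fused point cloud")
display = o3d.geometry.PointCloud()
vis.add_geometry(display)

while True:
    clouds = grab_clouds()  # hypothetical: one fresh cloud per camera
    fused = fuse_frame(clouds, extrinsics)
    display.points = fused.points
    display.colors = fused.colors
    vis.update_geometry(display)
    if not vis.poll_events():
        break
    vis.update_renderer()

vis.destroy_window()
```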
MultiCameraFusionDemo.mp4
This demo visualizes the camera extrinsics obtained from the calibration. It was built in Unreal Engine 5 and displayed on a Varjo XR-3 headset.
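For illustration only, one way to hand calibrated extrinsics to a rendering engine is to export each camera's pose as a position and quaternion; the snippet below is a sketch under that assumption (file name and JSON layout are made up), with the engine-side left-handed, Z-up, centimetre conversion left to the importer.

```python
# Illustrative export of calibrated camera poses for a rendering engine.
import json
from scipy.spatial.transform import Rotation

def export_extrinsics(extrinsics, path="camera_poses.json"):
    """extrinsics: dict of camera name -> 4x4 world-from-camera matrix."""
    poses = []
    for name, T_world_cam in extrinsics.items():
        poses.append({
            "camera": name,
            "position_m": T_world_cam[:3, 3].tolist(),
            # Quaternion in SciPy's (x, y, z, w) convention.
            "rotation_xyzw": Rotation.from_matrix(
                T_world_cam[:3, :3]).as_quat().tolist(),
        })
    with open(path, "w") as f:
        json.dump(poses, f, indent=2)
```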
XRLocationDemo.mp4
This project is part of my ongoing work with VeyondMetaverse. For further details or inquiries, feel free to reach out to me at nkarki@torontomu.ca.