Repositories list (51 repositories)
FusionSense (Public)
Integrates the vision, touch, and common-sense information of foundation models, customized to the agent's perceptual needs.

CityWalker (Public)
CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos

UNav-Server (Public)

insta360_ros_driver (Public)
A ROS driver for Insta360 cameras, enabling real-time image capture, processing, and publishing in ROS environments.

MSG (Public)
[NeurIPS2024] Multiview Scene Graph (topologically representing a scene from unposed images by interconnected place and object nodes)

LoQI-VPR (Public)

DPVO (Public)

SeeDo (Public)
Human Demo Videos to Robot Action Plans

NYC-Event-VPR (Public)

LUWA (Public)

vis_nav_player (Public)

EgoPAT3Dv2 (Public)
[ICRA 2024] Official Implementation of EgoPAT3Dv2: Predicting 3D Action Target from 2D Egocentric Vision for Human-Robot Interaction

Occ4cast (Public)
Occ4cast: LiDAR-based 4D Occupancy Completion and Forecasting

folder2hdf5 (Public)

SSCBench (Public)
SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving

realsense_ROS2_interface (Public)

joy_hand_eye_ROS2 (Public)

ur_ros2 (Public)
Additions to the official UR ROS2 driver that enable teleoperation and extra visualization. Developed at the AI4CE Lab at NYU.

xarm_ros2 (Public)

usbcam_ROS2_interface (Public)

ai4ce_robot_ROS2_drivers (Public)
This repo contains all the ROS2 packages developed at the AI4CE lab for interfacing with various specialized sensors.

DeepMapping (Public)
[CVPR2019 Oral] Self-supervised Point Cloud Map Estimation

UNav_demo (Public)

SPARE3D (Public)
[CVPR2020] A Dataset for SPAtial REasoning on Three-View Line Drawings

NYC-Indoor-VPR (Public)

DeepMapping2 (Public)
[CVPR2023] DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization

LLM4VPR (Public)
Can a multimodal LLM help visual place recognition?

MARS (Public)
[CVPR2024] Multiagent Multitraversal Multimodal Self-Driving: Open MARS Dataset