Robotics.md

File metadata and controls

111 lines (51 loc) · 7.04 KB

Survey

  1. (2023.12) Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [Paper]
  2. (2024.4) What Foundation Models can Bring for Robot Learning in Manipulation: A Survey [Paper]
  3. (2024.8) Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [Paper]
  4. (2024.8) A Survey of Embodied Learning for Object-Centric Robotic Manipulation [Paper]
  5. (2024.8) Embodied-AI with large models: research and challenge [Paper] [In Chinese]

Simulation Platform

  1. (2020.9 & 2022.11) Robosuite: A Modular Simulation Framework and Benchmark for Robot Learning [Paper] [Project]
  2. (2021.3) SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation [Paper] [Project]
  3. (2023.3) FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation [Paper] [Project]
  4. (2024.2) Genie: Generative Interactive Environments [Paper] [Project]
  5. (2024.6) RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots [Paper] [Project]

Physics + Robotics

  1. (2019.6) DensePhysNet: Learning Dense Physical Object Representations via Multi-step Dynamic Interactions [Paper] [Project]
  2. (2023.4) 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes [Paper]
  3. (2023.9 & 2024.3) PHYSOBJECTS: Physically Grounded Vision-Language Models for Robotic Manipulation [Paper] [Project]
  4. (2024.7) RoboPack: Learning Tactile-Informed Dynamics Models for Dense Packing [Paper] [Project]
  5. (2024.7) TAPVid-3D: A Benchmark for Tracking Any Point in 3D [Paper] [Project]

Foundation Models / VLA

  1. (2023.10 & 2024.6) RT-X Model: Robotic Learning Datasets and RT-X Models [Paper] [Project]
  2. (2023.12) ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation [Paper] [Project]
  3. (2024.3) ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models [Paper]
  4. (2024.5) Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation [Paper]
  5. (2024.5) SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation [Paper] [Project]
  6. (2024.5) Octo: An Open-Source Generalist Robot Policy [Paper] [Project]
  7. (2024.6) RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation [Paper] [Project]
  8. (2024.6) LLaRA: Supercharging Robot Learning Data for Vision-Language Policy [Paper] [Project]
  9. (2024.6) ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data [Paper] [Project]
  10. (2024.6) OpenVLA: An Open-Source Vision-Language-Action Model [Paper] [Project]
  11. (2024.8) Actra: Optimized Transformer Architecture for Vision-Language-Action Models in Robot Learning [Paper]
  12. (2024.8) In-Context Imitation Learning via Next-Token Prediction [Paper] [Project]

Reinforcement Learning

  1. (2024.6) MEReQ: Max-Ent Residual-Q Inverse RL for Sample-Efficient Alignment from Intervention [Paper]

Energy Based Learning

  1. (2021.9) Implicit Behavioral Cloning [Paper]

Transformer Based Learning

  1. (2022.9 & 2022.11) PERCEIVER-ACTOR: A Multi-Task Transformer for Robotic Manipulation [Paper]
  2. (2024.6) RVT-2: Learning Precise Manipulation from Few Demonstrations [Paper] [Project]

Diffusion Based Learning

  1. (2023.1 & 2023.3) Imitating Human Behaviour with Diffusion Models [Paper] [Project]
  2. (2023.8 & 2024.5) Composable Part-Based Manipulation [Paper] [No code yet!]
  3. (2023.12) ChainedDiffuser: Unifying Trajectory Diffusion and Keypose Prediction for Robotic Manipulation [Paper] [Project]
  4. (2024.1 & 2024.5) DiffClone: Enhanced Behaviour Cloning in Robotics with Diffusion-Driven Policy Learning [Paper] [Project]
  5. (2024.2) 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations [Paper] [Project]
  6. (2024.3) Diffusion Policy: Visuomotor Policy Learning via Action Diffusion [Paper] [Project]
  7. (2024.6) 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations [Paper] [Project]
  8. (2024.7) Potential Based Diffusion Motion Planning [Paper] [Project]
  9. (2024.7) Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion [Paper] [Project]

Others

  1. (2023.12 & 2024.7) Any-point Trajectory Modeling for Policy Learning [Paper] [Project]