
DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects

DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects,

Chen Bao*, Helin Xu*, Yuzhe Qin, Xiaolong Wang, CVPR 2023.

DexArt is a novel benchmark and pipeline for learning multiple dexterous manipulation tasks. This repo contains the simulated environment and training code for DexArt.

DexArt Teaser

Installation

  1. Clone the repo and create a conda environment with all the Python dependencies.
git clone git@github.com:Kami-code/dexart-release.git
cd dexart-release
conda create --name dexart python=3.8
conda activate dexart
pip install -e .    # for simulation environment
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 -c pytorch    # for visualizing trained policy and training 
  2. Download the assets from Google Drive and place the assets directory at the project root (a quick sanity check is sketched below).
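
If everything imports cleanly, the installation is likely fine. Below is a minimal sanity check to run from the project root; it only verifies that PyTorch, the package installed by `pip install -e .` (assumed to be importable as `dexart`), and the assets directory are present.

import os

import torch    # installed via the conda command above
import dexart   # assumed package name, installed via `pip install -e .`

print("torch version:", torch.__version__)
print("dexart importable:", dexart.__name__)
print("assets directory present:", os.path.isdir("assets"))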

File Structure

The file structure is listed as follows:

dexart/env/: environments

assets/: task annotations, object and robot URDFs

examples/: example code to try DexArt

stable_baselines3/: RL training code modified from Stable-Baselines3

Quick Start

Example of Random Action

python examples/random_action.py --task_name=laptop

task_name: name of the environment [faucet, laptop, bucket, toilet]
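
For reference, the random-action example reduces to a standard reset/step loop. Below is a minimal sketch that assumes the environment is built through a create_env helper like the one used in examples/random_action.py; the import path and keyword arguments are assumptions, not a documented API.

from dexart.env.create_env import create_env  # assumed import path, mirroring examples/random_action.py

# assumed signature: task_name selects the articulated-object task
env = create_env(task_name="laptop", use_visual_obs=True)
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()           # random action from the environment's action space
    obs, reward, done, info = env.step(action)   # gym-style step
    if done:
        obs = env.reset()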

Example for Visualizing Point Cloud Observation

python examples/visualize_observation.py --task_name=laptop

task_name: name of the environment [faucet, laptop, bucket, toilet]

Example for Visualizing Policy

python examples/visualize_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop.zip

task_name: name of the environment [faucet, laptop, bucket, toilet]

use_test_set: flag to evaluate on unseen (test) instances instead of seen (training) instances
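
Internally, visualizing a policy comes down to loading the Stable-Baselines3 checkpoint and rolling it out in the environment. A rough sketch, assuming the bundled stable_baselines3 fork keeps the standard PPO.load / predict interface and reusing the assumed create_env helper from the sketch above:

from stable_baselines3 import PPO             # the repo ships a modified copy of Stable-Baselines3
from dexart.env.create_env import create_env  # assumed helper, as in the random-action sketch

env = create_env(task_name="laptop", use_visual_obs=True)       # assumed signature
policy = PPO.load("assets/rl_checkpoints/laptop.zip", env=env)  # standard SB3 checkpoint loading
obs = env.reset()
for _ in range(200):
    action, _ = policy.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()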

Example for Evaluating Policy

python examples/evaluate_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop.zip --eval_per_instance 10

task_name: name of the environment [faucet, laptop, bucket, toilet]

use_test_set: flag to evaluate on unseen (test) instances instead of seen (training) instances

Example for Training RL Agent

python3 examples/train.py --n 100 --workers 10 --iter 5000 --lr 0.0001 \
    --seed 100 --bs 500 --task_name laptop --extractor_name smallpn \
    --pretrain_path ./assets/vision_pretrain/laptop_smallpn_fulldata.pth

n: the number of rollouts to be collected in a single episode

workers: the number of parallel simulation processes

iter: the total number of training episodes

lr: learning rate for RL training

seed: random seed for RL training

bs: batch size for the RL update

task_name: name of the training environment [faucet, laptop, bucket, toilet]

extractor_name: PointNet feature extractor architecture [smallpn, mediumpn, largepn]

pretrain_path: path to the downloaded pretrained model [default: None]
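
For orientation, the command-line options above map onto a fairly standard PPO configuration. The sketch below shows one way those flags could be wired into Stable-Baselines3 keyword arguments; the policy string, the extractor wiring, and the timestep arithmetic are illustrative assumptions, not the exact logic of examples/train.py.

from stable_baselines3 import PPO
from dexart.env.create_env import create_env  # assumed helper, as in the earlier sketches

env = create_env(task_name="laptop", use_visual_obs=True)  # assumed signature

# Illustrative CLI-flag -> hyperparameter mapping; the real train.py also selects the
# PointNet extractor via --extractor_name and loads the --pretrain_path weights.
model = PPO(
    policy="MultiInputPolicy",  # placeholder; the actual script builds a point-cloud policy
    env=env,
    learning_rate=1e-4,         # --lr
    batch_size=500,             # --bs
    seed=100,                   # --seed
    verbose=1,
)
model.learn(total_timesteps=100 * 5000)  # roughly --n rollouts per episode for --iter episodes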

Bibtex

@inproceedings{
    bao2023dexart,
    title={DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects},
    author={Chen Bao and Helin Xu and Yuzhe Qin and Xiaolong Wang},
    booktitle={Conference on Computer Vision and Pattern Recognition 2023},
    year={2023},
    url={https://openreview.net/forum?id=v-KQONFyeKp}
}

Acknowledgements

This repository uses the same code structure for the simulation environment and training code as DexPoint.
