Zhenjia Xu1,
Cheng Chi1,
Benjamin Burchfiel2,
Eric Cousineau2,
Siyuan Feng2,
Shuran Song1
1Columbia University, 2Toyota Research Institute
RSS 2022
Project Page | Video | arXiv
This repository contains code for training and evaluating DextAIRity in both simulation and real-world settings on a three-UR5 robot arm setup. It has been tested on Ubuntu 18.04.
- 1 Simulation
- 2 Real World
This section walks you through setting up the CUDA-accelerated cloth simulation environment. To start, install conda, docker, and nvidia-docker.
We have prepared a conda YAML file that contains all the Python dependencies.
conda env create -f environment.yml
This codebase uses a CUDA-accelerated cloth simulator that can load arbitrary meshes to train a cloth unfolding policy. The simulator is a fork of PyFlex from FlingBot and requires an NVIDIA GPU to run. We have provided a Dockerfile in this repo for compiling and using this simulation environment for training.
docker build -t dextairity .
To launch the docker container, go to this repo's root directory, then run
export DEXTAIRITY_PATH=${PWD}
nvidia-docker run \
-v $DEXTAIRITY_PATH:/workspace/DextAIRity \
-v /path/to/your/anaconda3:/path/to/your/anaconda3 \
--gpus all --shm-size=64gb -d -e DISPLAY=$DISPLAY -e QT_X11_NO_MITSHM=1 \
--name DextAIRity -it dextairity
You might need to change --shm-size appropriately for your system.
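Since the container is started in detached mode (-d), attach to it before running the following steps (this uses the container name from the command above):
docker exec -it DextAIRity bash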
Inside the container, add conda to PATH, then activate the DextAIRity environment:
export PATH=/path/to/your/anaconda3/bin:$PATH
. activate DextAIRity
Then, at the root of this repo inside the docker container, compile the simulator with
. ./prepare.sh
. ./compile.sh
The compilation will result in a .so shared object file. ./prepare.sh sets the environment variables needed for compilation and also tells the Python interpreter to look in the build directory containing the compiled .so file.
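As a quick sanity check before the full test below, you can confirm that the compiled bindings are importable. This assumes the module is named pyflex, as in the SoftGym/FlingBot forks this simulator is based on; the import fails if the build or PYTHONPATH is wrong.
python -c "import pyflex; print(pyflex.__file__)"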
Finally, you should be able to run the testing code outside of docker.
. ./prepare.sh
python test_sim.py
If you can see a cloth inside the GUI, the simulator is set up correctly. Otherwise, check out FlingBot's repository, SoftGym's Docker guide, and Daniel Seita's blog post on installing PyFlex with Docker for more information.
In the repo's root, download and unzip the pretrained models and task configurations:
wget dextairity.cs.columbia.edu/download/pretrained_models.zip
wget dextairity.cs.columbia.edu/download/data.zip
unzip pretrained_models.zip
unzip data.zip
As described in the paper, our model is evaluated on four tasks:
- Large Rect / X-Large Rect contain rectangular cloths with at least one side larger than the reach range. Edge lengths are uniformly sampled from 0.4m to 0.75m for Large, 0.8m to 0.95m for X-Large.
- Shirts / Dresses contain a subset of shirt and dress meshes from the CLOTH3D dataset.
Configurations for each task are stored in the data folder.
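For intuition, the edge-length sampling described above corresponds to something like the sketch below. This is an illustration only: it is not the script that generated the configurations in data, and the function name is made up.
import numpy as np

# Illustration of the edge-length ranges described above (in meters).
def sample_rect_edges(task, rng=np.random.default_rng()):
    lo, hi = (0.4, 0.75) if task == "large" else (0.8, 0.95)
    return rng.uniform(lo, hi, size=2)  # (width, height)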
To evaluate DextAIRity on one of the evaluation tasks, run the following command:
python test_cloth_unfolding.py \
--grasp_checkpoint pretrained_models/DextAIRity_grasp.pth \
--blow_checkpoint pretrained_models/DextAIRity_blow.pth \
--task TASK
Please replace TASK with one of the following tasks: Test_Large_Rect, Test_XLarge_Rect, Test_Shirt, Test_Dress. Evaluation results and visualizations will be stored in the exp folder.
Note that the performance of grasping policy and blowing policy are highly coupled – grasping score is dependent on the following blowing steps, and blowing performance is affected by how the cloth is grasped. This coupling can make training unstable. To solve this issue, we designed simple heuristic policies for grasping and blowing to allow independent pre-training for each module before combining them for further fine-tuning. The heuristic blowing policy is to place the blower in the middle of the workspace facing forward. Because this heuristic policy can unfold the cloth somewhat, it provides a reasonable starting place from which to bootstrap training. The heuristic grasping policy uniformly samples 100 grasping position pairs on the cloth and selects the pair with the largest distance.
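For concreteness, a minimal sketch of this heuristic grasping policy is given below; the function name and the cloth_points input are hypothetical and not taken from this codebase. The pre-training and fine-tuning commands follow the sketch.
import numpy as np

def heuristic_grasp(cloth_points, num_pairs=100, rng=np.random.default_rng()):
    # Uniformly sample grasp-point pairs on the cloth and keep the pair
    # with the largest distance between its two points (sketch only).
    idx = rng.integers(len(cloth_points), size=(num_pairs, 2))
    p1, p2 = cloth_points[idx[:, 0]], cloth_points[idx[:, 1]]
    best = np.argmax(np.linalg.norm(p1 - p2, axis=1))
    return p1[best], p2[best]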
# grasping policy pre-training
python train_cloth_unfolding.py \
--policy DextAIRity_fixed
# blowing policy pre-training
python train_cloth_unfolding.py \
--policy DextAIRity_random_grasp
# fine-tuning
python train_cloth_unfolding.py \
--policy DextAIRity \
--grasp_checkpoint DextAIRity_fixed \
--blow_checkpoint DextAIRity_random_grasp
Our real-world system uses three UR5 arms: one equipped with an OnRobot RG2 gripper, one with a Schunk WSG50 gripper, and one with an air pump. We modify the Schunk WSG50 fingertip with a rubber tip for better cloth pinch grasping. We use two RGB-D cameras: an Azure Kinect v3 as the top-down camera, and an Intel RealSense D415 as the front camera in the cloth unfolding task or the side camera in the bag opening task.
- Calibrate the UR5 arms and the table via real_world/calibration_robot.py and calibration_table.py.
- Set up cloth size and bag size in real_world/cloth_env.py and real_world/bag_env.py.
- (Cloth unfolding) Set up stretching primitive variables in real_world/cloth_env.py. Please check out FlingBot's repository for more details.
- (Bag opening) Set up cropping size and color filter parameters in real_world/bag_env.py.
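As a starting point for the bag-opening color filter, cropping and HSV thresholding can be prototyped with OpenCV as below; the crop window and HSV bounds are placeholders, not the values used in real_world/bag_env.py.
import cv2
import numpy as np

def bag_mask(bgr_image, crop=(100, 100, 400, 400), hsv_lo=(35, 60, 60), hsv_hi=(85, 255, 255)):
    # Crop the side-camera image, then keep pixels inside the HSV range
    # (placeholder values; tune them to your bag color and camera).
    x, y, w, h = crop
    patch = bgr_image[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    return patch, mask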
Prepare checkpoints first: you can either download the pretrained models or train from scratch in simulation. Then run the following evaluation command:
python test_cloth_unfolding_real.py \
--grasp_checkpoint pretrained_models/DextAIRity_grasp.pth \
--blow_checkpoint pretrained_models/DextAIRity_blow.pth
First, collect data in the real world using a random policy.
python collect_bag_data.py
Then train a classification model on this dataset.
python train_bag_opening.py
To evaluate the model in the real world:
python test_bag_opening.py
@inproceedings{xu2022dextairity,
title={DextAIRity: Deformable Manipulation Can be a Breeze},
author={Xu, Zhenjia and Chi, Cheng and Burchfiel, Benjamin and Cousineau, Eric and Feng, Siyuan and Song, Shuran},
booktitle={Proceedings of Robotics: Science and Systems (RSS)},
year={2022}
}
This repository is released under the MIT license. See LICENSE for additional details.
- The codebase is built on top of FlingBot.
- The code for DeepLab-V3-Plus is modified from pytorch-deeplab-xception.