rectangle-graspnet-multiObject-multiGrasp is a modified version of grasp_multiObject_multiGrasp by fujenchu. We have made some adjustments to the original code in order to apply it to the GraspNet dataset.
The code in this repo is mainly based on grasp_multiObject_multiGrasp.
- Clone the code
git clone https://github.com/graspnet/rectangle-graspnet-multiObject-multiGrasp
cd rectangle-graspnet-multiObject-multiGrasp/grasp_multiObject_multiGrasp
- Prepare the environment (requires Anaconda or Miniconda)
conda env create -f grasp_env.yaml
conda activate grasp
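The detector code builds on a TensorFlow implementation of Faster R-CNN, so the environment created from grasp_env.yaml should provide TensorFlow. The following optional sanity check is only a sketch; the exact packages and versions come from grasp_env.yaml:

```python
# Optional check that the "grasp" environment is usable.
# Assumes TensorFlow is installed by grasp_env.yaml (the detector is based on tf-faster-rcnn).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
# A GPU is needed for reasonable demo and training speed.
print("GPU available:", tf.test.is_gpu_available())
```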
- Build Cython modules
cd lib
make clean
make
cd ..
- Install Python COCO API
cd data
git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI
make
cd ../../../..
mkdir graspnet_dataset
Then download the graspnet dataset from https://graspnet.net/datasets.html
- Move the dataset to ./graspnet_dataset
- Or you can link the path of the graspnet dataset to ./graspnet_dataset by
ln -s /path/to/graspnet ./graspnet_dataset
- Or you can modify GRASPNET_ROOT in grasp_multiObject_multiGrasp/tools/graspnet_config.py directly
NOTICE: Your dataset path should match the following structure
graspnet_dataset
`-- scenes
    |-- scene_0000
    |   |-- object_id_list.txt
    |   |-- rs_wrt_kn.npy
    |   |-- kinect
    |   |   |-- rgb
    |   |   |   `-- 0000.png to 0255.png
    |   |   |-- depth
    |   |   |   `-- 0000.png to 0255.png
    |   |   |-- label
    |   |   |   `-- 0000.png to 0255.png
    |   |   |-- annotations
    |   |   |   `-- 0000.xml to 0255.xml
    |   |   |-- meta
    |   |   |   `-- 0000.mat to 0255.mat
    |   |   |-- rect
    |   |   |   `-- 0000.npy to 0255.npy
    |   |   |-- camK.npy
    |   |   |-- camera_poses.npy
    |   |   `-- cam0_wrt_table.npy
    |   `-- realsense
    |       `-- same structure as kinect
    |-- scene_0001
    |-- ...
    `-- scene_0189
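Before moving on, you can quickly verify that the layout above is in place. The script below is only an illustration and is not part of the repository; it hard-codes the dataset root and camera name, which should match GRASPNET_ROOT and CAMERA_NAME in grasp_multiObject_multiGrasp/tools/graspnet_config.py.

```python
# Illustrative layout check for the GraspNet dataset (not part of the repo).
import os

GRASPNET_ROOT = "./graspnet_dataset"   # keep consistent with tools/graspnet_config.py
CAMERA_NAME = "kinect"                 # or "realsense"

scenes_dir = os.path.join(GRASPNET_ROOT, "scenes")
expected = ["rgb", "depth", "label", "annotations", "meta", "rect"]

for scene in sorted(os.listdir(scenes_dir)):
    cam_dir = os.path.join(scenes_dir, scene, CAMERA_NAME)
    missing = [d for d in expected if not os.path.isdir(os.path.join(cam_dir, d))]
    if missing:
        print("{}: missing {}".format(scene, ", ".join(missing)))
print("Checked {} scenes".format(len(os.listdir(scenes_dir))))
```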
- Download pretrained models
- Download the model from Google Drive, JBOX, or Baidu Pan (Password: v9j7)
- Move it to grasp_multiObject_multiGrasp/output/res50/train/default/
- Run demo
cd grasp_multiObject_multiGrasp/tools
python demo_graspRGD.py --net res50 --dataset grasp
cd ../..
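The demo visualizes the detected grasps as oriented rectangles drawn on the input images. If you want to post-process the detections yourself, the sketch below shows how an oriented rectangle given as (center, width, height, angle) can be converted to its four corner points; the actual output format of demo_graspRGD.py may differ, so treat the parametrization as an assumption.

```python
# Convert an oriented grasp rectangle (cx, cy, w, h, theta) to its 4 corners.
# The (cx, cy, w, h, theta) parametrization is assumed for illustration only.
import numpy as np

def rect_to_corners(cx, cy, w, h, theta):
    """Return a (4, 2) array of corner coordinates; theta is in radians."""
    dx, dy = w / 2.0, h / 2.0
    corners = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return corners.dot(rot.T) + np.array([cx, cy])

print(rect_to_corners(320.0, 240.0, 60.0, 20.0, np.pi / 6))
```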
- Choose the camera type by changing CAMERA_NAME (line 2) in grasp_multiObject_multiGrasp/tools/graspnet_config.py
- Run data processing script
cd data_process/script
python data_preprocessing.py
cd ..
- Move the processed data
mv grasp_data ../grasp_multiObject_multiGrasp/
cd ..
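data_preprocessing.py converts GraspNet's rectangle grasp labels (the rect/*.npy files) into the annotation format used by grasp_multiObject_multiGrasp. For reference only, a minimal sketch of reading such a file is shown below; it assumes the documented GraspNet rectangle format of one row per grasp, (center_x, center_y, open_point_x, open_point_y, height, score, object_id), and it is not the actual preprocessing code.

```python
# Minimal sketch: read a GraspNet rectangle label file and recover rectangle corners.
# Assumes rows of (center_x, center_y, open_point_x, open_point_y, height, score, object_id).
import numpy as np

rects = np.load("graspnet_dataset/scenes/scene_0000/kinect/rect/0000.npy")
for cx, cy, ox, oy, height, score, obj_id in rects[:5]:
    axis = np.array([ox - cx, oy - cy])        # from the center to the open point
    normal = np.array([-axis[1], axis[0]])     # perpendicular to the axis
    normal = normal / (np.linalg.norm(normal) + 1e-8) * (height / 2.0)
    corners = np.array([
        [ox + normal[0], oy + normal[1]],
        [ox - normal[0], oy - normal[1]],
        [2 * cx - ox - normal[0], 2 * cy - oy - normal[1]],
        [2 * cx - ox + normal[0], 2 * cy - oy + normal[1]],
    ])
    print("object {:d}, score {:.2f}".format(int(obj_id), float(score)))
    print(corners.round(1))
```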
- Download the res50 pretrained model
- Download the model from Google Drive, JBOX, or Baidu Pan (Password: tl84)
- Move the res50.ckpt file to grasp_multiObject_multiGrasp/data/imagenet_weights/
- If you have stored the pretrained models in grasp_multiObject_multiGrasp/output/res50/train/default/, make sure that directory is empty before training
- You can rename the directory. For example:
mv grasp_multiObject_multiGrasp/output/res50 grasp_multiObject_multiGrasp/output/res50_pretrained
- Or you can move the directory grasp_multiObject_multiGrasp/output/res50/ to somewhere else
- Training
cd grasp_multiObject_multiGrasp
./experiments/scripts/train_faster_rcnn.sh 0 graspRGB res50