AFFGA-Net is a high-performance network that predicts the quality and pose of grasps at every pixel of an input RGB image.
This repository contains the dataset used to train AFFGA-Net and the program used to label grasps.
High-performance Pixel-level Grasp Detection based on Adaptive Grasping and Grasp-aware Network
Dexin Wang, Chunsheng Liu, Faliang Chang, Nanjun Li, and Guangxin Li
This paper has been accepted by IEEE Trans. Ind. Electron.
This code was developed with Python 3.6 on Ubuntu 16.04. The main Python requirements:
pytorch (version 1.2 or higher)
opencv-python
mmcv
numpy
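Assuming a standard pip environment, the dependencies can be installed with something like the following; version pins other than PyTorch are not specified here.

```
pip install "torch>=1.2" opencv-python mmcv numpy
```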
- Download and extract the Cornell and Clutter datasets.
- Run `generate_grasp_mat.py` to convert `pcd*Label.txt` into `pcd*grasp.mat`. Both files represent the same labels; the `.mat` format is simply more convenient for AFFGA-Net to read. (A sketch for inspecting the converted labels follows this list.)
- Put all the samples of the Cornell and Clutter datasets in the same folder, and put the `train-test` folder in the parent directory of the dataset, as follows:
```
D:\path_to_dataset\
├─cornell_clutter
│  ├─pcd0100grasp.mat
│  └─pcd0100r.png
│  ...
│  ├─pcd2000grasp.mat
│  └─pcd2000r.png
├─train-test
│  ├─train-test-all
│  ├─train-test-cornell
│  ├─train-test-mutil
│  └─train-test-single
├─other_files
```
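To sanity-check the label conversion from the second step above, the generated `.mat` files can be inspected with SciPy. This is only a hedged sketch: the key names stored in each file are defined by `generate_grasp_mat.py`, so the loop below simply prints whatever arrays it finds.

```python
import scipy.io as scio

# Load one converted label file; the path matches the folder layout above.
label = scio.loadmat('cornell_clutter/pcd0100grasp.mat')

# Print the stored arrays (the exact keys are defined by generate_grasp_mat.py).
for key, value in label.items():
    if not key.startswith('__'):
        print(key, getattr(value, 'shape', value))
```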
Some example pre-trained models for AFFGA-Net can be downloaded from here.
The models were trained on the Cornell and Clutter datasets using the RGB images.
The zip file contains the full saved model from `torch.save(model, path)`.
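Because the checkpoint is a full model saved with `torch.save(model, path)` rather than just a `state_dict`, it can be restored directly with `torch.load`. A minimal sketch, where `affga_net.pth` is only a placeholder for the downloaded file:

```python
import torch

# Restore the full serialized model (architecture + weights).
# 'affga_net.pth' is a placeholder for the downloaded checkpoint path.
model = torch.load('affga_net.pth', map_location='cpu')
model.eval()  # switch to inference mode before visualising grasps
```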
Training is done by the `train_net.py` script.
Some basic examples:
```
python train_net.py --dataset-path <Path To Dataset>
```
Trained models are saved in `output/models` by default, with the validation score appended to the filename.
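Since each saved checkpoint carries its validation score in the filename, the best model can be selected programmatically. This is a hedged sketch that assumes the score is the last decimal number in the name; the actual naming convention used by `train_net.py` may differ.

```python
import glob
import re

def best_checkpoint(model_dir='output/models'):
    """Return the checkpoint whose filename contains the highest score."""
    best_path, best_score = None, -1.0
    for path in glob.glob(f'{model_dir}/*'):
        scores = re.findall(r'\d+\.\d+', path)  # assumed: score is the last number in the name
        if not scores:
            continue
        score = float(scores[-1])
        if score > best_score:
            best_path, best_score = path, score
    return best_path, best_score

print(best_checkpoint())
```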
Visualisation of the trained networks is done using the `demo.py` script.
Modify the path of the pre-trained model (`model`) before running.
Some output examples of AFFGA-Net are under `demo\output`.
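If you want to run inference outside of `demo.py`, a rough sketch is shown below. The preprocessing and the structure of the network outputs are assumptions here (the real input pipeline and prediction heads are defined by the repository code), so treat this only as a starting point for inspecting the pixel-wise grasp maps.

```python
import cv2
import torch

model = torch.load('affga_net.pth', map_location='cpu')  # placeholder checkpoint name
model.eval()

# pcd0100r.png is one of the RGB samples from the dataset folder above.
rgb = cv2.cvtColor(cv2.imread('pcd0100r.png'), cv2.COLOR_BGR2RGB)
x = torch.from_numpy(rgb.transpose(2, 0, 1)).float().unsqueeze(0) / 255.0  # 1 x 3 x H x W

with torch.no_grad():
    outputs = model(x)

# Inspect the prediction maps; the number and meaning of the heads
# (grasp quality, angle, width, ...) are defined by the network itself.
if isinstance(outputs, (list, tuple)):
    for i, out in enumerate(outputs):
        print(i, out.shape)
else:
    print(outputs.shape)
```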
Future work