This is the repository for GraspXL, which can generate objective-driven grasping motions for 500k+ objects with different dexterous hands.

GraspXL: Generating Grasping Motions for Diverse Objects at Scale

Update

The large-scale generated motions for 500k+ objects, each with diverse objectives and (currently) MANO and Allegro hand models, are ready to download! If you are interested, just fill out this form to get access!

The code will be released soon. We will also continuously enrich the dataset (e.g., motions generated with more hand models, more grasping motions generated with different objectives, etc.) and keep you updated! Please also fill out this form if you want to be notified of any updates!

Dataset

The dataset consists of several .zip files, which contain the generated diverse grasping motion sequences for different hands on Objaverse objects, as well as the processed (scaled and decimated) object mesh files. To make the dataset easier to download, we split the recorded motion sequences into several .zip files (5 sequences per object for most objects in each .zip file) so that users can choose which to download. The formats are as follows:

Objects

object_dataset.zip
    ├── small
        ├── <object_id>
           ├── <object_id>.obj
        ...
    ├── medium
        ├── <object_id>
           ├── <object_id>.obj
        ...
    ├── large
        ├── <object_id>
           ├── <object_id>.obj
        ...

The small, medium, and large folders contain object meshes at different scales (see our paper for more details) used by the recorded sequences.

Allegro sequences

allegro_dataset_1.zip
    ├── small
        ├── <object_id>
           ├── allegro_1.npy
           ├── allegro_2.npy
           ├── allegro_3.npy
            ...
        ...
    ├── medium
        ├── <object_id>
           ├── allegro_1.npy
            ...
        ...
    ├── large
        ├── <object_id>
           ├── allegro_1.npy
            ...
        ...

Note that not every object has the same number of recorded sequences.

Each .npy file contains a single motion sequence with the following format:

data = np.load("allegro_x.npy", allow_pickle=True).item()
data['right_hand']['trans']: a numpy array of shape (frame_num, 3), the position sequence of the wrist.
data['right_hand']['rot']: a numpy array of shape (frame_num, 3), the orientation (in axis-angle) sequence of the wrist.
data['right_hand']['pose']: a numpy array of shape (frame_num, 22), where the first 6 dimensions of each frame are 0 and the remaining 16 dimensions are the joint angles.

data['object_id']['trans']: a numpy array of shape (frame_num, 3), the position sequence of the object.
data['object_id']['rot']: a numpy array of shape (frame_num, 3), the orientation (in axis-angle) sequence of the object.
data['object_id']['angle']: not used.

allegro_dataset_2.zip

Same format as above. Another group of recorded motion sequences.
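The fields above can be read back with plain numpy. The sketch below first writes a tiny synthetic sequence (all zeros, with the shapes stated above) as a stand-in for a real download, so it runs without the dataset; the file name `allegro_1.npy` and the `frame_num` value are placeholders, and in real files the object entry is keyed by the actual object id rather than a literal `'object_id'`.

```python
import numpy as np

# Synthetic stand-in for a downloaded sequence (shapes match the README).
frame_num = 4
fake = {
    "right_hand": {
        "trans": np.zeros((frame_num, 3)),   # wrist positions
        "rot": np.zeros((frame_num, 3)),     # wrist axis-angle orientations
        "pose": np.zeros((frame_num, 22)),   # 6 zero dims + 16 joint angles
    },
    "object_id": {                           # keyed by the object's id in real files
        "trans": np.zeros((frame_num, 3)),
        "rot": np.zeros((frame_num, 3)),
    },
}
np.save("allegro_1.npy", fake)

# Loading mirrors the README: np.load(...).item() yields a nested dict.
data = np.load("allegro_1.npy", allow_pickle=True).item()
hand = data["right_hand"]

wrist_pos = hand["trans"]            # (frame_num, 3)
wrist_rot = hand["rot"]              # (frame_num, 3), axis-angle
joint_angles = hand["pose"][:, 6:]   # drop the 6 zero-padding dims -> (frame_num, 16)
```

Slicing off the first 6 pose dimensions per frame leaves exactly the 16 Allegro joint angles described above.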

MANO sequences

mano_dataset_1.zip
    ├── small
        ├── <object_id>
           ├── mano_1.npy
           ├── mano_2.npy
           ├── mano_3.npy
            ...
        ...
    ├── medium
        ├── <object_id>
           ├── mano_1.npy
            ...
        ...
    ├── large
        ├── <object_id>
           ├── mano_1.npy
            ...
        ...

Note that not every object has the same number of recorded sequences.

Each .npy file contains a single motion sequence with the following format:

data = np.load("mano_x.npy", allow_pickle=True).item()
data['right_hand']['trans']: a numpy array of shape (frame_num, 3), the position sequence of the wrist.
data['right_hand']['rot']: a numpy array of shape (frame_num, 3), the orientation (in axis-angle) sequence of the wrist (the first 3 dimensions of the MANO pose parameters).
data['right_hand']['pose']: a numpy array of shape (frame_num, 45), the sequence of the remaining 45 dimensions of the MANO pose parameters.

data['object_id']['trans']: a numpy array of shape (frame_num, 3), the position sequence of the object.
data['object_id']['rot']: a numpy array of shape (frame_num, 3), the orientation (in axis-angle) sequence of the object.
data['object_id']['angle']: not used.

mano_dataset_2.zip
mano_dataset_3.zip

Same format as above. Additional groups of recorded motion sequences.
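Since the wrist orientation holds the first 3 dimensions of the MANO pose and the `pose` array holds the remaining 45, the full 48-dim MANO pose vector per frame can be recovered by concatenation. A minimal sketch, again using a synthetic all-zeros file (with the shapes stated above) in place of a real download; `mano_1.npy` and `frame_num` are placeholders:

```python
import numpy as np

# Synthetic stand-in for a downloaded MANO sequence.
frame_num = 4
fake = {
    "right_hand": {
        "trans": np.zeros((frame_num, 3)),   # wrist positions
        "rot": np.zeros((frame_num, 3)),     # global rotation = first 3 MANO pose dims
        "pose": np.zeros((frame_num, 45)),   # remaining 45 MANO pose dims
    },
}
np.save("mano_1.npy", fake)

data = np.load("mano_1.npy", allow_pickle=True).item()
hand = data["right_hand"]

# Full MANO pose per frame: global rotation (3) + articulation (45) = 48 dims,
# which is the layout a standard MANO layer expects.
full_pose = np.concatenate([hand["rot"], hand["pose"]], axis=1)
```

Together with `hand["trans"]` as the global translation, `full_pose` can be fed frame by frame to a MANO implementation to recover hand meshes.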

BibTeX Citation

@inProceedings{zhang2024graspxl,
  title={{GraspXL}: Generating Grasping Motions for Diverse Objects at Scale},
  author={Zhang, Hui and Christen, Sammy and Fan, Zicong and Hilliges, Otmar and Song, Jie},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}
