[Project Page] [Paper]
Music-driven choreography is a challenging problem with a wide variety of industrial applications. Recently, many methods have been proposed to synthesize dance motions from music for a single dancer. However, generating dance motion for a group remains an open problem. In this paper, we present GDANCE, a new large-scale dataset for music-driven group dance generation. Unlike existing datasets that only support single-dancer choreography, our new dataset contains group dance videos, hence supporting the study of group choreography. We propose a semi-autonomous labeling method with humans in the loop to obtain the 3D ground truth for our dataset. The proposed dataset consists of 16.7 hours of paired music and 3D motion from in-the-wild videos, covering 7 dance styles and 16 music genres. We show that naively applying a single-dancer generation technique to create group dance motion may lead to unsatisfactory results, such as inconsistent movements and collisions between dancers. Based on our new dataset, we propose a new method that takes an input music sequence and a set of 3D positions of dancers to efficiently produce multiple group-coherent choreographies. We propose new evaluation metrics for measuring group dance quality and perform intensive experiments to demonstrate the effectiveness of our method. Our code and dataset will be released to facilitate future research on group dance generation.
[Download] The dataset can be downloaded at Data
[Updated] The music and dance labels are now available at Labels
The data directory is organized as follows:
- split_sequence_names.txt:
- a txt file containing the separate sequence names in the data (each sequence has a unique name or id)
- musics:
- contains the raw music .wav file of each sequence with the corresponding name. The music frames are aligned with the motion frames.
- motions_smpl:
- contains the motion file of each sequence with the corresponding name; the motion is provided in .pkl format.
- Each data dictionary mainly includes the following items:
- 'smpl_poses': shape [num_persons x num_frames x 72]: the motions contain 72-D vector pose sequences in the SMPL pose format (24 joints).
- 'root_trans': shape [num_persons x num_frames x 3]: sequences of root translation.
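For reference, the sketch below shows one way to enumerate the sequences listed in split_sequence_names.txt and build the paired music/motion paths. This is a minimal sketch, not part of the official tooling: the DATA_DIR placeholder and the use of scipy.io.wavfile for reading audio are illustrative assumptions.
import os
import pickle
from scipy.io import wavfile

DATA_DIR = "<DATA_DIR>"  # root folder containing musics/, motions_smpl/, split_sequence_names.txt

# Read the list of sequence names (one unique name per line).
with open(os.path.join(DATA_DIR, "split_sequence_names.txt")) as f:
    sequence_names = [line.strip() for line in f if line.strip()]

for name in sequence_names:
    music_path = os.path.join(DATA_DIR, "musics", name + ".wav")
    motion_path = os.path.join(DATA_DIR, "motions_smpl", name + ".pkl")
    sample_rate, audio = wavfile.read(music_path)       # raw waveform, aligned with the motion frames
    motion = pickle.load(open(motion_path, "rb"))       # dict with 'smpl_poses' and 'root_trans'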
Here is an example Python script to read a motion file:
import pickle
import numpy as np

# Load the motion dictionary of one sequence.
data = pickle.load(open("sequence_name.pkl", "rb"))
print(data.keys())

smpl_poses = data['smpl_poses']  # [num_persons x num_frames x 72] SMPL pose parameters (axis-angle)
smpl_trans = data['root_trans']  # [num_persons x num_frames x 3] root translations
# ... you may then utilize the poses with the SMPL forward function: https://github.com/vchoutas/smplx
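Building on the comment above, here is a minimal sketch of running the SMPL forward pass with the smplx package to recover 3D joint locations and mesh vertices from the stored poses. Selecting the first dancer, the zero betas (neutral body shape), and the model path are illustrative assumptions, not part of the dataset itself.
import pickle
import torch
import smplx

data = pickle.load(open("sequence_name.pkl", "rb"))
pose = torch.from_numpy(data['smpl_poses'][0]).float()    # first dancer: [num_frames, 72]
transl = torch.from_numpy(data['root_trans'][0]).float()  # first dancer: [num_frames, 3]
num_frames = pose.shape[0]

# Load the SMPL body model (placeholder path; see <SMPL_DIR> below).
model = smplx.SMPL(model_path="<SMPL_DIR>/SMPL_FEMALE.pkl", batch_size=num_frames)

output = model(
    global_orient=pose[:, :3],          # root orientation (first of the 24 joints)
    body_pose=pose[:, 3:],              # remaining 23 joints, 69-D axis-angle
    transl=transl,                      # root translation
    betas=torch.zeros(num_frames, 10),  # neutral body shape (assumption)
)
joints = output.joints.detach().numpy()      # per-frame 3D joint locations
vertices = output.vertices.detach().numpy()  # per-frame SMPL mesh vertices
This is roughly the step the visualization scripts below perform before plotting.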
We provide demo code for loading and visualizing the motions.
First, you need to download the SMPL model (v1.0.0) and rename the model files for visualization. The directory structure of the data is expected to be:
<DATA_DIR>
├── motions_smpl/
├── musics/
└── split_sequence_names.txt
<SMPL_DIR>
├── SMPL_MALE.pkl
└── SMPL_FEMALE.pkl
Then run the following to install the necessary packages:
pip install scipy torch smplx chumpy vedo trimesh
pip install numpy==1.23.0
The following command first calculates the SMPL joint locations (from the joint rotations and root translation) and then plots them in a 3D figure in real time.
python vis_smpl_kpt.py \
--data_dir <DATA_DIR>/motions_smpl \
--smpl_path <SMPL_DIR>/SMPL_FEMALE.pkl \
--sequence_name sequence_name.pkl
The following command calculates the SMPL meshes and visualizes them in 3D.
python vis_smpl_mesh.py \
--data_dir <DATA_DIR>/motions_smpl \
--smpl_path <SMPL_DIR>/SMPL_FEMALE.pkl \
--sequence_name sequence_name.pkl
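If you prefer offline inspection over the interactive viewer, the sketch below exports a single frame of the SMPL mesh to an .obj file using trimesh (already included in the package list above). The choice of the first dancer and first frame, and the output filename, are arbitrary and only for illustration.
import pickle
import torch
import smplx
import trimesh

data = pickle.load(open("sequence_name.pkl", "rb"))
pose = torch.from_numpy(data['smpl_poses'][0, :1]).float()    # first dancer, first frame: [1, 72]
transl = torch.from_numpy(data['root_trans'][0, :1]).float()  # [1, 3]

model = smplx.SMPL(model_path="<SMPL_DIR>/SMPL_FEMALE.pkl", batch_size=1)
out = model(global_orient=pose[:, :3], body_pose=pose[:, 3:], transl=transl)

# Build a triangle mesh from the SMPL vertices and faces, then save it to disk.
mesh = trimesh.Trimesh(vertices=out.vertices[0].detach().numpy(), faces=model.faces)
mesh.export("frame0.obj")  # open with any standard mesh viewer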
- Dataset
- Baseline model & training: TBD
If you use this code as part of any published research, we'd really appreciate it if you could cite the following paper:
@inproceedings{aiozGdance,
author = {Le, Nhat and Pham, Thang and Do, Tuong and Tjiputra, Erman and Tran, Quang D. and Nguyen, Anh},
title = {Music-Driven Group Choreography},
booktitle = {CVPR},
year = {2023},
}
We also take a further step toward improving the performance of the group dance generation model in GCD (https://github.com/aioz-ai/GCD). If you find this solution useful, please consider citing the following paper:
@article{le2023controllable,
title={Controllable Group Choreography Using Contrastive Diffusion},
author={Le, Nhat and Do, Tuong and Do, Khoa and Nguyen, Hien and Tjiputra, Erman and Tran, Quang D and Nguyen, Anh},
journal={ACM Transactions on Graphics (TOG)},
volume={42},
number={6},
pages={1--14},
year={2023},
publisher={ACM New York, NY, USA}
}
Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use AIOZ-GDANCE data, model and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
This repo uses visualization code from AIST++.