

# AAGCN

## Abstract

Graph convolutional networks (GCNs), which generalize CNNs to more generic non-Euclidean structures, have achieved remarkable performance for skeleton-based action recognition. However, there still exist several issues in the previous GCN-based models. First, the topology of the graph is set heuristically and fixed over all the model layers and input data. This may not be suitable for the hierarchy of the GCN model and the diversity of the data in action recognition tasks. Second, the second-order information of the skeleton data, i.e., the length and orientation of the bones, is rarely investigated, although it is naturally more informative and discriminative for human action recognition. In this work, we propose a novel multi-stream attention-enhanced adaptive graph convolutional neural network (MS-AAGCN) for skeleton-based action recognition. The graph topology in our model can be either uniformly or individually learned based on the input data in an end-to-end manner. This data-driven approach increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. Besides, the proposed adaptive graph convolutional layer is further enhanced by a spatial-temporal-channel attention module, which helps the model pay more attention to important joints, frames and features. Moreover, the information of both the joints and bones, together with their motion information, is simultaneously modeled in a multi-stream framework, which brings a notable improvement in recognition accuracy. Extensive experiments on two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art by a significant margin.
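To make the "adaptive topology" idea above concrete, here is a minimal, hypothetical PyTorch sketch of an adaptive graph convolution in which the skeleton adjacency `A` is combined with a globally learned offset `B` and a per-sample, data-dependent graph `C` computed from embedded joint similarities. It is only an illustration of the idea from the abstract, not the repository's actual implementation, and it omits the spatial-temporal-channel attention module and the temporal convolutions.

```python
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    """Illustrative adaptive graph convolution: topology = A (physical) + B (learned) + C (data-dependent)."""

    def __init__(self, in_channels, out_channels, A, embed_channels=16):
        super().__init__()
        # A: (K, V, V) skeleton adjacency, one slice per spatial partition subset.
        self.register_buffer("A", A)
        K = A.size(0)
        # B: fully learnable topology shared across all samples, initialised to zero.
        self.B = nn.Parameter(torch.zeros_like(A))
        # theta/phi embed the features used to estimate the per-sample graph C.
        self.theta = nn.ModuleList([nn.Conv2d(in_channels, embed_channels, 1) for _ in range(K)])
        self.phi = nn.ModuleList([nn.Conv2d(in_channels, embed_channels, 1) for _ in range(K)])
        self.conv = nn.ModuleList([nn.Conv2d(in_channels, out_channels, 1) for _ in range(K)])

    def forward(self, x):
        # x: (N, C, T, V) -- batch, channels, frames, joints
        N, C, T, V = x.shape
        out = 0
        for k in range(self.A.size(0)):
            # Data-dependent graph C_k from the similarity of embedded joint features.
            q = self.theta[k](x).permute(0, 3, 1, 2).reshape(N, V, -1)   # (N, V, Ce*T)
            p = self.phi[k](x).reshape(N, -1, V)                          # (N, Ce*T, V)
            C_k = torch.softmax(torch.bmm(q, p), dim=-1)                  # (N, V, V)
            adj = self.A[k] + self.B[k] + C_k                             # adaptive topology
            y = torch.bmm(x.reshape(N, C * T, V), adj).reshape(N, C, T, V)
            out = out + self.conv[k](y)
        return out


# Hypothetical usage with 25 joints, 3 partition subsets and 64 frames:
A = torch.eye(25).repeat(3, 1, 1)                  # placeholder adjacency, not the real skeleton graph
layer = AdaptiveGraphConv(3, 64, A)
print(layer(torch.randn(2, 3, 64, 25)).shape)      # torch.Size([2, 64, 64, 25])
```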

## Citation

```BibTeX
@article{shi2020skeleton,
  title={Skeleton-based action recognition with multi-stream adaptive graph convolutional networks},
  author={Shi, Lei and Zhang, Yifan and Cheng, Jian and Lu, Hanqing},
  journal={IEEE Transactions on Image Processing},
  volume={29},
  pages={9532--9545},
  year={2020},
  publisher={IEEE}
}
```

## Model Zoo

We release numerous checkpoints trained with various modalities and annotations on NTURGB+D and NTURGB+D 120. The accuracy of each modality links to the corresponding weight file.

| Dataset | Annotation | GPUs | Joint Top1 | Bone Top1 | Joint Motion Top1 | Bone Motion Top1 | Two-Stream Top1 | Four-Stream Top1 |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| NTURGB+D XSub | Official 3D Skeleton | 8 | 89.0 | 89.1 | 86.9 | 86.6 | 90.8 | 91.5 |
| NTURGB+D XSub | HRNet 2D Skeleton | 8 | 89.7 | 92.2 | 88.7 | 88.8 | 92.8 | 93.0 |
| NTURGB+D XView | Official 3D Skeleton | 8 | 95.7 | 95.2 | 93.9 | 92.4 | 96.4 | 96.7 |
| NTURGB+D XView | HRNet 2D Skeleton | 8 | 97.1 | 96.8 | 95.5 | 95.9 | 97.8 | 98.2 |
| NTURGB+D 120 XSub | Official 3D Skeleton | 8 | 82.8 | 84.7 | 80.0 | 80.2 | 86.3 | 86.9 |
| NTURGB+D 120 XSub | HRNet 2D Skeleton | 8 | 80.2 | 84.2 | 80.9 | 81.1 | 84.7 | 85.5 |
| NTURGB+D 120 XSet | Official 3D Skeleton | 8 | 84.8 | 86.2 | 82.0 | 82.8 | 88.1 | 88.8 |
| NTURGB+D 120 XSet | HRNet 2D Skeleton | 8 | 86.3 | 88.2 | 85.1 | 85.1 | 89.1 | 89.9 |

## Note

1. We use the linear-scaling learning rate (Initial LR ∝ Batch Size). If you change the training batch size, remember to change the initial LR proportionally.
2. For Two-Stream results, we adopt the 1 (Joint) : 1 (Bone) fusion. For Four-Stream results, we adopt the 2 (Joint) : 2 (Bone) : 1 (Joint Motion) : 1 (Bone Motion) fusion. Both rules are sketched below.
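The two notes above can be made concrete with a small, hypothetical Python sketch. The reference LR/batch values and the `fuse` helper are purely illustrative, not utilities shipped with this repository:

```python
import numpy as np

# 1) Linear-scaling learning rate: scale the initial LR with the batch size.
base_lr, base_batch = 0.1, 128              # assumed reference values; check your own config
new_batch = 64
new_lr = base_lr * new_batch / base_batch   # -> 0.05

# 2) Weighted score fusion across streams (per-sample class scores).
def fuse(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

# Dummy score matrices: 100 test samples, 60 classes.
joint, bone, joint_motion, bone_motion = (np.random.rand(100, 60) for _ in range(4))
two_stream = fuse([joint, bone], [1, 1])
four_stream = fuse([joint, bone, joint_motion, bone_motion], [2, 2, 1, 1])
predictions = four_stream.argmax(axis=1)    # fused class predictions
```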

## Training & Testing

You can use the following command to train a model.

```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${NUM_GPUS} [optional arguments]
# For example: train AAGCN on NTURGB+D XSub (3D skeleton, Joint Modality) with 8 GPUs, with validation, and test the last and the best (with best validation metric) checkpoint.
bash tools/dist_train.sh configs/aagcn/aagcn_pyskl_ntu60_xsub_3dkp/j.py 8 --validate --test-last --test-best
```

You can use the following command to test a model.

```shell
bash tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${NUM_GPUS} [optional arguments]
# For example: test AAGCN on NTURGB+D XSub (3D skeleton, Joint Modality) with the metric `top_k_accuracy`, and dump the result to `result.pkl`.
bash tools/dist_test.sh configs/aagcn/aagcn_pyskl_ntu60_xsub_3dkp/j.py checkpoints/SOME_CHECKPOINT.pth 8 --eval top_k_accuracy --out result.pkl
```
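If you want a quick look at the dumped results, a minimal sketch is shown below. It assumes `result.pkl` is a standard pickle containing one score entry per test sample, which may differ from the actual dump format:

```python
import pickle

# Load the file written by `--out result.pkl` and inspect its structure.
with open("result.pkl", "rb") as f:
    results = pickle.load(f)
print(type(results), len(results))
```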