This is the official GitHub repository for the MECCANO Dataset.
MECCANO is a multimodal dataset of egocentric videos to study human behavior understanding in industrial-like settings. Its multimodality comes from gaze signals, depth maps and RGB videos acquired simultaneously with a custom headset. You can download the MECCANO dataset and its annotations from the project web page.
To use the MECCANO Dataset in PySlowFast, please follow the instructions below:
- Install PySlowFast following the official instructions;
- Download the PySlowFast_files folder from this repository;
- Place the files "init.py", "meccano.py" and "sampling.py" in your slowfast/datasets/ folder;
- Place the files "init.py", "custom_video_model_builder_MECCANO_gaze.py" in your slowfast/models/ folder (to use the gaze signal).
Now, run the training/test with:
```
python tools/run_net.py --cfg path_to_your_config_file --[optional flags]
```
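As a quick sanity check after placing the files, the snippet below is a minimal sketch (not part of the repository) of how MECCANO clips could be loaded through PySlowFast's standard data loader. It assumes `meccano.py` registers the dataset in PySlowFast's dataset registry and that your config sets `TRAIN.DATASET` accordingly; the config path and data directory are placeholders, and the exact batch layout may differ between PySlowFast versions.

```python
# Minimal sketch, assuming meccano.py registers the dataset in PySlowFast's
# DATASET_REGISTRY and the config below sets TRAIN.DATASET to it. Paths are placeholders.
from slowfast.config.defaults import get_cfg
from slowfast.datasets import loader

cfg = get_cfg()
cfg.merge_from_file("configs/action_recognition/SLOWFAST_8x8_R50_MECCANO.yaml")
cfg.DATA.PATH_TO_DATA_DIR = "/path/to/MECCANO"   # placeholder: where frames/annotations live

train_loader = loader.construct_loader(cfg, "train")  # builds the dataset named by cfg.TRAIN.DATASET
batch = next(iter(train_loader))                       # tuple layout can vary across PySlowFast versions
inputs = batch[0]                                      # list of pathway tensors for SlowFast models
print([x.shape for x in inputs])
```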
We provide pre-extracted features of the MECCANO Dataset:
- RGB features extracted with SlowFast: [coming soon]
To use the MECCANO Dataset in Detectron2 to perform Object Detection and Recognition, please follow the instructions below:
- Install Detectron2:
```
pip install -U torch torchvision cython
pip install -U 'git+https://github.com/facebookresearch/fvcore.git' 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
git clone https://github.com/facebookresearch/detectron2 detectron2_repo
pip install -e detectron2_repo
# You can find more details at https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md
```
- Register the MECCANO Dataset by adding the following instructions to detectron2_repo/tools/train_net.py, in the main() function:
register_coco_instances("Meccano_objects_train", {}, "/path_to_your_folder/instances_meccano_train.json", "/path_to_the_MECCANO_active_object_annotations_frames/") register_coco_instances("Meccano_objects_val", {}, "/path_to_your_folder/instances_meccano_val.json", "/path_to_the_MECCANO_active_object_annotations_frames/") register_coco_instances("Meccano_objects_test", {}, "/path_to_your_folder/instances_meccano_test.json","/path_to_the_MECCANO_active_object_annotations_frames/")
Now, run the training/test with:
```
python tools/train_net.py --config-file path_to_your_config_file --[optional flags]
```
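After training, the following is a minimal sketch of how the resulting detector could be used for inference on a single frame with Detectron2's DefaultPredictor; the checkpoint path, frame file name and score threshold are placeholders, not values prescribed by this repository.

```python
# Minimal sketch (paths and threshold are placeholders): run the trained detector on one frame.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("configs/active_object_recognition/meccano_active_objects.yaml")
cfg.MODEL.WEIGHTS = "output/model_final.pth"          # placeholder: your trained checkpoint
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5           # confidence threshold for predictions

predictor = DefaultPredictor(cfg)
image = cv2.imread("/path_to_the_MECCANO_active_object_annotations_frames/00001.jpg")  # placeholder frame
outputs = predictor(image)                             # dict with an "instances" field
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```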
To use the MECCANO Dataset with RULSTM, please follow the instructions below:
- Download the source code of RULSTM following the official instructions;
- Download the RULSTM_files folder from this repository;
- Place the files "meccano_dataset.py" and "main.py" in your RULSTM main folder;
- Download the .csv files with the action annotations of the MECCANO dataset from here;
- Download the pre-extracted features of the MECCANO dataset from here.
Now, run the test with:
```
python main.py test ../../test_features/ models/meccano/final_fusion_model.pt --modality fusion --task anticipation --num_class 61 --img_tmpl {:05d}.jpg --meccanomulti
```
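To inspect the pre-extracted features before running the test, the following is a rough sketch that assumes the features are stored as LMDB databases of float32 vectors, as in the original RULSTM repository; the database path and key format are assumptions and should be checked against meccano_dataset.py.

```python
# Rough sketch: read one pre-extracted feature vector from an LMDB database.
# The database path and the key format are assumptions; check meccano_dataset.py
# for the exact layout used by the MECCANO features.
import lmdb
import numpy as np

env = lmdb.open("../../test_features/rgb", readonly=True, lock=False)
with env.begin() as txn:
    key = "00001.jpg"                          # hypothetical key built from --img_tmpl {:05d}.jpg
    buf = txn.get(key.encode("utf-8"))
    if buf is not None:
        feat = np.frombuffer(buf, dtype="float32")
        print(feat.shape)                      # feature dimensionality of one frame
    else:
        print("Key not found; list the available keys with txn.cursor().")
```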
We provide pretrained models on the MECCANO Dataset for the action recognition task (only for the first version of the dataset):
architecture | depth | model | config |
---|---|---|---|
I3D | R50 | link | configs/action_recognition/I3D_8x8_R50.yaml |
SlowFast | R50 | link | configs/action_recognition/SLOWFAST_8x8_R50.yaml |
We provide pretrained models on the MECCANO Multimodal Dataset for the action recognition task:
architecture | depth | modality | model | config |
---|---|---|---|---|
SlowFast | R50 | RGB | link | configs/action_recognition/SLOWFAST_8x8_R50_MECCANO.yaml |
SlowFast | R50 | Depth | link | configs/action_recognition/SLOWFAST_8x8_R50_MECCANO.yaml |
We provide pretrained models on the MECCANO Dataset for the active object recognition task:
architecture | depth | model | config |
---|---|---|---|
Faster RCNN | R101_FPN | link | configs/active_object_recognition/meccano_active_objects.yaml |
To detect the active objects involved in the interactions, use the model provided for task 2).
We provide pretrained models on the MECCANO Multimodal Dataset for verb prediction in the EHOI detection task:
architecture | depth | modality | model | config |
---|---|---|---|---|
SlowFast | R50 | RGB | link | configs/ehoi_detection/SLOWFAST_8x8_R50_MECCANO_ehoi.yaml |
SlowFast | R50 | Depth | link | configs/ehoi_detection/SLOWFAST_8x8_R50_MECCANO_ehoi.yaml |
We provide the best model trained on the MECCANO Multimodal Dataset, which uses three branches: Objects, Gaze and Hands.
architecture | modality | model |
---|---|---|
RULSTM | Obj, Gaze, Hands | link |
We provide pretrained models on the MECCANO Dataset for the next-active object prediction task:
architecture | depth | train_data | model | config |
---|---|---|---|---|
Faster RCNN | R101_FPN | active+next-active | link | configs/next-active_object/meccano_next_active_objects.yaml |
If you find the MECCANO Dataset useful in your research, please use the following BibTeX entry for citation.
```
@misc{ragusa2022meccano,
  title={MECCANO: A Multimodal Egocentric Dataset for Humans Behavior Understanding in the Industrial-like Domain},
  author={Francesco Ragusa and Antonino Furnari and Giovanni Maria Farinella},
  year={2022},
  eprint={2209.08691},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
Additionally, cite the original paper:
```
@inproceedings{ragusa2021meccano,
  title = {The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain},
  author = {Francesco Ragusa and Antonino Furnari and Salvatore Livatino and Giovanni Maria Farinella},
  year = {2021},
  eprint = {2010.05654},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)}
}
```