
TAO-Amodal

Official Repository of Tracking Any Object Amodally.

📙 Project Page | 📎 Paper Link | ✏️ Citations


📌 Leave a ⭐ to keep track of our updates.


Table of Contents

  - 🎒 Get Started
  - 📚 Prepare Dataset
  - 🧑‍🎨 Visualization
  - 🏃 Training and Inference
  - 📊 Evaluation
  - ✏️ Citations

🎒 Get Started

Clone the repository

git clone https://github.com/WesleyHsieh0806/TAO-Amodal.git 

Setup environment

conda create --name TAO-Amodal python=3.9 -y
conda activate TAO-Amodal
bash environment_setup.sh

📚 Prepare Dataset

  1. Download our dataset following the instructions here.
  2. The directory should have the following structure:
    TAO-Amodal
     ├── frames
     │    └── train
     │       ├── ArgoVerse
     │       ├── BDD
     │       ├── Charades
     │       ├── HACS
     │       ├── LaSOT
     │       └── YFCC100M
     ├── amodal_annotations
     │    ├── train/validation/test.json
     │    ├── train_lvis_v1.json
     │    └── validation_lvis_v1.json
     ├── example_output
     │    └── prediction.json
     ├── BURST_annotations
     │    ├── train
     │    │    └── train_visibility.json
     │    ...

Explore more examples from our dataset here.
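
To sanity-check the download, here is a minimal Python sketch that opens one of the annotation files and reports its top-level contents. It assumes only that the file is a JSON object (the exact schema may differ), and the dataset root path is hypothetical:

import json
from pathlib import Path

# Hypothetical dataset root; point this at your TAO-Amodal directory.
root = Path("TAO-Amodal")

with open(root / "amodal_annotations" / "validation_lvis_v1.json") as f:
    data = json.load(f)

# Print each top-level field and, for containers, how many entries it holds.
for key, value in data.items():
    size = len(value) if isinstance(value, (list, dict)) else value
    print(f"{key}: {size}")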

🧑‍🎨 Visualization

Visualize our dataset and tracker predictions to get a better understanding of amodal tracking. Instructions can be found here.


🏃 Training and Inference

We provide training and inference code for the proposed Amodal Expander.

The inference code generates an lvis_instances_results.json file, which can be used to obtain the evaluation results as introduced in the next section.
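
As a quick check on the inference output, here is a minimal sketch that loads the file and summarizes it. It assumes the file is a JSON list of per-detection dictionaries carrying a video_id, as in the evaluation format below; the path is hypothetical.

import json
from collections import Counter

# Hypothetical path to the file produced by your inference run.
with open("lvis_instances_results.json") as f:
    predictions = json.load(f)

print(f"{len(predictions)} predictions in total")

# Count predictions per video (each entry is assumed to carry a video_id,
# as in the evaluation format described in the next section).
per_video = Counter(p["video_id"] for p in predictions)
print(f"{len(per_video)} videos covered")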

📊 Evaluation

  1. Output tracker predictions as JSON. The predictions should be structured as:
[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]

We also provide an example prediction JSON here; refer to this file to check the correct format.
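
To catch formatting mistakes before running the evaluation, the following sketch checks that every prediction carries the required fields and an [x, y, width, height] box. The field names come from the format above; the file path is hypothetical.

import json

REQUIRED_KEYS = {"image_id", "category_id", "bbox", "score", "track_id", "video_id"}

# Hypothetical path to your tracker output.
with open("prediction.json") as f:
    predictions = json.load(f)

for i, pred in enumerate(predictions):
    missing = REQUIRED_KEYS - pred.keys()
    assert not missing, f"prediction {i} is missing keys: {missing}"
    x, y, w, h = pred["bbox"]  # boxes are [x, y, width, height]
    assert w >= 0 and h >= 0, f"prediction {i} has a negative-size box"

print(f"All {len(predictions)} predictions look well-formed.")

Note that the sketch deliberately does not clip or bound-check coordinates, since amodal boxes may extend beyond image boundaries.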

  2. Evaluate on TAO-Amodal:
cd tools
python eval_on_tao_amodal.py --track_result /path/to/prediction.json \
                             --output_log   /path/to/output.log \
                             --annotation   /path/to/validation_lvis_v1.json

The annotation JSON is provided with our dataset. Evaluation results will be printed to your console and saved to the file specified by --output_log.
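
If you would rather drive the evaluation from Python (for example inside a parameter sweep), here is a minimal sketch that shells out to the script with the same flags as above; all paths are hypothetical and resolved to absolute paths so that running from tools/ does not change where files are read or written.

import subprocess
from pathlib import Path

# Hypothetical paths; adjust to your checkout and dataset location.
track_result = Path("outputs/prediction.json").resolve()
annotation = Path("TAO-Amodal/amodal_annotations/validation_lvis_v1.json").resolve()
output_log = Path("outputs/eval.log").resolve()
output_log.parent.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "python", "eval_on_tao_amodal.py",
        "--track_result", str(track_result),
        "--output_log", str(output_log),
        "--annotation", str(annotation),
    ],
    cwd="tools",  # the script lives in tools/, per the command above
    check=True,   # raise if the evaluation exits with an error
)

print(output_log.read_text()[-2000:])  # show the tail of the saved results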

✏️ Citations

@article{hsieh2023tracking,
  title={Tracking any object amodally},
  author={Hsieh, Cheng-Yen and Khurana, Tarasha and Dave, Achal and Ramanan, Deva},
  journal={arXiv preprint arXiv:2312.12433},
  year={2023}
}