A new MFTIQ tracker has been published; check it out!
Official implementation of the MFT tracker from the paper:
Michal Neoral, Jonáš Šerých, Jiří Matas: "MFT: Long-Term Tracking of Every Pixel", WACV 2024
Please cite our paper if you use any of this work:
@inproceedings{neoral2024mft,
title={{MFT}: Long-term tracking of every pixel},
author={Neoral, Michal and Serych, Jonas and Matas, Jiri},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={6837--6847},
year={2024},
}
Create and activate a new virtualenv:
# we have tested with python 3.7.4
python -m venv venv
source venv/bin/activate
Then install all the dependencies:
pip install torch numpy einops tqdm opencv-python scipy Pillow==9 matplotlib ipdb
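Before running the demo, you can optionally sanity-check the environment with a short Python snippet. This is not part of the repository; it only confirms that the key packages import and reports whether a CUDA device is visible:

# quick environment check (not part of the MFT codebase)
import torch
import numpy as np
import cv2
import einops
import scipy
import PIL
import matplotlib

# print versions of the core dependencies and whether a GPU is available
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("numpy:", np.__version__, "| opencv:", cv2.__version__, "| Pillow:", PIL.__version__)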
Simply running:
python demo.py
should produce a demo_out directory with two visualizations.
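If you want to experiment with your own footage, one option is to first split a video into individual frames with OpenCV (already installed above). The snippet below is a generic sketch: my_video.mp4 and my_frames are placeholder paths, not files from this repository, and whether demo.py expects a video file or a directory of frames should be checked in the script itself.

# extract frames from a video into a directory of PNG images (generic OpenCV sketch)
import os
import cv2

video_path = "my_video.mp4"   # placeholder input path
out_dir = "my_frames"         # placeholder output directory
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
frame_idx = 0
while True:
    ok, frame = cap.read()  # ok becomes False once the video ends
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f"{frame_idx:05d}.png"), frame)
    frame_idx += 1
cap.release()
print(f"wrote {frame_idx} frames to {out_dir}")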
For training instructions, see train.org.
The demo video in demo_in was extracted from YouTube.
This work is licensed under the Attribution-NonCommercial-ShareAlike 4.0 International license.
The MFT/RAFT directory contains a modified version of RAFT, which is licensed under the BSD-3-Clause license.
Our modifications (OcclusionAndUncertaintyBlock and its integration in raft.py) are also licensed under the Attribution-NonCommercial-ShareAlike 4.0 International license.
This work was supported by Toyota Motor Europe, by the Grant Agency of the Czech Technical University in Prague, grant No. SGS23/173/OHK3/3T/13, and by the Research Center for Informatics project CZ.02.1.01/0.0/0.0/16_019/0000765 funded by OP VVV.