Here, we provide a number of tracker models trained using PyTracking, along with their results on standard tracking datasets.
Model | VOT18 EAO | OTB100 AUC (%) | NFS AUC (%) | UAV123 AUC (%) | LaSOT AUC (%) | LaSOTExtSub AUC (%) | TrackingNet AUC (%) | GOT-10k AO (%) | AVisT AUC (%) | Links |
---|---|---|---|---|---|---|---|---|---|---|
ATOM | 0.401 | 66.3 | 58.4 | 64.2 | 51.5 | - | 70.3 | 55.6 | 38.6 | model |
DiMP-18 | 0.402 | 66.0 | 61.0 | 64.3 | 53.5 | - | 72.3 | 57.9 | 40.6 | model |
DiMP-50 | 0.440 | 68.4 | 61.9 | 65.3 | 56.9 | - | 74.0 | 61.1 | 41.9 | model |
PrDiMP-18 | 0.385 | 68.0 | 63.3 | 65.3 | 56.4 | - | 75.0 | 61.2 | 41.7 | model |
PrDiMP-50 | 0.442 | 69.6 | 63.5 | 68.0 | 59.8 | - | 75.8 | 63.4 | 43.3 | model |
SuperDiMP | - | 70.1 | 64.8 | 67.7 | 63.1 | - | 78.1 | - | 48.4 | model |
SuperDiMPSimple | - | 70.5 | 64.4 | 68.2 | 63.5 | 43.7 | - | - | - | model |
KYS | 0.462 | 69.5 | 63.4 | - | 55.4 | - | 74.0 | 63.6 | 42.5 | model |
KeepTrack | - | 70.9 | 66.4 | 69.7 | 67.1 | 48.2 | - | - | 49.5 | model |
ToMP-50 | - | 70.1 | 66.9 | 69.0 | 67.6 | 45.4 | 81.2 | - | 51.6 | model |
ToMP-101 | - | 70.1 | 66.7 | 66.9 | 68.5 | 45.9 | 81.5 | - | 50.9 | model |
RTS | - | - | 65.4 | 67.9 | 69.7 | - | 81.6 | - | 50.8 | model |
TaMOs-50 | - | - | - | - | 67.9 | - | 82.7 | - | 51.5 | model |
TaMOs-SwinBase | - | - | - | - | 70.2 | - | 84.4 | - | 55.1 | model |
The raw results can be downloaded automatically using the download_results script, or downloaded and extracted manually from this link. The folder benchmark_results contains the raw results for all datasets except VOT; these can be analyzed using the analysis module in pytracking (see pytracking/notebooks/analyze_results.ipynb for examples of how to use it). The folder packed_results contains the VOT results, as well as packed results for TrackingNet and GOT-10k that can be submitted directly to the official evaluation servers.
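For example, the downloaded raw results can be evaluated and plotted with the analysis module. The snippet below is a minimal sketch based on pytracking/notebooks/analyze_results.ipynb; the tracker and parameter names ('atom'/'default', 'dimp'/'dimp50') correspond to the configuration files in this repository, and it assumes the result and dataset paths have been set in pytracking/evaluation/local.py.

```python
# Minimal sketch, following pytracking/notebooks/analyze_results.ipynb.
from pytracking.analysis.plot_results import plot_results, print_results
from pytracking.evaluation import get_dataset, trackerlist

# trackerlist(tracker_name, parameter_name, run_ids, display_name):
# one entry per run, so the multiple OTB-100 runs can be merged below.
trackers = []
trackers.extend(trackerlist('atom', 'default', range(0, 5), 'ATOM'))
trackers.extend(trackerlist('dimp', 'dimp50', range(0, 5), 'DiMP-50'))

dataset = get_dataset('otb')  # OTB-100

# Print the score table and draw success/precision plots, averaging the runs.
print_results(trackers, dataset, 'OTB', merge_results=True,
              plot_types=('success', 'prec'))
plot_results(trackers, dataset, 'OTB', merge_results=True,
             plot_types=('success', 'prec'))
```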
The raw results are in the format [top_left_x, top_left_y, width, height]. Due to the stochastic nature of the trackers, the results reported here are averaged over multiple runs: 5 runs for OTB-100, NFS, UAV123, LaSOT and LaSOTExtSub, 15 runs for VOT2018 (as per the VOT protocol), and 3 runs for GOT-10k (as per the GOT-10k protocol). Since TrackingNet results are obtained from the online evaluation server, only a single run was used there.
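As an illustration, a single raw result file can be loaded and converted to corner coordinates as follows. This is a hypothetical sketch: it assumes one whitespace/tab-separated [top_left_x, top_left_y, width, height] box per line (one line per frame), and the file name is a placeholder.

```python
import numpy as np

# Load one raw result file (placeholder name); shape (num_frames, 4),
# each row in [top_left_x, top_left_y, width, height] format.
boxes = np.loadtxt('Basketball.txt')

# Convert to corner format [x1, y1, x2, y2], e.g. for IoU computation.
corners = boxes.copy()
corners[:, 2:] = boxes[:, :2] + boxes[:, 2:]
```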
The raw results on the AVisT benchmark for the trackers in this repository (see above) and some external trackers are available here.
We also provide the following video object segmentation (VOS) models, evaluated on the YouTube-VOS and DAVIS benchmarks.

Model | YouTube-VOS 2018 (Overall Score) | YouTube-VOS 2019 (Overall Score) | DAVIS 2017 val (J&F score) | Links |
---|---|---|---|---|
LWL_ytvos | 81.5 | 81.0 | - | model |
LWL_boxinit | 70.4 | - | 70.8 | model |
RTS | - | 79.7 | 80.2 | model |
RTS (Box) | - | 70.8 | 72.6 | model |
The raw segmentation results can be downloaded from here.