py-MDNet

Original repository: link

by Hyeonseob Nam and Bohyung Han at POSTECH

Update (September, 2020)

  • Migration to Python 3.6 & PyTorch 1.1
  • Efficiency improvement (~5 fps)
  • ImageNet-VID pretraining
  • Code refactoring
  • Fixed some bugs in the original code so it runs on common graphics cards; for example, tracking results obtained on a GTX 1060 are attached in the result folder
  • Added explanatory comments to the code

Introduction

PyTorch implementation of MDNet, which runs at ~5fps with a single CPU core and a single GPU (GTX 1060).

If you're using this code for your research, please cite:

@InProceedings{nam2016mdnet,
author = {Nam, Hyeonseob and Han, Bohyung},
title = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}

Results on OTB

Some of the tracking results are shown in the figures below:

Prerequisites

  • Python 3.6+
  • OpenCV 3.0+
  • PyTorch 1.0+ and its dependencies
  • for GPU support: a GPU with ~3 GB of memory
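Before running the tracker, it can help to confirm the prerequisites above are in place. The following is a minimal sketch (not part of the repository) that checks the Python version and whether the OpenCV and PyTorch modules are importable, using only the standard library:

```python
# Minimal prerequisite check (a sketch, not part of py-MDNet).
import sys
from importlib import util

# The README asks for Python 3.6+.
assert sys.version_info >= (3, 6), "Python 3.6+ is required"

# "cv2" and "torch" are the usual import names for OpenCV and PyTorch.
for module in ("cv2", "torch"):
    status = "found" if util.find_spec(module) else "missing"
    print(f"{module}: {status}")
```

If `torch` is found, GPU availability can additionally be checked with `torch.cuda.is_available()`.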

Usage

Tracking

 python tracking/run_tracker.py -s DragonBaby [-d (display figures)] [-f (save figures)]
 or
 python tracking/run_tracker.py -s DragonBaby -d -f
  • You can provide a sequence configuration in two ways (see tracking/gen_config.py):
    • python tracking/run_tracker.py -s [seq name]
    • python tracking/run_tracker.py -j [json path]
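For the `-j` option, a sequence-configuration JSON file has to be supplied. The exact schema is defined in tracking/gen_config.py; the key names below (`seq_name`, `img_list`, `init_bbox`) are illustrative assumptions only, as is the example bounding box:

```python
# Hypothetical sketch of writing a sequence-configuration JSON for the -j option.
# Key names and values are assumptions for illustration; consult
# tracking/gen_config.py for the actual schema expected by the tracker.
import json

config = {
    "seq_name": "DragonBaby",                      # assumed key: sequence name
    "img_list": ["img/0001.jpg", "img/0002.jpg"],  # assumed key: ordered frame paths
    "init_bbox": [160, 83, 56, 65],                # assumed key: [x, y, w, h] in frame 1
}

with open("DragonBaby.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting file would then be passed as `python tracking/run_tracker.py -j DragonBaby.json`.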

Pretraining

  • Download VGG-M (MatConvNet model) and save it as "models/imagenet-vgg-m.mat"
  • Pretraining on VOT-OTB
    • Download VOT datasets into "datasets/VOT/vot201x"
     python pretrain/prepro_vot.py
     python pretrain/train_mdnet.py -d vot
  • Pretraining on ImageNet-VID
     python pretrain/prepro_imagenet.py
     python pretrain/train_mdnet.py -d imagenet