A Scale adaptive CNN based hierarchical object tracker.


HIOB

The HIerarchical OBject tracker HIOB is a framework written in Python and TensorFlow. It combines offline-trained CNNs for visual feature extraction with online-trained CNNs that build a model of the tracked object.

HIOB was initially created by Peer Springstübe (as his diploma thesis) at the Department of Informatics of the Universität Hamburg, in the Knowledge Technology research group (WTM). This version of HIOB can be found here. Tobias Knüppler has further improved HIOB's performance and adapted HIOB to run inside of ROS. The ROS integration of HIOB lives in a separate repository.

The CNN based tracking algorithm in HIOB is inspired by the FCNT by Wang et al., presented in their ICCV 2015 paper. The program code of HIOB is completely independent of the FCNT and has been written by us.

In the scope of my bachelor thesis, I made HIOB scale adaptive, which had been identified as one of HIOB's weak points. Thus, I have built upon and greatly benefited from the work previously done on this framework! I won't go into depth explaining the problem here; if you are interested, I will gladly email you a copy of my thesis (or you can borrow a hardcover version at the informatics library of the Universität Hamburg).

Installation

Using HIOB

Once installed (see the steps below), HIOB is started with an environment and a tracker configuration:

(hiob_env) $ python hiob_gui.py -e config/environment.yaml -t config/tracker.yaml

clone the repository

$ git clone https://github.com/frietz58/se_hiob

virtual environment

HIOB needs Python 3 and TensorFlow. We recommend creating a virtual environment for HIOB. Create the virtual environment somewhere outside of the HIOB directory and activate it:

$ virtualenv -ppython3 hiob_env
$ source hiob_env/bin/activate

Installing CUDA

In order to run the GPU version, CUDA needs to be installed on the machine. To install CUDA and cuDNN, perform the following steps:

  1. Install CUDA with your method of choice from here (or older versions).
    Theoretically, TensorFlow >= 1.11 should recognize CUDA 10.0, but in my case it didn't, hence I installed CUDA 9.0, which, even though not officially supported, runs on Ubuntu 18.04.
    I had the best experience installing CUDA via the deb file, but every method should work. Make sure to apt-get --purge remove any previous installations. Since this can be a bit tricky, especially if you want to install a custom graphics driver, I highly encourage anyone to read the official Linux installation guide.

  2. Install cuDNN from here; you have to register an NVIDIA developer account in the process. Follow the installation instructions from here for a smooth installation; for me, the installation via tar file worked great.

  3. Add cuDNN to the virtualenv path. Maybe this was just buggy for me, but after a successful installation of CUDA 9.0 and cuDNN, TensorFlow would not find my cuDNN installation. Therefore, go to your virtualenv installation and add the following line to the activate file at /path_to_venv/bin/activate, right under the export PATH statement:

export PYTHONPATH="/usr/local/cuda-9.0/lib64:/usr/local/cuda/lib64"

dependencies

Install required packages:

# for using your GPU and CUDA
(hiob_env) $ cd HIOB
(hiob_env) $ pip install -r requirements.txt

This installs a TensorFlow build that requires an NVIDIA GPU and the CUDA machine learning library. Alternatively, you can use a TensorFlow build that only uses the CPU. It should work, but it will not be fast. We supply a different requirements file for that:

# alternatively for using your CPU only:
(hiob_env) $ cd HIOB
(hiob_env) $ pip install -r requirements_cpu.txt

Run the demo

HIOB comes with a simple demo script that downloads a tracking sequence (~4.3MB) and starts the tracker on it. Inside your virtual environment and inside the HIOB directory, just run:

(hiob_env) $ ./run_demo.sh

If all goes well, the sample will be downloaded to HIOB/demo/data/tb100/Deer.zip and a window will open that shows the tracking process. A yellow rectangle will show the position predicted by HIOB and a dark green frame will show the ground truth included in the sample sequence. A log of the tracking process will be created inside HIOB/demo/hiob_logs containing log output and an analysis of the process.
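The analysis in the log compares HIOB's predicted rectangles (yellow) against the ground truth (dark green). A standard way to score such a comparison is the intersection-over-union (IoU) of the two boxes; the sketch below is illustrative and is not HIOB's actual evaluation code:

```python
# Illustrative sketch: intersection-over-union (IoU) between a predicted
# bounding box and a ground-truth box, both given as (x, y, width, height).
# This is a common tracking metric, not code taken from HIOB itself.

def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # corners of the intersection rectangle
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))  # disjoint boxes -> 0.0
```

An IoU of 1.0 means the prediction matches the ground truth exactly; 0.0 means the boxes do not overlap at all.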

Getting more test samples

The tb100 online tracking benchmark

The deer example used in the demo is taken from the tb100 online benchmark by Yi Wu, Jongwoo Lim and Ming-Hsuan Yang. The benchmark consists of 98 picture sequences with a total of 100 tracking sequences. It can be found under http://visual-tracking.net/. HIOB can work directly on the zip files provided there. The benchmark has been released in a paper: http://faculty.ucmerced.edu/mhyang/papers/cvpr13_benchmark.pdf
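Each tb100 sequence ships a groundtruth_rect.txt file holding one bounding box per frame as x,y,width,height values, separated by commas or tabs depending on the sequence. A minimal parser for that format might look like this; the file name and layout follow the tb100 convention, not HIOB's internal loader:

```python
# Minimal sketch of a tb100 ground-truth parser. The benchmark stores one
# bounding box per line as x,y,width,height; some sequences use commas as
# separators, others use tabs, so both are handled here.

def parse_ground_truth(text):
    boxes = []
    for line in text.strip().splitlines():
        # normalize tab-separated lines to the comma-separated form
        parts = line.replace("\t", ",").split(",")
        boxes.append(tuple(int(float(p)) for p in parts))
    return boxes

sample = "306,5,95,65\n306,6,95,65\n"
print(parse_ground_truth(sample))  # [(306, 5, 95, 65), (306, 6, 95, 65)]
```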

Since the 98 sequences must be downloaded individually from a very slow server, the process is quite time consuming. HIOB comes with a script that can handle the download for you; it is located at bin/hiob_downloader within this repository. If you call it with the argument tb100, it will download the whole dataset from the server. This will most likely take several hours.

The Princeton RGBD tracking benchmark

HIOB also works with the Princeton Tracking Benchmark and is able to read the files provided there. That benchmark provides depth information along with the RGB information, but the depth is not used by HIOB. Be advised that of the 100 sequences provided, only 95 contain a ground truth. The original implementation of HIOB was evaluated by the benchmark in April 2017; the results can be seen on the evaluation page under the name hiob_lc2.
