AutoLabeling: open-source video labeler

This is a more advanced version of OpenLabeling. Improvements include a built-in object detector to quickly label known objects, an advanced and more stable tracker for following labeled objects over long sequences, and built-in automatic track-id generation to keep track of individual objects.

Table of contents

  • Quick start
  • Prerequisites
  • Run labeling
  • Labeling Output
  • GUI usage
  • Integrate your own Detector and Tracker
  • Original Repo

Quick start

git clone --recurse-submodules git@github.com:kochsebastian/AutoLabeling.git
conda install --yes --file requirements.txt

Prerequisites

Detector:

The detector models are already provided in the EfficientDet submodule. Add the submodule to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:object_detection/efficientdet/

Tracker:

Download the siamrpn_r50_l234_dwxcorr model from Google Drive and put it into:

object_detection/pysot/experiments/
export PYTHONPATH=$PYTHONPATH:object_detection/pysot/

Run labeling

Run manual mode

  1. Navigate to main/

  2. (Swap Detector and Tracker)

  3. (Edit class_list.txt)

  4. Run the code:

    python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [-n N_FRAMES] [--detector DETECTOR_TYPE]
    
    optional arguments:
     -h, --help                 Show this help message and exit
     -i, --input                Path to images and videos input folder | Default: input/
     -o, --output               Path to output folder | Default: output/
     -t, --thickness            Bounding box and cross line thickness (int) | Default: -t 2
     --tracker TRACKER_TYPE     Tracker being used: ['SiamMask']
     --detector DETECTOR_TYPE   Detector being used: ['EfficientDet']
     -n N_FRAMES                Number of frames to track
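
For example, an illustrative manual-mode call (the folder names and the frame count are placeholders) could look like:

    python main.py -i input/ -o output/ --tracker SiamMask --detector EfficientDet -n 50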
    

Run auto mode

  1. Navigate to main/

  2. (Swap Detector and Tracker)

  3. (Edit class_list.txt)

  4. Run the code:

    python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [--detector DETECTOR_TYPE]
    
    optional arguments:
     -h, --help                 Show this help message and exit
     -i, --input                Path to images and videos input folder | Default: input/
     -o, --output               Path to output folder | Default: output/
     -t, --thickness            Bounding box and cross line thickness (int) | Default: -t 2
     --tracker TRACKER_TYPE     Tracker being used: ['SiamMask']
     --detector DETECTOR_TYPE   Detector being used: ['EfficientDet']
    

Labeling Output

The tool will generate an output text file with the following structure:

frame_number, box_id, x, y, w, h, class_name
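
As an illustration, here is a minimal sketch for reading such a file in Python. It assumes one comma-separated record per line with the fields in the order above; the helper name read_labels is not part of the tool.

    import csv

    FIELDS = ["frame_number", "box_id", "x", "y", "w", "h", "class_name"]

    def read_labels(path):
        """Parse an AutoLabeling output file into a list of dicts (sketch)."""
        rows = []
        with open(path, newline="") as f:
            for record in csv.reader(f, skipinitialspace=True):
                if not record:
                    continue  # skip empty lines
                row = dict(zip(FIELDS, record))
                # Assumption: the numeric fields are written as plain numbers
                for key in ("frame_number", "box_id", "x", "y", "w", "h"):
                    row[key] = int(float(row[key]))
                rows.append(row)
        return rows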

GUI usage

Keyboard, press:

Key       Description
a/d       previous/next image
s/w       previous/next class
e         edges
o         make prediction with detector
h         help
<space>   save
q         save and quit

Video:

Key   Description
p     predict labels of the next frame
x     stop tracking

Mouse:

  • To create a bounding box, left-click the top-left corner and then left-click the bottom-right corner
  • Right-click an object to quickly delete it
  • Use the mouse wheel to zoom in and out
  • Double-click to select a bounding box
  • When a bounding box is selected:
    • Left-click to toggle tracking
    • Drag with the right mouse button to correct the bounding box
    • Use w/s to cycle through the classes
    • Click the middle mouse button to change the box id
  • Press X to remove the selected bounding box and all bounding boxes with the same id in the following frames

Integrate your own Detector and Tracker

The current detector is an EfficientDet trained on COCO for demonstration purposes. When labeling your own dataset, you can either:

  • Train the EfficientDet on your dataset
  • Use your own model

If you want to use your own model, implement a Communicator class for your detector that provides the detect method (see efficientdet.py for guidance). For the tool to find it, the file name must be the class name in lowercase, and the class name must be the argument you pass to the script. The same procedure applies to a custom tracker. A rough sketch of such a detector wrapper is shown below.
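
The following is a hypothetical sketch only. The names MyDetector / mydetector.py, the constructor argument, and the return format of detect are assumptions; check efficientdet.py in this repository for the interface the tool actually expects.

    # mydetector.py - hypothetical sketch only; the real interface is defined
    # by the existing detector Communicator (see efficientdet.py).

    class MyDetector:
        """Illustrative custom detector wrapper.

        The file name (mydetector.py) is the class name in lowercase, and
        'MyDetector' is the value you would pass via --detector.
        """

        def __init__(self, weights_path="path/to/weights"):
            # Load your own model here; framework and loading code are up to you.
            self.model = self._load_model(weights_path)

        def _load_model(self, weights_path):
            raise NotImplementedError("plug in your own model loading")

        def detect(self, image):
            """Run inference on a single image (e.g. a numpy array).

            Assumed return format: a list of detections such as
            (x, y, w, h, class_name, score) tuples; confirm the expected
            format against efficientdet.py before integrating.
            """
            return self.model.predict(image)  # placeholder call

Running the tool would then use --detector MyDetector, matching the class name as described above.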

Original Repo

João Cartucho

@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots}, 
  year={2018},
  pages={2336-2341},
}
