Robust Real-time Object Detection for the Nao Robots
This repo contains the code for our submission to the RoboCup 2019 Symposium. It allows you to train your own models, evaluate them, or use our pre-trained models. Some of the code is based on this excellent repo: https://github.com/eriklindernoren/PyTorch-YOLOv3
The code and dataset for ROBO-UNet and its variants are found here.
Requirements:
- PyTorch 1.0
- Progressbar
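If these are missing, an install along these lines should work (the exact package names are assumptions; see the PyTorch site for platform-specific instructions):

```
pip install torch progressbar
```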
The datasets contain images in YUV format at 512x384 resolution. For every image, the annotations are found in a .txt file of the same name (YOLO format). You can download the dataset from the following links:
The pre-trained models are available from here.
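For reference, YOLO-format annotation files store one object per line as `class x_center y_center width height`, with coordinates normalized to [0, 1]. A minimal sketch of reading such a file back into pixel coordinates, assuming the 512x384 resolution above (the function name and defaults are illustrative):

```python
from pathlib import Path

def load_yolo_labels(txt_path, img_w=512, img_h=384):
    """Return (class_id, x_center, y_center, width, height) boxes in pixels."""
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue  # skip empty lines
        cls, cx, cy, w, h = line.split()
        boxes.append((int(cls),
                      float(cx) * img_w, float(cy) * img_h,
                      float(w) * img_w, float(h) * img_h))
    return boxes
```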
- To train your own models, first extract the dataset to a folder of your choice.
- Regenerate the train and test image lists (a sketch of this step follows this list) by running `python yoloFolder.py --root <path-to-the-YOLO-folder>`
- Run the training on the synthetic database using `python train.py`
- Fine-tune on the real database with `python train.py --finetune`
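As a rough sketch of the list-regeneration step, a script like yoloFolder.py might walk the dataset folder and write one image path per line; the actual script may differ, and the train/test subfolder layout and .png extension here are assumptions:

```python
import argparse
from pathlib import Path

def write_list(folder, out_file):
    # One absolute image path per line, as expected by YOLO-style loaders.
    images = sorted(str(p.resolve()) for p in folder.glob("*.png"))
    out_file.write_text("\n".join(images) + "\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--root", required=True, help="path to the YOLO folder")
    args = parser.parse_args()
    root = Path(args.root)
    write_list(root / "train", root / "train.txt")
    write_list(root / "test", root / "test.txt")
```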
You can pass several additional options:
- `--bn` trains the ROBO-BN model
- `--yu` uses only 2 input channels, Y and (U+V)/2 (see the sketch after this list)
- `--hr` trains the ROBO-HR model
- `--transfer <N>` trains only the first N layers on the real database and fine-tunes the rest
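To make the `--yu` option concrete, the two-channel input keeps the Y plane and replaces U and V with their average. A minimal sketch of that transform on a batched tensor (the tensor layout and function name are illustrative, not taken from the training code):

```python
import torch

def to_yu(yuv):
    """Reduce a (N, 3, H, W) YUV batch to 2 channels: Y and (U+V)/2."""
    y = yuv[:, 0:1]                       # luminance plane
    uv = (yuv[:, 1:2] + yuv[:, 2:3]) / 2  # averaged chrominance
    return torch.cat([y, uv], dim=1)      # shape (N, 2, H, W)
```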
Run `python test.py` to evaluate the model and `python detect.py` to perform detection on the datasets. These scripts accept the same input arguments as the train script.
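For example, a model trained as ROBO-BN with two input channels would be evaluated with the matching flags (an illustrative invocation, assuming those flags were used during training):

```
python test.py --bn --yu
```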
You can run your models on the Nao robot using RoboDNN. Weight files can be exported with the paramSave.py script; the corresponding config files are in the config subfolder.
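As a rough sketch, such an export step might serialize every parameter tensor as raw float32 data in layer order so a C++ runtime like RoboDNN can read it back; the exact file layout RoboDNN expects may differ, and the function name here is illustrative:

```python
import numpy as np

def export_weights(model, path):
    """Dump all parameters as contiguous float32 values, in layer order."""
    with open(path, "wb") as f:
        for _, param in model.named_parameters():
            f.write(param.detach().cpu().numpy().astype(np.float32).tobytes())
```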