
Update

Recently, I optimized my YOLO project:

https://github.com/yjh0410/PyTorch_YOLO-Family

On COCO-val:

| Model | Backbone | Size | FPS | AP | AP50 | AP75 | APs | APm | APl | GFLOPs | Params |
|-------|----------|------|-----|----|------|------|-----|-----|-----|--------|--------|
| YOLO-Nano | ShuffleNetv2-1.0x | 512 | | 21.6 | 40.0 | 20.5 | 7.4 | 22.7 | 32.3 | 1.65 | 1.86M |
| YOLO-Tiny | CSPDarkNet-Tiny | 512 | | 26.6 | 46.1 | 26.9 | 13.5 | 30.0 | 35.0 | 5.52 | 7.66M |
| YOLOv1 | ResNet50 | 640 | | 35.2 | 54.7 | 37.1 | 14.3 | 39.5 | 53.4 | 41.96 | 44.54M |
| YOLOv2 | ResNet50 | 640 | | 36.3 | 56.6 | 37.7 | 15.1 | 41.1 | 54.0 | 42.10 | 44.89M |
| YOLOv3 | DarkNet53 | 640 | | 38.7 | 60.2 | 40.7 | 21.3 | 41.7 | 51.7 | 76.41 | 57.25M |
| YOLOv4 | CSPDarkNet53 | 640 | | 40.5 | 60.4 | 43.5 | 24.2 | 44.8 | 52.0 | 60.55 | 52.00M |

YOLO-Nano

A new version of YOLO-Nano, inspired by NanoDet.

In this project, you can enjoy:

  • a different version of YOLO-Nano

Network

This is a different version of YOLO-Nano, built with PyTorch:

  • Backbone: ShuffleNet-v2
  • Neck: a very lightweight FPN+PAN
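As a rough illustration of the neck design above, here is a minimal PyTorch sketch of a lightweight FPN+PAN over three backbone feature levels. The channel widths, the depthwise-separable convolutions, and the class name are assumptions for illustration only, not this repo's exact code:

```python
# A minimal FPN+PAN neck sketch, assuming three feature maps (C3, C4, C5)
# at strides 8/16/32 from the ShuffleNet-v2 backbone. Channel counts are
# illustrative (ShuffleNetv2-1.0x stage widths), not the repo's config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightFPNPAN(nn.Module):
    def __init__(self, in_channels=(116, 232, 464), out_channels=96):
        super().__init__()
        # 1x1 convs project each backbone level to a common width
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # depthwise-separable 3x3 convs keep the neck lightweight
        def dw_conv():
            return nn.Sequential(
                nn.Conv2d(out_channels, out_channels, 3, padding=1,
                          groups=out_channels),
                nn.Conv2d(out_channels, out_channels, 1))
        self.fpn_smooth = nn.ModuleList(dw_conv() for _ in range(2))
        self.pan_down = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=1)
            for _ in range(2))
        self.pan_smooth = nn.ModuleList(dw_conv() for _ in range(2))

    def forward(self, feats):
        c3, c4, c5 = [l(f) for l, f in zip(self.lateral, feats)]
        # FPN: top-down pathway, upsampling coarse maps and fusing by sum
        p4 = self.fpn_smooth[0](c4 + F.interpolate(c5, scale_factor=2))
        p3 = self.fpn_smooth[1](c3 + F.interpolate(p4, scale_factor=2))
        # PAN: bottom-up pathway, downsampling with strided convs
        n3 = p3
        n4 = self.pan_smooth[0](p4 + self.pan_down[0](n3))
        n5 = self.pan_smooth[1](c5 + self.pan_down[1](n4))
        return n3, n4, n5
```

All three outputs share one channel width, so a single shared detection head can run on every level.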

Train

  • Batch size: 32
  • Base LR: 1e-3
  • Max epoch: 120
  • LR steps: 60, 90
  • Optimizer: SGD
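The schedule above can be sketched as a plain step-decay function. The decay factor 0.1 is an assumption (the common default for step schedules); the repo may use a different gamma:

```python
# Step LR schedule matching the settings above: base LR 1e-3,
# decayed at epochs 60 and 90. gamma=0.1 is assumed, not confirmed.
def step_lr(epoch, base_lr=1e-3, milestones=(60, 90), gamma=0.1):
    """Return the learning rate for a given (0-indexed) epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

With PyTorch this is what `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 90], gamma=0.1)` computes per epoch.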

(Figure: the overview of my YOLO-Nano.)

Experiment

Environment:

  • Python 3.6, opencv-python, PyTorch 1.1.0, CUDA 10.0, cuDNN 7.5
  • For training: Intel i9-9940k, RTX-2080ti

VOC:

YOLO-Nano-1.0x:

| Dataset | Size | mAP |
|---------|------|-----|
| VOC07 test | 320 | 65.0 |
| VOC07 test | 416 | 69.1 |
| VOC07 test | 608 | 70.8 |

COCO:

| Dataset | Size | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|---------|------|----|------|------|------|------|------|
| COCO eval | 320 | 17.2 | 33.1 | 16.2 | 2.6 | 16.0 | 31.7 |
| COCO eval | 416 | 19.6 | 36.9 | 18.6 | 4.6 | 19.1 | 33.3 |
| COCO eval | 608 | 20.6 | 38.6 | 19.5 | 7.0 | 22.5 | 30.7 |

YOLO-Nano-0.5x:

Coming soon ...

Visualization

On COCO-val

(Figures: qualitative detection results on COCO-val.)

Installation

  • PyTorch-GPU 1.1.0/1.2.0/1.3.0
  • Tensorboard 1.14
  • opencv-python, Python 3.6/3.7

Dataset

VOC Dataset

The download scripts are copied from the following excellent project: https://github.com/amdegroot/ssd.pytorch

I have uploaded VOC2007 and VOC2012 to BaiDuYunDisk, so researchers in China can download them from there:

Link:https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ

Password:4la9

You will get a VOCdevkit.zip; unzip it and put it into data/. The full paths to the VOC datasets are then data/VOCdevkit/VOC2007 and data/VOCdevkit/VOC2012.

Download VOC2007 trainval & test

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>

Download VOC2012 trainval

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

MSCOCO Dataset

I copy the download files from the following excellent project: https://github.com/DeNA/PyTorch_YOLOv3

Download MSCOCO 2017 dataset

Just run sh data/scripts/COCO2017.sh. You will get COCO train2017, val2017, and test2017.

Train

VOC

python train.py -d voc --cuda -v [select a model] -ms

You can run python train.py -h to check all optional arguments.
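The -ms flag suggests multi-scale training. A common scheme, assumed here rather than taken from this repo's code, is to periodically pick a new input resolution that is a multiple of the network stride:

```python
import random

# Multi-scale training sketch (an assumption about what -ms does here):
# every few iterations, draw a new input size from 320..608 in steps of
# the network stride (32), so every chosen size is stride-aligned.
def pick_train_size(rng, min_size=320, max_size=608, stride=32):
    """Randomly choose an input resolution that is a multiple of stride."""
    choices = list(range(min_size, max_size + 1, stride))
    return rng.choice(choices)
```

Images and targets are then resized to the chosen resolution before each forward pass, which makes the detector more robust to object scale.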

COCO

python train.py -d coco --cuda -v [select a model] -ms

Test

VOC

python test.py -d voc --cuda -v [select a model] --trained_model [path to the trained model]

COCO

python test.py -d coco-val --cuda -v [select a model] --trained_model [path to the trained model]

Evaluation

VOC

python eval.py -d voc --cuda -v [select a model] --train_model [path to the trained model]

COCO

To run on COCO_val:

python eval.py -d coco-val --cuda -v [select a model] --train_model [path to the trained model]

To run on COCO_test-dev (make sure you have downloaded test2017 first):

python eval.py -d coco-test --cuda -v [select a model] --train_model [path to the trained model]

You will get a .json file that can be submitted to the COCO test server for evaluation.
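The result file follows the standard COCO detection results format: a list of records, each with an image_id, a category_id, a [x, y, width, height] bbox in pixels, and a confidence score. A minimal sketch with dummy values and a hypothetical filename:

```python
import json

# Standard COCO detection results format (values below are dummies).
# Each record: image_id, category_id, [x, y, w, h] bbox in pixels, score.
detections = [
    {"image_id": 42, "category_id": 1,
     "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.92},
]
with open("coco_test-dev_results.json", "w") as f:
    json.dump(detections, f)
```

A file in this shape is what the COCO evaluation server (and pycocotools' `loadRes`) expects for the bbox task.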