
English | 简体中文

Introduction

Welcome to AIRDet! AIRDet is an efficiency-oriented, anchor-free object detector that aims to enable robust object detection in various industrial scenes. Despite its simple design, AIRDet-s outperforms comparable detectors such as YOLOX-s, MT-YOLOv6-s, and PP-YOLOE-s while still maintaining fast inference speed. Moreover, you will find here not only powerful models, but also highly efficient training strategies and a complete toolchain from training to deployment.

Updates

  • [2022/06/23: We release AIRDet-0.0.1!]
    • Released the AIRDet series of object detection models, e.g. AIRDet-s and AIRDet-m. AIRDet-s achieves 44.2% mAP on the COCO val dataset with 2.8 ms latency on an NVIDIA V100. AIRDet-m is a larger model built upon AIRDet-s in a heavy-neck paradigm, which achieves robust improvements in detecting objects of different scales. For more information, please refer to Giraffe-neck.
    • Released model conversion tools for easy deployment; supports ONNX, TensorRT-FP32, and TensorRT-FP16.

Coming soon

  • Highly efficient backbone.
  • AIRDet-tiny and AIRDet-nano.
  • Model distillation.

Model Zoo

| Model | size | mAP<sup>val</sup><br>0.5:0.95 | Latency (ms) V100<br>TRT-FP32-BS32 | Latency (ms) V100<br>TRT-FP16-BS32 | FLOPs<br>(G) | weights |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOX-s | 640 | 40.5 | 3.4 | 2.3 | 26.81 | link |
| AIRDet-s | 640 | 44.2 | 4.4 | 2.8 | 27.56 | link |
| AIRDet-m | 640 | 48.2 | 8.3 | 4.4 | 76.61 | link |
  • We report the mAP of the models on the COCO2017 validation set.
  • The latencies in this table are measured in milliseconds, without post-processing.

Quick Start

Installation

Step 1. Install AIRDet.

```shell
git clone https://github.com/tinyvision/AIRDet.git
cd AIRDet/
conda create -n AIRDet python=3.7 -y
conda activate AIRDet
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
export PYTHONPATH=$PWD:$PYTHONPATH
```
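
As an optional sanity check (not part of the official setup), you can confirm that PyTorch was installed with working CUDA support:

```python
# Quick environment check: print installed versions and whether CUDA is visible.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```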

Step 2. Install pycocotools.

```shell
pip3 install cython
pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
Demo

Step 1. Download a pretrained model from the benchmark table, e.g. airdet-s.

Step 2. Use -f (config filename) to specify your detector's config. For example:

```shell
python tools/demo.py -f configs/airdet_s.py --ckpt /path/to/your/airdet_s.pth --path assets/dog.jpg
```
Reproduce our results on COCO

Step 1. Prepare the COCO dataset.

```shell
cd <AIRDet Home>
ln -s /path/to/your/coco ./datasets/coco
```
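
To confirm the symlink points at a usable dataset, you can load the validation annotations with pycocotools (an optional check; this assumes the standard COCO2017 layout with an annotations/ folder next to train2017/ and val2017/):

```python
# Sanity-check the dataset symlink by loading the COCO2017 val annotations.
from pycocotools.coco import COCO

coco = COCO("datasets/coco/annotations/instances_val2017.json")
print(len(coco.getImgIds()), "images in val2017")  # expect 5000
```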

Step 2. Reproduce our results on COCO by specifying -f (config filename):

```shell
python -m torch.distributed.launch --nproc_per_node=8 tools/train.py -f configs/airdet_s.py
```
Evaluation
```shell
python -m torch.distributed.launch --nproc_per_node=8 tools/eval.py -f configs/airdet_s.py --ckpt /path/to/your/airdet_s_ckpt.pth
```
Training on Custom Data

AIRDet supports the COCO and VOC formats. Before training, you need to convert your data into COCO or VOC format. The default config demonstrates usage of the COCO format; if you want to use the VOC format, here is a brief example.

Step 1. Convert your own dataset into VOC format. The directory structure should be as follows:

```
Bus/
    Annotations/
        *.xml
    JPEGImages/
        *.jpg,png,PNG
    ImageSets/
        Main/
            train.txt
            test.txt
            val.txt
```
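
If your dataset does not yet have the ImageSets/Main/*.txt split files, a minimal sketch like the following can generate them from the image filenames (not part of AIRDet; the 80/10/10 split ratio is an arbitrary choice for illustration):

```python
# Generate ImageSets/Main/{train,val,test}.txt from the files under JPEGImages/.
import os
import random

root = "Bus"  # dataset root from the layout above
stems = sorted(
    os.path.splitext(f)[0]
    for f in os.listdir(os.path.join(root, "JPEGImages"))
    if f.lower().endswith((".jpg", ".png"))
)
random.seed(0)
random.shuffle(stems)

n_train = int(0.8 * len(stems))
n_val = int(0.1 * len(stems))
splits = {
    "train": stems[:n_train],
    "val": stems[n_train:n_train + n_val],
    "test": stems[n_train + n_val:],
}

out_dir = os.path.join(root, "ImageSets", "Main")
os.makedirs(out_dir, exist_ok=True)
for name, ids in splits.items():
    with open(os.path.join(out_dir, name + ".txt"), "w") as f:
        f.write("\n".join(ids) + "\n")
```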

Step 2. In your config, set the corresponding dataset names and the train/eval dataset paths; the dataset names should follow the pattern [xxx_custom_train/val].

```python
self.dataset.train_ann = ("bus_custom_train",)
self.dataset.val_ann = ("bus_custom_val",)  # must be a tuple: note the trailing comma
self.dataset.data_dir = 'datasets'
self.dataset.data_list = {
    "bus_custom_train": {
        "data_dir": "Bus/",
        "split": "train"
    },
    "bus_custom_val": {
        "data_dir": "Bus/",
        "split": "val"
    },
}

self.dataset.class2id = {
    'class_name1': 1,
    'class_name2': 2,
    'class_name3': 3,
}
self.model.head.num_classes = len(self.dataset.class2id)
```

Step 3. Put your dataset under $AIRDet/datasets.

```shell
ln -s /path/to/your/Bus/ ./datasets/Bus/
```

Step 4. Create your own config file to control everything, including model settings, training settings, and test settings, e.g. bus_s.py, then launch training:

```shell
python -m torch.distributed.launch --nproc_per_node=8 tools/train.py -f configs/bus_s.py
```

Deploy

Installation

Step 1. Install ONNX.

```shell
pip install onnx==1.8.1
pip install onnxruntime==1.8.0
pip install onnx-simplifier==0.3.5
```

Step 2. Install CUDA, cuDNN, TensorRT, and pycuda.

2.1 CUDA

```shell
wget https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux.run
sudo sh cuda_10.2.89_440.33.01_linux.run
export PATH=$PATH:/usr/local/cuda-10.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/lib64
source ~/.bashrc
```

2.2 cuDNN (run the following from the directory where the cuDNN archive was extracted)

```shell
sudo cp cuda/include/* /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
```

2.3 TensorRT

```shell
cd TensorRT-7.2.1.6/python
pip install tensorrt-7.2.1.6-cp37-none-linux_x86_64.whl
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:TensorRT-7.2.1.6/lib
```

2.4 pycuda

```shell
pip install pycuda==2022.1
```
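
To confirm that the CUDA toolkit, the driver, and pycuda agree with each other, an optional check like this queries the default device:

```python
# Optional check: pycuda.autoinit creates a CUDA context on the default GPU.
import pycuda.autoinit  # noqa: F401
import pycuda.driver as cuda

print(cuda.Device(0).name())
print("CUDA version:", cuda.get_version())
```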

Step 1. Convert the torch model to an ONNX model or a TensorRT engine; the output file is generated under deploy/. Note that the conversion mode has three options: [onnx, trt_32, trt_16].

```shell
python tools/converter.py --output-name deploy/airdet_s.onnx -f configs/airdet_s.py -c airdet_s.pth --batch_size 1 --img_size 640 --mode trt_32
```
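
If you exported an ONNX model (--mode onnx), a minimal onnxruntime check like the following verifies that the file loads and runs; this is a sketch, assuming the 1x3x640x640 input shape from the converter flags above (the output layout depends on the AIRDet head):

```python
# Minimal ONNX sanity check with a random input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("deploy/airdet_s.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
for out in outputs:
    print(out.shape)
```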

Step 2. Evaluate the TensorRT engine and measure its inference speed; specify the engine with --trt.

```shell
python -m torch.distributed.launch --nproc_per_node=1 tools/trt_eval.py -f configs/airdet_s.py --trt deploy/airdet_s_32.trt --batch_size 1 --img_size 640
```

Step 3. Run the TensorRT inference demo; specify the test image with -p.

```shell
python tools/trt_inference.py -f configs/airdet_s.py -t deploy/airdet_s_32.trt -p assets/dog.jpg --img_size 640 --nms 0.7
```
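
As a final optional check, you can verify that the engine file deserializes correctly; this sketch assumes the TensorRT 7 Python API installed above:

```python
# Verify that the serialized TensorRT engine can be deserialized.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("deploy/airdet_s_32.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print("bindings:", engine.num_bindings)
```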