Commit: Bump version to v2.22.0
ZwwWayne authored Feb 26, 2022
2 parents e359d3f + 7b2b7fe commit 52a3276
Showing 124 changed files with 6,706 additions and 369 deletions.
1 change: 1 addition & 0 deletions .dev_scripts/benchmark_inference_fps.py
@@ -7,6 +7,7 @@
from mmcv import Config, DictAction
from mmcv.runner import init_dist
from terminaltables import GithubFlavoredMarkdownTable

from tools.analysis_tools.benchmark import repeat_measure_inference_speed


4 changes: 3 additions & 1 deletion .dev_scripts/gather_models.py
@@ -98,7 +98,9 @@ def get_dataset_name(config):
LVISV05Dataset='LVIS v0.5',
LVISV1Dataset='LVIS v1',
VOCDataset='Pascal VOC',
WIDERFaceDataset='WIDER Face')
WIDERFaceDataset='WIDER Face',
OpenImagesDataset='OpenImagesDataset',
OpenImagesChallengeDataset='OpenImagesChallengeDataset')
cfg = mmcv.Config.fromfile('./configs/' + config)
return name_map[cfg.dataset_type]

78 changes: 66 additions & 12 deletions .github/workflows/build.yml
@@ -33,23 +33,26 @@ jobs:
strategy:
matrix:
python-version: [3.7]
torch: [1.5.1, 1.6.0, 1.7.0, 1.8.0, 1.9.0]
torch: [1.5.1, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.1]
include:
- torch: 1.5.1
torchvision: 0.6.1
mmcv: 1.5.0
mmcv: 1.5
- torch: 1.6.0
torchvision: 0.7.0
mmcv: 1.6.0
mmcv: 1.6
- torch: 1.7.0
torchvision: 0.8.1
mmcv: 1.7.0
mmcv: 1.7
- torch: 1.8.0
torchvision: 0.9.0
mmcv: 1.8.0
mmcv: 1.8
- torch: 1.9.0
torchvision: 0.10.0
mmcv: 1.9.0
mmcv: 1.9
- torch: 1.10.1
torchvision: 0.11.2
mmcv: 1.10
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
@@ -91,19 +94,19 @@ jobs:
- torch: 1.5.1+cu101
torch_version: torch1.5.1
torchvision: 0.6.1+cu101
mmcv: 1.5.0
mmcv: 1.5
- torch: 1.6.0+cu101
torch_version: torch1.6.0
torchvision: 0.7.0+cu101
mmcv: 1.6.0
mmcv: 1.6
- torch: 1.7.0+cu101
torch_version: torch1.7.0
torchvision: 0.8.1+cu101
mmcv: 1.7.0
mmcv: 1.7
- torch: 1.8.0+cu101
torch_version: torch1.8.0
torchvision: 0.9.0+cu101
mmcv: 1.8.0
mmcv: 1.8

steps:
- uses: actions/checkout@v2
@@ -160,12 +163,16 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8, 3.9]
torch: [1.9.0+cu102]
torch: [1.9.0+cu102, 1.10.1+cu102]
include:
- torch: 1.9.0+cu102
torch_version: torch1.9.0
torchvision: 0.10.0+cu102
mmcv: 1.9.0
mmcv: 1.9
- torch: 1.10.1+cu102
torch_version: torch1.10.1
torchvision: 0.11.2+cu102
mmcv: 1.10

steps:
- uses: actions/checkout@v2
@@ -224,3 +231,50 @@ jobs:
env_vars: OS,PYTHON
name: codecov-umbrella
fail_ci_if_error: false

build_windows:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [windows-2022]
python: [3.8]
platform: [cpu, cu111]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python }}
- name: Upgrade pip
run: pip install pip --upgrade --user
- name: Install PyTorch
# As a complement to the Linux CI, we test on the PyTorch LTS version
run: pip install torch==1.8.2+${{ matrix.platform }} torchvision==0.9.2+${{ matrix.platform }} -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
- name: Install MMCV
run: pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.8/index.html --only-binary mmcv-full
- name: Install unittest dependencies
run: |
python -V
python -m pip install pycocotools
python -m pip install -r requirements/tests.txt -r requirements/optional.txt
python -m pip install "albumentations>=0.3.2" --no-binary imgaug,albumentations
python -m pip install git+https://github.com/cocodataset/panopticapi.git
python -c 'import mmcv; print(mmcv.__version__)'
- name: Show pip list
run: pip list
- name: Build and install
run: pip install -e .
- name: Run unittests
run: coverage run --branch --source mmdet -m pytest tests -sv
- name: Generate coverage report
run: |
coverage xml
coverage report -m
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
with:
file: ./coverage.xml
flags: unittests
env_vars: OS,PYTHON
name: codecov-umbrella
fail_ci_if_error: false
14 changes: 14 additions & 0 deletions .owners.yml
@@ -0,0 +1,14 @@
assign:
strategy:
# random
daily-shift-based
  schedule:
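    # cron expression; '*/1 * * * *' fires every minute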
'*/1 * * * *'
assignees:
- Czm369
- hhaAndroid
- jbwang1997
- RangiLyu
- BIGWangYuDong
- chhluo
- ZwwWayne
8 changes: 2 additions & 6 deletions .pre-commit-config.yaml
@@ -3,12 +3,8 @@ repos:
rev: 3.8.3
hooks:
- id: flake8
- repo: https://github.com/asottile/seed-isort-config
rev: v2.2.0
hooks:
- id: seed-isort-config
- repo: https://github.com/timothycrosley/isort
rev: 4.3.21
- repo: https://github.com/PyCQA/isort
rev: 5.10.1
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
13 changes: 9 additions & 4 deletions README.md
@@ -74,10 +74,11 @@ This project is released under the [Apache 2.0 license](LICENSE).

## Changelog

**2.21.0** was released in 8/2/2022:
**2.22.0** was released on 24/2/2022:

- Support CPU training
- Allow setting multi-processing parameters to speed up training and testing
- Support [MaskFormer](configs/maskformer), [DyHead](configs/dyhead), [OpenImages Dataset](configs/openimages) and [TIMM backbone](configs/timm_example) (see the backbone sketch below)
- Support visualization for Panoptic Segmentation
- Release a recipe for using ResNet backbones pre-trained following [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) in object detectors, which consistently brings about 3-4 mAP improvements over RetinaNet and Faster/Mask/Cascade Mask R-CNN
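For illustration of the new TIMM support, a backbone swap could look like the sketch below. This is a minimal sketch, assuming MMClassification's `TIMMBackbone` wrapper is available; the model name, channel widths, and indices echo the [timm_example](configs/timm_example) configs but should be treated as assumptions, not a verified recipe.

```python
# Hedged sketch: using a TIMM model as the detector backbone.
# Assumes mmcls is installed and registers the TIMMBackbone wrapper;
# model_name and out_indices are illustrative choices.
custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)
model = dict(
    backbone=dict(
        _delete_=True,  # drop the backbone inherited from the _base_ config
        type='mmcls.TIMMBackbone',
        model_name='efficientnet_b1',
        features_only=True,
        pretrained=True,
        out_indices=(1, 2, 3, 4)),
    neck=dict(in_channels=[24, 40, 112, 320]))  # EfficientNet-B1 stage widths
```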

Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

@@ -162,6 +163,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<td>
<ul>
<li><a href="configs/panoptic_fpn">Panoptic FPN (CVPR'2019)</a></li>
<li><a href="configs/maskformer">MaskFormer (NeurIPS'2019)</a></li>
</ul>
</td>
<td>
@@ -225,6 +227,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/pvt">PVT (ICCV'2021)</a></li>
<li><a href="configs/swin">Swin (CVPR'2021)</a></li>
<li><a href="configs/pvt">PVTv2 (ArXiv'2021)</a></li>
<li><a href="configs/resnet_strikes_back">ResNet strikes back (ArXiv'2021)</a></li>
</ul>
</td>
<td>
@@ -234,6 +237,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/carafe">CARAFE (ICCV'2019)</a></li>
<li><a href="configs/fpg">FPG (ArXiv'2020)</a></li>
<li><a href="configs/groie">GRoIE (ICPR'2020)</a></li>
<li><a href="configs/dyhead">DyHead (CVPR'2021)</a></li>
</ul>
</td>
<td>
@@ -252,6 +256,7 @@ Results and models are available in the [model zoo](docs/en/model_zoo.md).
<li><a href="configs/gn+ws">Weight Standardization (ArXiv'2019)</a></li>
<li><a href="configs/pisa">Prime Sample Attention (CVPR'2020)</a></li>
<li><a href="configs/strong_baselines">Strong Baselines (CVPR'2021)</a></li>
<li><a href="configs/resnet_strikes_back">Resnet strikes back (ArXiv'2021)</a></li>
</ul>
</td>
</tr>
@@ -321,4 +326,4 @@ If you use this toolbox or benchmark in your research, please cite this project.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab Model Deployment Framework.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
10 changes: 7 additions & 3 deletions README_zh-CN.md
@@ -73,10 +73,11 @@ MMDetection is an open source object detection toolbox based on PyTorch. It is [Ope

## Changelog

The latest version **2.21.0** was released on 2022.02.08:
The latest version **2.22.0** was released on 2022.02.24:

- Support CPU training
- Allow setting multi-processing parameters to speed up training and testing
- Support [MaskFormer](configs/maskformer), [DyHead](configs/dyhead), [OpenImages Dataset](configs/openimages) and [TIMM backbone](configs/timm_example)
- Support visualization for panoptic segmentation
- Release a recipe for using ResNet backbones pre-trained following [ResNet Strikes Back](https://arxiv.org/abs/2110.00476) in object detectors, which consistently brings about 3-4 mAP improvements over RetinaNet and Faster/Mask/Cascade Mask R-CNN

Please read the [changelog](docs/changelog.md) for more details and release history.

@@ -224,6 +225,7 @@ MMDetection is an open source object detection toolbox based on PyTorch. It is [Ope
<li><a href="configs/pvt">PVT (ICCV'2021)</a></li>
<li><a href="configs/swin">Swin (CVPR'2021)</a></li>
<li><a href="configs/pvt">PVTv2 (ArXiv'2021)</a></li>
<li><a href="configs/resnet_strikes_back">ResNet strikes back (ArXiv'2021)</a></li>
</ul>
</td>
<td>
@@ -233,6 +235,7 @@ MMDetection is an open source object detection toolbox based on PyTorch. It is [Ope
<li><a href="configs/carafe">CARAFE (ICCV'2019)</a></li>
<li><a href="configs/fpg">FPG (ArXiv'2020)</a></li>
<li><a href="configs/groie">GRoIE (ICPR'2020)</a></li>
<li><a href="configs/dyhead">DyHead (CVPR'2021)</a></li>
</ul>
</td>
<td>
@@ -251,6 +254,7 @@ MMDetection is an open source object detection toolbox based on PyTorch. It is [Ope
<li><a href="configs/gn+ws">Weight Standardization (ArXiv'2019)</a></li>
<li><a href="configs/pisa">Prime Sample Attention (CVPR'2020)</a></li>
<li><a href="configs/strong_baselines">Strong Baselines (CVPR'2021)</a></li>
<li><a href="configs/resnet_strikes_back">Resnet strikes back (ArXiv'2021)</a></li>
</ul>
</td>
</tr>
65 changes: 65 additions & 0 deletions configs/_base_/datasets/openimages_detection.py
@@ -0,0 +1,65 @@
# dataset settings
dataset_type = 'OpenImagesDataset'
data_root = 'data/OpenImages/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
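    # denorm_bbox=True converts the normalized [0, 1] box coordinates in the
    # OpenImages CSV annotations to absolute pixel coordinates when loading.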
dict(type='LoadAnnotations', with_bbox=True, denorm_bbox=True),
dict(type='Resize', img_scale=(1024, 800), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1024, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
],
),
]
data = dict(
samples_per_gpu=2,
    workers_per_gpu=0,  # workers_per_gpu > 0 may cause out-of-memory errors
train=dict(
type=dataset_type,
ann_file=data_root + 'annotations/oidv6-train-annotations-bbox.csv',
img_prefix=data_root + 'OpenImages/train/',
label_file=data_root + 'annotations/class-descriptions-boxable.csv',
hierarchy_file=data_root +
'annotations/bbox_labels_600_hierarchy.json',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
ann_file=data_root + 'annotations/validation-annotations-bbox.csv',
img_prefix=data_root + 'OpenImages/validation/',
label_file=data_root + 'annotations/class-descriptions-boxable.csv',
hierarchy_file=data_root +
'annotations/bbox_labels_600_hierarchy.json',
meta_file=data_root + 'annotations/validation-image-metas.pkl',
image_level_ann_file=data_root +
'annotations/validation-annotations-human-imagelabels-boxable.csv',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'annotations/validation-annotations-bbox.csv',
img_prefix=data_root + 'OpenImages/validation/',
label_file=data_root + 'annotations/class-descriptions-boxable.csv',
hierarchy_file=data_root +
'annotations/bbox_labels_600_hierarchy.json',
meta_file=data_root + 'annotations/validation-image-metas.pkl',
image_level_ann_file=data_root +
'annotations/validation-annotations-human-imagelabels-boxable.csv',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric='mAP')
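A derived detector config would consume this base file through `_base_` inheritance. The following is a hypothetical sketch: only `openimages_detection.py` above is defined in this commit, while the other base file names and the class count follow the usual MMDetection `_base_` layout and are assumptions.

```python
# Hypothetical top-level config built on the dataset base above.
_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',  # assumed model base
    '../_base_/datasets/openimages_detection.py',
    '../_base_/schedules/schedule_1x.py',  # assumed schedule base
    '../_base_/default_runtime.py'
]
# OpenImages V6 has 601 boxable classes (assumed here for illustration).
model = dict(roi_head=dict(bbox_head=dict(num_classes=601)))
```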
46 changes: 46 additions & 0 deletions configs/dyhead/README.md
@@ -0,0 +1,46 @@
# DyHead

> [Dynamic Head: Unifying Object Detection Heads with Attentions](https://arxiv.org/abs/2106.08322)
<!-- [ALGORITHM] -->

## Abstract

The complex nature of combining localization and classification in object detection has resulted in the flourishing development of methods. Previous works tried to improve the performance in various object detection heads but failed to present a unified view. In this paper, we present a novel dynamic head framework to unify object detection heads with attentions. By coherently combining multiple self-attention mechanisms between feature levels for scale-awareness, among spatial locations for spatial-awareness, and within output channels for task-awareness, the proposed approach significantly improves the representation ability of object detection heads without any computational overhead. Further experiments demonstrate the effectiveness and efficiency of the proposed dynamic head on the COCO benchmark. With a standard ResNeXt-101-DCN backbone, we largely improve the performance over popular object detectors and achieve a new state-of-the-art at 54.0 AP. Furthermore, with the latest transformer backbone and extra data, we can push the current best COCO result to a new record at 60.6 AP.

<div align=center>
<img src="https://user-images.githubusercontent.com/42844407/149169448-fcafb6d0-b866-41cc-9422-94de9f1e1761.png" height="300"/>
</div>

## Results and Models

| Method | Backbone | Style | Setting | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
|:------:|:--------:|:-------:|:------------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
| ATSS | R-50 | caffe | reproduction | 1x | 5.4 | 13.2 | 42.5 | [config](./atss_r50_caffe_fpn_dyhead_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/dyhead/atss_r50_fpn_dyhead_for_reproduction_1x_coco/atss_r50_fpn_dyhead_for_reproduction_4x4_1x_coco_20220107_213939-162888e6.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/dyhead/atss_r50_fpn_dyhead_for_reproduction_1x_coco/atss_r50_fpn_dyhead_for_reproduction_4x4_1x_coco_20220107_213939.log.json) |
| ATSS | R-50 | pytorch | simple | 1x | 4.9 | 13.7 | 43.3 | [config](./atss_r50_fpn_dyhead_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/dyhead/atss_r50_fpn_dyhead_4x4_1x_coco/atss_r50_fpn_dyhead_4x4_1x_coco_20211219_023314-eaa620c6.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/dyhead/atss_r50_fpn_dyhead_4x4_1x_coco/atss_r50_fpn_dyhead_4x4_1x_coco_20211219_023314.log.json) |

- We trained the above models with 4 GPUs and 4 `samples_per_gpu`.
- The `reproduction` setting aims to reproduce the official implementation based on Detectron2.
- The `simple` setting serves as a minimal example of using DyHead in MMDetection (see the config sketch below). Specifically,
  - it adds `DyHead` to `neck` after `FPN`
  - it sets `stacked_convs=0` in `bbox_head`
- The `simple` setting achieves higher AP than the original implementation.
  We have not conducted an ablation study between the two settings.
  `dict(type='Pad', size_divisor=128)` may further improve AP by preferring spatial alignment across pyramid levels, although large padding reduces efficiency.
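The `simple` setting can be summarized as a short config. This is a minimal sketch based on the bullets above, assuming a ResNet-50 + FPN ATSS detector; the channel widths and `num_blocks` value are common defaults, not the exact released config.

```python
# Minimal sketch of the `simple` DyHead setting described above.
# Channel widths and num_blocks are illustrative assumptions.
model = dict(
    neck=[
        dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],  # ResNet-50 stage outputs
            out_channels=256,
            start_level=1,
            add_extra_convs='on_output',
            num_outs=5),
        dict(
            type='DyHead',  # stacked scale/spatial/task attention blocks
            in_channels=256,
            out_channels=256,
            num_blocks=6)
    ],
    bbox_head=dict(stacked_convs=0))  # DyHead replaces the head conv stack
```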

## Relation to Other Methods

- DyHead can be regarded as an improved [SEPC](https://arxiv.org/abs/2005.03101) with [DyReLU modules](https://arxiv.org/abs/2003.10027) and simplified [SE blocks](https://arxiv.org/abs/1709.01507).
- Xiyang Dai et al., the author team of DyHead, adopted it in [Dynamic DETR](https://openaccess.thecvf.com/content/ICCV2021/html/Dai_Dynamic_DETR_End-to-End_Object_Detection_With_Dynamic_Attention_ICCV_2021_paper.html).
  The description of the Dynamic Encoder in Sec. 3.2 of that paper also helps in understanding DyHead.

## Citation

```latex
@inproceedings{DyHead_CVPR2021,
author = {Dai, Xiyang and Chen, Yinpeng and Xiao, Bin and Chen, Dongdong and Liu, Mengchen and Yuan, Lu and Zhang, Lei},
title = {Dynamic Head: Unifying Object Detection Heads With Attentions},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
```