
Commit b749e1f ("update")
jameslahm committed Sep 28, 2023 · 1 parent 8ef1a71
Showing 43 changed files with 9,317 additions and 3,615 deletions.

README.md (25 changes: 14 additions & 11 deletions)
Ao Wang, Hui Chen, Zijia Lin, Hengjun Pu, and Guiguang Ding\
<summary>
<font size="+1">Abstract</font>
</summary>
Recently, lightweight Vision Transformers (ViTs) have demonstrated superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs) on resource-constrained mobile devices. This improvement is usually attributed to the multi-head self-attention module, which enables the model to learn global representations. However, the architectural disparities between lightweight ViTs and lightweight CNNs have not been adequately examined. In this study, we revisit the efficient design of lightweight CNNs and emphasize their potential for mobile devices. We incrementally enhance the mobile-friendliness of a standard lightweight CNN, specifically MobileNetV3, by integrating the efficient architectural choices of lightweight ViTs. This yields a new family of pure lightweight CNNs, namely RepViT. Extensive experiments show that RepViT outperforms existing state-of-the-art lightweight ViTs and exhibits favorable latency in various vision tasks. On ImageNet, RepViT achieves over 80\% top-1 accuracy with 1ms latency on an iPhone 12, which, to the best of our knowledge, is a first for a lightweight model. Our largest model, RepViT-M2.3, obtains 83.7\% accuracy with only 2.3ms latency.
</details>

<br/>

### Models

| Model | Top-1 (300 / 450 epochs) | #params | MACs | Latency | Ckpt | Core ML | Log |
|:---------------|:----:|:---:|:--:|:--:|:--:|:--:|:--:|
| RepViT-M0.9 | 78.7 / 79.1 | 5.1M | 0.8G | 0.9ms | [M0.9-300e]() / [M0.9-450e]() | [M0.9-300e]() / [M0.9-450e]() | [M0.9-300e](./logs/repvit_m0_9_distill_300e.txt) / [M0.9-450e](./logs/repvit_m0_9_distill_450e.txt) |
| RepViT-M1.0 | 80.0 / 80.3 | 6.8M | 1.1G | 1.0ms | [M1.0-300e]() / [M1.0-450e]() | [M1.0-300e]() / [M1.0-450e]() | [M1.0-300e](./logs/repvit_m1_0_distill_300e.txt) / [M1.0-450e](./logs/repvit_m1_0_distill_450e.txt) |
| RepViT-M1.1 | 80.7 / 81.1 | 8.2M | 1.3G | 1.1ms | [M1.1-300e]() / [M1.1-450e]() | [M1.1-300e]() / [M1.1-450e]() | [M1.1-300e](./logs/repvit_m1_1_distill_300e.txt) / [M1.1-450e](./logs/repvit_m1_1_distill_450e.txt) |
| RepViT-M1.5 | 82.3 / 82.5 | 14.0M | 2.3G | 1.5ms | [M1.5-300e]() / [M1.5-450e]() | [M1.5-300e]() / [M1.5-450e]() | [M1.5-300e](./logs/repvit_m1_5_distill_300e.txt) / [M1.5-450e](./logs/repvit_m1_5_distill_450e.txt) |
| RepViT-M2.3 | 83.3 / 83.7 | 22.9M | 4.5G | 2.3ms | [M2.3-300e]() / [M2.3-450e]() | [M2.3-300e]() / [M2.3-450e]() | [M2.3-300e](./logs/repvit_m2_3_distill_300e.txt) / [M2.3-450e](./logs/repvit_m2_3_distill_450e.txt) |


Tips: convert a training-time RepViT into the inference-time structure
```
from timm.models import create_model
import utils
model = create_model('repvit_m0_9')
utils.replace_batchnorm(model)
```
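
For reference, a BatchNorm that follows a convolution is just an affine transform at inference time, so it can be folded into the convolution's weights; this is broadly what `replace_batchnorm` does as it walks the model. A minimal sketch of the fusion arithmetic (our illustration, not the repo's exact code):
```
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # y = gamma * (conv(x) - mean) / std + beta  ==>  one conv with rescaled weights
    std = (bn.running_var + bn.eps).sqrt()
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups, bias=True)
    fused.weight.data = conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1)
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = bn.bias + (bias - bn.running_mean) * bn.weight / std
    return fused
```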

The latency reported in RepViT for iPhone 12 (iOS 16) uses the benchmark tool from [Xcode 14](https://developer.apple.com/videos/play/wwdc2022/10027/).
For example, here is a latency measurement of RepViT-M0.9:

![](./figures/repvit_m0_9_latency.png)

Tips: export the model to Core ML
```
python export_coreml.py --model repvit_m0_9 --ckpt pretrain/repvit_m0_9_distill_300e.pth
```
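
Under the hood, an export like this typically traces the fused model and converts it with coremltools; the following sketch shows the general shape (the input resolution, checkpoint handling, and save path here are assumptions, not the script's exact flags):
```
import torch
import coremltools as ct
from timm.models import create_model
import utils

model = create_model('repvit_m0_9')   # the real script also loads the --ckpt weights here
utils.replace_batchnorm(model)        # fuse BN first so the exported graph is the inference-time structure
model.eval()

example = torch.rand(1, 3, 224, 224)  # assumed 224x224 input
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
mlmodel.save('repvit_m0_9_224.mlmodel')
```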
Tips: measure the throughput on GPU
```
python speed_gpu.py --model repvit_m0_9
```
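
GPU throughput is typically measured by timing batched forward passes with explicit CUDA synchronization, so that queued kernels are not miscounted. A rough sketch of such a loop (the batch size and iteration counts below are our choices, not necessarily those of `speed_gpu.py`):
```
import time
import torch
from timm.models import create_model

model = create_model('repvit_m0_9').cuda().eval()
x = torch.randn(256, 3, 224, 224, device='cuda')  # assumed batch size of 256

with torch.no_grad():
    for _ in range(10):       # warm-up: let cuDNN pick algorithms, fill caches
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(30):
        model(x)
    torch.cuda.synchronize()  # wait for all kernels before stopping the clock

print(f'throughput: {30 * 256 / (time.time() - start):.1f} images/s')
```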


Download and extract ImageNet train and val images from http://image-net.org/.
To train RepViT-M0.9 on an 8-GPU machine:

```
python -m torch.distributed.launch --nproc_per_node=8 --master_port 12346 --use_env main.py --model repvit_m0_9 --data-path ~/imagenet --dist-eval
```
Tips: specify your data path and model name!
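
Note that `torch.distributed.launch` is deprecated in recent PyTorch releases; if you see a deprecation warning, an equivalent `torchrun` invocation (our translation of the command above; `--use_env` is implied by torchrun) is:
```
torchrun --nproc_per_node=8 --master_port 12346 main.py --model repvit_m0_9 --data-path ~/imagenet --dist-eval
```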

### Testing
For example, to test RepViT-M0.9:
```
python main.py --eval --model repvit_m0_9 --resume pretrain/repvit_m0_9_distill_300e.pth --data-path ~/imagenet
```

## Downstream Tasks

detection/README.md (9 changes: 5 additions & 4 deletions)
Detection and instance segmentation on MS COCO 2017 is implemented based on [MMDetection](https://github.com/open-mmlab/mmdetection).
## Models
| Model | $AP^b$ | $AP_{50}^b$ | $AP_{75}^b$ | $AP^m$ | $AP_{50}^m$ | $AP_{75}^m$ | Latency | Ckpt | Log |
|:---------------|:----:|:---:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| RepViT-M1_1 | 39.8 | 61.9 | 43.5 | 37.2 | 58.8 | 40.1 | 4.9ms | [M1_1]() | [M1_1](./logs/repvit_m1_1_coco.json) |
| RepViT-M1_5 | 41.6 | 63.2 | 45.3 | 38.6 | 60.5 | 41.5 | 6.4ms | [M1_5]() | [M1_5](./logs/repvit_m1_5_coco.json) |
| RepViT-M2_3 | 44.6 | 66.1 | 48.8 | 40.8 | 63.6 | 43.9 | 9.9ms | [M2_3]() | [M2_3](./logs/repvit_m2_3_coco.json) |

## Installation

We provide a multi-GPU testing script; specify the config file, checkpoint, and number of GPUs to use.
For example, to test RepViT-M1.1 on COCO 2017 on an 8-GPU machine:

```
./dist_test.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py path/to/repvit_m1_1_coco.pth 8 --eval bbox segm
```

## Training
Download ImageNet-1K pretrained weights into `./pretrain`.

We provide a PyTorch distributed data parallel (DDP) training script, `dist_train.sh`. For example, to train RepViT-M1.1 on an 8-GPU machine:
```
./dist_train.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py 8
```
Tips: specify configs and #GPUs!

detection/configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py
# model
model = dict(
    backbone=dict(
        type='repvit_m1_1',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='pretrain/repvit_m1_1_distill_300e.pth',
        ),
        out_indices=[2, 6, 20, 24]
    ),

detection/configs/mask_rcnn_repvit_m1_5_fpn_1x_coco.py
# model
model = dict(
    backbone=dict(
        type='repvit_m1_5',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='pretrain/repvit_m1_5_distill_300e.pth',
        ),
        out_indices=[4, 10, 36, 42]
    ),
    neck=dict(
        type='FPN',

detection/configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py (24 additions, new file)
_base_ = [
    '_base_/models/mask_rcnn_r50_fpn.py',
    '_base_/datasets/coco_instance.py',
    '_base_/schedules/schedule_1x.py',
    '_base_/default_runtime.py'
]
# model
model = dict(
    backbone=dict(
        type='repvit_m2_3',
        init_cfg=dict(
            type='Pretrained',
            checkpoint='pretrain/repvit_m2_3_distill_450e.pth',
        ),
        out_indices=[6, 14, 50, 54]
    ),
    neck=dict(
        type='FPN',
        in_channels=[80, 160, 320, 640],
        out_channels=256,
        num_outs=5))
# optimizer
optimizer = dict(_delete_=True, type='AdamW', lr=0.0002, weight_decay=0.05) # 0.0001
optimizer_config = dict(grad_clip=None)
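
One quick way to sanity-check a config like this (assuming MMDetection 2.x, which the detection code builds on, and that the RepViT backbone has been registered by importing the repo's detection module) is to build the detector directly from it:
```
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/mask_rcnn_repvit_m2_3_fpn_1x_coco.py')
model = build_detector(cfg.model)  # instantiates the backbone, FPN neck, and Mask R-CNN heads from the dicts above
print(model)
```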

detection/eval.sh (1 addition & 1 deletion)
PORT=12345 ./dist_test.sh configs/mask_rcnn_repvit_m1_1_fpn_1x_coco.py det_pretrain/repvit_m1_1_coco.pth 8 --eval bbox segm