Marc Botet Colomer1,2* Pier Luigi Dovesi3*† Theodoros Panagiotakopoulos4 Joao Frederico Carvalho 1 Linus Härenstam-Nielsen5,6 Hossein Azizpour2 Hedvig Kjellström2,3 Daniel Cremers5,6,7 Matteo Poggi8
1 Univrses 2 KTH 3 Silo AI 4 King 5 Technical University of Munich 6 Munich Center of Machine Learning 7 University of Oxford 8 University of Bologna
* Joint first authorship. † Part of the work carried out while at Univrses.
📜 arxiv 💀 project page 📽️ video
If you find this repo useful for your work, please cite our paper:
@inproceedings{colomer2023toadapt,
title = {To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation},
author = {Botet Colomer, Marc and
Dovesi, Pier Luigi and
Panagiotakopoulos, Theodoros and
Carvalho, Joao Frederico and
H{\"a}renstam-Nielsen, Linus and
Azizpour, Hossein and
Kjellstr{\"o}m, Hedvig and
Cremers, Daniel and
Poggi, Matteo},
booktitle = {IEEE International Conference on Computer Vision},
note = {ICCV},
year = {2023}
}
For this project, we used Python 3.9.13. We recommend setting up a new virtual environment:
```shell
python -m venv ~/venv/hamlet
source ~/venv/hamlet/bin/activate
```
In that environment, the requirements can be installed with:
```shell
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.7  # requires the other packages to be installed first
```
All experiments were executed on an NVIDIA RTX 3090.
Cityscapes: Please download leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip from here and extract them to /data/datasets/cityscapes.
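As a sanity check after extracting, the expected folder layout can be verified with a short script. This is a hedged sketch: the root path and split names follow the standard Cityscapes/MMSegmentation convention, and `check_cityscapes` is an illustrative helper, not part of this repository.

```python
from pathlib import Path

# Expected Cityscapes layout (standard convention; paths are assumptions):
#   <root>/leftImg8bit/{train,val}/<city>/*_leftImg8bit.png
#   <root>/gtFine/{train,val}/<city>/*_gtFine_labelIds.png

def check_cityscapes(root):
    """Return True if the basic Cityscapes folder structure is present."""
    root = Path(root)
    for sub in ("leftImg8bit", "gtFine"):
        for split in ("train", "val"):
            if not (root / sub / split).is_dir():
                return False
    return True

if __name__ == "__main__":
    print(check_cityscapes("/data/datasets/cityscapes"))
```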
Rainy Cityscapes: Please follow the steps as shown here: https://team.inria.fr/rits/computer-vision/weather-augment/
If you have troubles creating the rainy dataset, please contact us in domain-adaptation-group@googlegroups.com to obtain the Rainy Cityscapes dataset
We refer to MMSegmentation for further instructions about the dataset structure.
Prepare the source dataset:
```shell
python tools/convert_datasets/cityscapes.py /data/datasets/Cityscapes --out-dir data/Cityscapes --nproc 8
```
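The conversion step generates `*_labelTrainIds.png` annotations by remapping the 34 raw Cityscapes label IDs to the 19 training classes. A minimal sketch of that remapping (the ID table follows the official cityscapesscripts label definitions; the helper function itself is illustrative):

```python
# Cityscapes raw label ID -> train ID for the 19 evaluation classes
# (per the official cityscapesscripts label definitions); every other ID
# is mapped to the ignore index 255.
ID_TO_TRAINID = {
    7: 0, 8: 1, 11: 2, 12: 3, 13: 4, 17: 5, 19: 6, 20: 7, 21: 8,
    22: 9, 23: 10, 24: 11, 25: 12, 26: 13, 27: 14, 28: 15, 31: 16,
    32: 17, 33: 18,
}
IGNORE_INDEX = 255

def to_train_ids(label_ids):
    """Remap a flat sequence of raw label IDs to train IDs."""
    return [ID_TO_TRAINID.get(i, IGNORE_INDEX) for i in label_ids]
```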
For convenience, the provided configuration can be run by selecting experiment -1. If wandb is configured, logging can be activated by setting the wandb argument to 1:
```shell
python run_experiments.py --exp -1 --wandb 1
```
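For reference, a minimal sketch of how such a command line could be parsed. The actual argument handling lives in run_experiments.py; the flag names mirror the command above, but this implementation is an assumption:

```python
import argparse

def build_parser():
    # Mirrors the flags used above; the real parser in run_experiments.py
    # may define more options and different defaults.
    parser = argparse.ArgumentParser(description="Run a Hamlet experiment")
    parser.add_argument("--exp", type=int, required=True,
                        help="experiment id (-1 runs the provided configuration)")
    parser.add_argument("--wandb", type=int, default=0, choices=(0, 1),
                        help="set to 1 to enable wandb logging")
    return parser

args = build_parser().parse_args(["--exp", "-1", "--wandb", "1"])
```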
All assets to run a training can be found here.
Make sure to place the pretrained model mitb1_uda.pth in pretrained/.
We provide a config.py file that can be easily modified to run multiple experiments by changing parameters. Make sure to place the random modules in random_modules/.
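When sweeping parameters, one common pattern is to generate one experiment per combination of values on top of a base configuration. A sketch under the assumption that config.py exposes plain key-value parameters; the keys below are illustrative, not the repository's actual names:

```python
import copy
import itertools

# Illustrative base configuration; the real parameters live in config.py.
BASE_CFG = {"lr": 6e-5, "batch_size": 2, "buffer_size": 1000}

def expand_grid(base, grid):
    """Yield one config dict per combination of the values in `grid`."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = copy.deepcopy(base)
        cfg.update(dict(zip(keys, values)))
        yield cfg

configs = list(expand_grid(BASE_CFG, {"lr": [6e-5, 1e-4], "buffer_size": [500, 1000]}))
```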
This code is based on MMSegmentation project. The most relevant files are:
- online_src/domain_indicator_orchestrator.py: Implementation of the Adaptive Domain Detection.
- online_src/online_runner.py: Runner for Hamlet.
- online_src/buffer.py: Buffer sampling methods.
- mmseg/models/segmentors/modular_encoder_decoder.py: Implementation of HAMT.
- mmseg/models/decode_heads/incremental_decode_head.py: Handles the lightweight decoder.
- mmseg/models/decode_heads/segformer_head.py: Implementation of the lightweight decoder on SegFormer.
- mmseg/models/backbones/mix_transformer.py: Implementation of modular freezing for HAMT.
- mmseg/models/uda/dacs.py: UDA method using Hamlet strategies.
- mmseg/core/evaluation/eval_hooks.py: Evaluation methods.
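To illustrate the buffer component, here is a minimal fixed-capacity replay buffer with uniform random sampling. This is a sketch only: the actual sampling strategies in online_src/buffer.py may differ, and `ReplayBuffer` is a hypothetical class name.

```python
import random

class ReplayBuffer:
    """Fixed-capacity buffer with uniform random sampling (illustrative)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.rng = random.Random(seed)

    def add(self, item):
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Overwrite a random slot once full, keeping the size fixed.
            self.items[self.rng.randrange(self.capacity)] = item

    def sample(self, k):
        """Draw up to k items uniformly at random without replacement."""
        return self.rng.sample(self.items, min(k, len(self.items)))
```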
This project is based on the following open-source projects: