By Zijun Deng, Lei Zhu, Xiaowei Hu, Chi-Wing Fu, Xuemiao Xu, Qing Zhang, Jing Qin, and Pheng-Ann Heng.
This repo is the implementation of "Deep Multi-Model Fusion for Single-Image Dehazing" (ICCV 2019), written by Zijun Deng at the South China University of Technology.
The dehazing results can be found at Google Drive.
Make sure you have `Python>=3.7` installed on your machine.
Environment setup:
- Create and activate the conda environment:

  ```
  conda create -n dm2f
  conda activate dm2f
  ```
- Install dependencies (tested with PyTorch 1.8.0):

  - Install pytorch==1.8.0 and torchvision==0.9.0 (via conda, recommended).
  - Install the other dependencies:

    ```
    pip install -r requirements.txt
    ```

  A quick sanity check of the finished environment is sketched after this list.
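As that sanity check, the snippet below (illustrative, not part of the repo) verifies that the expected PyTorch and torchvision versions are active in the `dm2f` environment and that a GPU is visible:

```python
# Environment sanity check (illustrative; not part of the repo).
import torch
import torchvision

print(torch.__version__)          # expect 1.8.0
print(torchvision.__version__)    # expect 0.9.0
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```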
Prepare the dataset:

- Download the RESIDE dataset from the official webpage.
- Download the O-Haze dataset from the official webpage.
- Make a directory `./data` and create a symbolic link for the uncompressed data, e.g., `./data/RESIDE` (see the sketch below).
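A minimal sketch of that last step, assuming the RESIDE data was uncompressed to `/path/to/RESIDE` (a placeholder you should adjust):

```python
# Illustrative sketch: link the uncompressed data into ./data.
# '/path/to/RESIDE' is a placeholder for your actual download location.
import os

os.makedirs('data', exist_ok=True)
if not os.path.exists('data/RESIDE'):
    os.symlink('/path/to/RESIDE', 'data/RESIDE')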
Training:

- Set the path of the pretrained ResNeXt model in resnext/config.py.
- Set the path of the datasets in tools/config.py (a hypothetical sketch of such a config follows this list).
- Run by `python train.py`.
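The actual variable names in tools/config.py are repo-specific; purely as a hypothetical illustration, dataset paths in a config file like this are usually plain module-level constants:

```python
# Hypothetical sketch of tools/config.py; the real variable names in
# the repo may differ. All paths below are placeholders.
RESIDE_ROOT = './data/RESIDE'  # hypothetical name for the RESIDE root
OHAZE_ROOT = './data/O-Haze'   # hypothetical name for the O-Haze root
```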
The pretrained ResNeXt model is ported from the official Torch version, using the convertor provided by clcarwin. You can directly download the pretrained model ported by me. Alternatively, use the pretrained ResNeXt (resnext101_32x8d) from torchvision.
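If you take the torchvision route, the corresponding constructor exists in torchvision 0.9.0; the snippet below only fetches the ImageNet-pretrained backbone, since how it is wired into DM2F-Net is defined by the repo itself:

```python
# Download ImageNet-pretrained ResNeXt-101 (32x8d) weights via torchvision.
# This only obtains the backbone; see the repo for how it is used.
import torchvision

backbone = torchvision.models.resnext101_32x8d(pretrained=True)
```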
Hyper-parameters for training are set at the top of train.py, and you can conveniently change them as needed. Training a model on a single GTX 1080Ti or TITAN RTX GPU takes about 4-5 hours.
Testing:

- Set the paths of the five benchmark datasets in tools/config.py.
- Put the trained model in `./ckpt/`.
- Run by `python test.py`.
Settings for testing are set at the top of test.py, and you can conveniently change them as needed.
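For reference, here is a minimal sketch of checking that a checkpoint under `./ckpt/` loads cleanly; the filename `dm2f.pth` is a placeholder, and test.py defines the actual loading logic:

```python
# Hypothetical sketch: verify a checkpoint in ./ckpt/ is readable.
# 'dm2f.pth' is a placeholder filename; see test.py for the real name.
import torch

state = torch.load('./ckpt/dm2f.pth', map_location='cpu')
print(f'{len(state)} tensors, e.g. {sorted(state)[:3]}')
```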
DM2F-Net is released under the MIT license.
If you find the paper or the code helpful to your research, please cite the project.
```bibtex
@inproceedings{deng2019deep,
  title={Deep multi-model fusion for single-image dehazing},
  author={Deng, Zijun and Zhu, Lei and Hu, Xiaowei and Fu, Chi-Wing and Xu, Xuemiao and Zhang, Qing and Qin, Jing and Heng, Pheng-Ann},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={2453--2462},
  year={2019}
}
```