A Deep Motion Deblurring Network based on Per-Pixel Adaptive Kernels with Residual Down-Up and Up-Down Modules

Updates

  • Oct. 6th, 2020
    • We now also provide our model's output images for the benchmark datasets GOPRO and REDS (NTIRE 2019). Please refer to the download links below.

Source code of the 3rd-place winner of the NTIRE 2019 Video Deblurring Challenge (CVPRW, 2019): "A Deep Motion Deblurring Network based on Per-Pixel Adaptive Kernels with Residual Down-Up and Up-Down Modules" by Hyeonjun Sim and Munchurl Kim. [pdf], [NTIRE2019]

Figure: Examples of deblurring results on the GOPRO dataset. (a) Input blurry image; (b) result of Tao et al. [2]; (c) result of our proposed network; (d) clean image.

Prerequisites

  • python 2.7
  • tensorflow (GPU version) >= 1.6 (the runtime reported in the paper was measured on TF 1.6, but the code in this repo also runs on TF 1.13)
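
Before running anything, a quick environment sanity check can save time. A minimal sketch using the TF 1.x API, checking only the version and GPU visibility (not part of this repo):

```python
# Minimal environment check for the TF 1.x setup this repo expects.
from __future__ import print_function  # Python 2.7 compatibility

import tensorflow as tf

print("TensorFlow version:", tf.__version__)      # expect >= 1.6, < 2.0
print("GPU available:", tf.test.is_gpu_available())  # TF 1.x API
```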

Testing with pretrained model

Update: we also provide our model's output images for the benchmark datasets REDS (NTIRE 2019) and GOPRO. The output images were generated by our models trained on the corresponding training sets (the REDS and GOPRO training sets, respectively). Download links: NTIRE_test_output and GOPRO_test_output

We provide two test models, depending on the training dataset: REDS (NTIRE 2019 Video Deblurring Challenge, Track 1 Clean dataset [pdf], [page]) and GOPRO ([pdf], [page]), with checkpoints in /checkpoints_NTIRE/ and /checkpoints_GOPRO/, respectively. Download links: /checkpoints_NTIRE/ and /checkpoints_GOPRO/

For the NTIRE REDS dataset, our model was trained on 'blur'/'sharp' pairs.
For the GOPRO dataset, our model was trained on linear-blur (not gamma-corrected) and sharp pairs, as other state-of-the-art methods did.
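
For context, a gamma-corrected frame relates to its linear counterpart through a power curve, so linearization amounts to applying the inverse gamma. A minimal sketch, assuming the common gamma of 2.2 and hypothetical file names; the GOPRO dataset already ships linear 'blur' frames, so this is illustration only:

```python
# Sketch: convert a gamma-corrected frame to (approximately) linear
# intensities, assuming display gamma 2.2. File names are hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("blur_gamma/0000.png"), dtype=np.float32) / 255.0
linear = np.power(img, 2.2)  # invert the gamma curve: display^2.2 -> linear
out = np.uint8(np.clip(linear, 0.0, 1.0) * 255.0)
Image.fromarray(out).save("blur_linear_0000.png")
```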
For example, to run the model pretrained on the GOPRO dataset:

python main.py --pretrained_dataset 'GOPRO' --test_dataset './Dataset/YOUR_TEST/' --working_directory './data/'

or, to run the model pretrained on the NTIRE dataset with additional geometric self-ensemble (which takes considerably more time):

python main.py --pretrained_dataset 'NTIRE' --test_dataset './Dataset/YOUR_TEST/' --working_directory './data/' --ensemble
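
For reference, geometric self-ensemble averages the network's outputs over the eight flip/rotation variants of each input, undoing each transform before averaging. A minimal NumPy sketch of the idea; `run_model` is a hypothetical stand-in for the network's forward pass, not a function from this repo:

```python
import numpy as np

def geometric_self_ensemble(img, run_model):
    """Average model outputs over the 8 flip/rotation variants of `img`.

    `run_model` is a hypothetical stand-in mapping an HxWxC array
    to an HxWxC array.
    """
    outputs = []
    for flip in (False, True):
        x = img[:, ::-1] if flip else img              # horizontal flip
        for k in range(4):                             # 0/90/180/270 degrees
            y = run_model(np.rot90(x, k))              # transform, then infer
            y = np.rot90(y, -k)                        # undo the rotation
            outputs.append(y[:, ::-1] if flip else y)  # undo the flip
    return np.mean(outputs, axis=0)
```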

test_dataset is the location of the test input blur frames, which should follow this format:

├──── Dataset/
   ├──── YOUR_TEST/
      ├──── blur/
        ├──── Video0/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── Video1/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── ...
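
As a sanity check, a small sketch (a hypothetical helper, not part of this repo) that verifies a test_dataset folder follows the layout above:

```python
import os

def check_test_layout(test_dataset):
    """List video folders and PNG frame counts under <test_dataset>/blur/."""
    blur_dir = os.path.join(test_dataset, "blur")
    assert os.path.isdir(blur_dir), "missing 'blur/' folder in " + test_dataset
    for video in sorted(os.listdir(blur_dir)):
        video_dir = os.path.join(blur_dir, video)
        frames = [f for f in os.listdir(video_dir) if f.endswith(".png")]
        print("%s: %d frames" % (video, len(frames)))

check_test_layout("./Dataset/YOUR_TEST/")
```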

The deblurred output frames will be generated in working_directory as follows:

├──── data/
   ├──── test/
     ├──── Video0/
        ├──── 0000.png
        ├──── 0001.png
        └──── ...
     ├──── Video1/
        ├──── 0000.png
        ├──── 0001.png
        └──── ...
     ├──── ...

Evaluation

To calculate the PSNR between the deblurred output and the corresponding sharp frames:

python main.py --phase 'psnr'

Before running this, the corresponding sharp frames should be placed in the following format:

├──── Dataset/
   ├──── YOUR_TEST/
      ├──── sharp/
        ├──── Video0/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── Video1/
           ├──── 0000.png
           ├──── 0001.png
           └──── ...
        ├──── ...
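
For reference, the PSNR reported here is the standard 10·log10(MAX²/MSE) per frame, averaged over frames. A minimal per-frame sketch with hypothetical file paths (not the repo's own evaluation code):

```python
import numpy as np
from PIL import Image

def psnr(output_path, sharp_path, max_val=255.0):
    """PSNR in dB between two same-sized 8-bit images: 10*log10(MAX^2/MSE)."""
    out = np.asarray(Image.open(output_path), dtype=np.float64)
    ref = np.asarray(Image.open(sharp_path), dtype=np.float64)
    mse = np.mean((out - ref) ** 2)   # note: mse == 0 for identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

print(psnr("./data/test/Video0/0000.png",
           "./Dataset/YOUR_TEST/sharp/Video0/0000.png"))
```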

Our NTIRE model yielded an average PSNR of 33.86 dB and 33.38 dB over the 300 validation frames, with and without self-ensemble, respectively.
On the GOPRO benchmark test dataset:

| Method | PSNR (dB) | SSIM |
| --- | --- | --- |
| Nah et al. [1] | 28.62 | 0.9094 |
| Tao et al. [2] | 30.26 | 0.9342 |
| Ours | 31.34 | 0.9474 |

Citation

@inproceedings{sim2019deep,
  title={A Deep Motion Deblurring Network Based on Per-Pixel Adaptive Kernels With Residual Down-Up and Up-Down Modules},
  author={Sim, Hyeonjun and Kim, Munchurl},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  year={2019}
}

Contact

Please send an email to flhy5836@kaist.ac.kr.

Reference

[1] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017.
[2] Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In CVPR, 2018.
