SAGAN: Adversarial Spatial-asymmetric Attention


This is the official implementation of the paper "SAGAN: Adversarial Spatial-asymmetric Attention for Noisy Nona-Bayer Reconstruction". The paper has been accepted for publication in the proceedings of BMVC 2021.

Download links: [Paper] | [arXiv] | [Supplemental] | [Presentation]

Please consider citing this paper as follows:

@inproceedings{a2021beyond,
  title={SAGAN: Adversarial Spatial-asymmetric Attention for Noisy Nona-Bayer Reconstruction},
  author={Sharif, SMA and Naqvi, Rizwan Ali and Biswas, Mithun},
  booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
  pages={},
  year={2021}
}

Overview

Despite their substantial advantages, non-Bayer CFA patterns such as Nona-Bayer are susceptible to producing visual artefacts when reconstructing RGB images from noisy sensor data. SAGAN comprehensively addresses the challenge of learning RGB image reconstruction from a noisy Nona-Bayer CFA.
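For intuition, here is a minimal, illustrative sketch (not code from this repository) of how a Bayer-like CFA with a binning factor works: each colour filter covers a g×g block of pixels, so g=1 gives standard Bayer, g=2 Quad-Bayer, and g=3 Nona-Bayer. The function names and the base RGGB layout are illustrative assumptions.

```python
import numpy as np

def cfa_mask(height, width, binning=3):
    """Build a per-pixel colour-index map (0=R, 1=G, 2=B) for a
    Bayer-like CFA: binning=1 -> Bayer, 2 -> Quad-Bayer, 3 -> Nona-Bayer."""
    # Base 2x2 Bayer unit: R G / G B.
    base = np.array([[0, 1],
                     [1, 2]])
    # Expand each filter to a (binning x binning) block of identical pixels.
    unit = np.kron(base, np.ones((binning, binning), dtype=int))
    reps = (height // unit.shape[0] + 1, width // unit.shape[1] + 1)
    return np.tile(unit, reps)[:height, :width]

def mosaic(rgb, binning=3):
    """Keep one colour sample per pixel from an HxWx3 image, per the CFA."""
    h, w, _ = rgb.shape
    mask = cfa_mask(h, w, binning)
    return np.take_along_axis(rgb, mask[..., None], axis=2)[..., 0]
```

A Nona-Bayer mosaic therefore contains 3×3 same-colour blocks, which is what makes joint demosaicing and denoising harder than for a plain Bayer pattern.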


Nona-Bayer Reconstruction with Real-world Denoising


Comparison with state-of-the-art deep JDD methods


Prerequisites

Python 3.8
CUDA 10.1 + CuDNN
pip
Virtual environment (optional)

Installation

Please consider using a virtual environment for the installation process.

git clone https://github.com/sharif-apu/SAGAN_BMVC21.git
cd SAGAN_BMVC21
pip install -r requirement.txt

Testing with Synthesised Images

To run inference with custom settings, execute the following command:
python main.py -i -s path/to/inputImages -d path/to/outputImages -ns=sigma(s)
Here, -ns specifies the standard deviation of the Gaussian noise (e.g., -ns=10, 20, 30), -s specifies the root directory of the source images (e.g., testingImages/), and -d specifies the destination root (e.g., modelOutput/).
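Since testing uses synthesized noise, the noisy inputs can be emulated by adding zero-mean Gaussian noise with standard deviation sigma (the -ns value) to clean images. Below is a minimal sketch of that idea, not the repository's implementation; the function name and the 8-bit clipping convention are assumptions:

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=None):
    """Add zero-mean Gaussian noise with standard deviation `sigma`
    (in 8-bit intensity units) to a uint8 image, then clip to [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

With -ns=0 (as used for real-world inputs below), no synthetic noise is added and the image passes through unchanged.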

Training

To start training, the images first have to be sampled according to the CFA pattern and paired with the corresponding ground-truth images. To sample images for paired training, please execute the following command:

python main.py -ds -s /path/to/GTimages/ -d /path/to/saveSamples/ -g 3 -n 10000
Here, the -s flag defines the root directory of the ground-truth images, the -d flag defines the directory where the sampled images should be saved, the -g flag defines the binning factor (i.e., 1 for Bayer CFA, 2 for Quad-Bayer, 3 for Nona-Bayer), and the optional -n flag defines the number of images to sample.


After extracting samples, please execute the following commands to start training:

python main.py -ts -e X -b Y
To specify your training image paths, go to mainModule/config.json and update the "gtPath" and "targetPath" entries.
You can specify the number of epochs with the -e flag (e.g., -e 5) and the number of images per batch with the -b flag (e.g., -b 16).
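A hypothetical sketch of the relevant mainModule/config.json entries; only the "gtPath" and "targetPath" key names come from this README, and the example values are placeholders:

```json
{
  "gtPath": "/path/to/saveSamples/gt/",
  "targetPath": "/path/to/saveSamples/input/"
}
```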

For transfer learning, execute:
python main.py -tr -e -b

Training with Real-world Noisy Images

To train our model with real-world noisy images, please download the "Smartphone Image Denoising Dataset" and comment out line 29 of dataTools/customDataloader.py. The rest of the training and data-extraction procedure is the same as for synthesized images.

To run inference with real-world noisy images, execute the following command:
python main.py -i -s path/to/inputImages -d path/to/outputImages -ns=0
Here, -s specifies the root directory of the source images (e.g., testingImages/), and -d specifies the destination root (e.g., modelOutput/).

A few real-world noisy images can be downloaded from the following link: [Click Here].

Others

Check model configuration:
python main.py -ms
Create new configuration file:
python main.py -c
Update configuration file:
python main.py -u
Run an overfitting test:
python main.py -to

Contact

For any further queries, feel free to contact us through the following emails: apuism@gmail.com, rizwanali@sejong.ac.kr, or mithun.bishwash.cse@ulab.edu.bd.