A denoising baseline implementation of a residual dense network trained on CIFAR-100 with added Gaussian noise


Model (Paper | Demo)

Residual Dense Block (figure)

Residual Dense Network (figure)

Architecture overview

With greater depth, convolutional layers capture hierarchical features with different receptive fields. Directly extracting the output of each layer in the low-quality space is expensive and impractical in a very deep network. The authors remedy these limitations by introducing residual dense networks (RDNs) with a contiguous memory mechanism, which preserves global context across the layers, and local feature fusion (LFF), which exploits the local state of the layers and preserves accumulated shallow features. These deep and shallow features are then combined into global dense features via global feature fusion.

The RDN for denoising consists of three main parts: a shallow feature extraction network (SFENet), residual dense blocks (RDBs), and dense feature fusion (DFF). Two convolutional layers extract the shallow features. The features from the first convolutional layer are used not only for further shallow feature extraction but also for global residual learning. The shallow features then pass through a series of residual dense blocks that extract hierarchical features. The outputs of all RDBs are concatenated, and these global features are passed through two convolutional layers. Combined with the shallow features from the first convolutional layer, the result of the global feature fusion then goes through a final convolutional layer to yield the denoised output.
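As a schematic sketch of that data flow (names such as sfe1, gff and final_conv are illustrative assumptions, not the identifiers used in this repository; concrete module sketches follow below):

```python
import torch

def rdn_denoise_forward(x, sfe1, sfe2, rdbs, gff, final_conv):
    """Schematic RDN denoising forward pass; all submodules are passed in as callables."""
    f_minus1 = sfe1(x)                       # first shallow-feature conv, kept for global residual learning
    f = sfe2(f_minus1)                       # second shallow-feature conv feeds the RDBs
    rdb_outs = []
    for rdb in rdbs:                         # hierarchical features from each residual dense block
        f = rdb(f)
        rdb_outs.append(f)
    fused = gff(torch.cat(rdb_outs, dim=1))  # global feature fusion over the concatenated RDB outputs
    return final_conv(fused + f_minus1)      # global residual learning, then the final reconstruction conv
```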

The main difference from previous approaches to denoising is the contiguous memory (CM) mechanism, which is realized through densely connected layers, local feature fusion and local residual learning.

First, the state of the preceding RDB is passed directly to each layer of the current RDB, which allows local dense features to be captured.

Local feature fusion is achieved via a 1x1 convolutional layer that controls the output information; its output is then added to the state of the preceding RDB, which implements local residual learning.
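A minimal PyTorch sketch of one such block, assuming illustrative hyperparameters (channel width, growth rate and layer count are not necessarily those used in this repository):

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions, 1x1 local feature fusion, local residual learning."""

    def __init__(self, channels=64, growth=32, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # each layer receives the block input together with all preceding layer outputs
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # 1x1 convolution performs local feature fusion back down to `channels`
        self.lff = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # local residual learning: fused local features are added to the block input
        return x + self.lff(torch.cat(features, dim=1))
```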

To exploit hierarchical local features at the global level, the authors propose dense feature fusion, which consists of global feature fusion followed by global residual learning. Global feature fusion processes the concatenated states of all RDBs through composite 1x1 and 3x3 convolutional filters. The result is then added to the shallow features obtained after the first convolutional layer, which constitutes global residual learning.
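Putting the pieces together, a sketch of the full pipeline with global feature fusion and global residual learning could look as follows (reusing the ResidualDenseBlock sketched above; block count and channel sizes are again assumptions):

```python
import torch
import torch.nn as nn

class RDNDenoiser(nn.Module):
    """Sketch of SFENet, a stack of RDBs, dense feature fusion and a reconstruction conv."""

    def __init__(self, in_channels=3, channels=64, num_blocks=8):
        super().__init__()
        self.sfe1 = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.sfe2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.rdbs = nn.ModuleList([ResidualDenseBlock(channels) for _ in range(num_blocks)])
        # global feature fusion: 1x1 conv over the concatenated RDB outputs, then a 3x3 conv
        self.gff = nn.Sequential(
            nn.Conv2d(num_blocks * channels, channels, kernel_size=1),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.final_conv = nn.Conv2d(channels, in_channels, kernel_size=3, padding=1)

    def forward(self, x):
        f_minus1 = self.sfe1(x)               # shallow features kept for global residual learning
        f = self.sfe2(f_minus1)
        rdb_outs = []
        for rdb in self.rdbs:
            f = rdb(f)
            rdb_outs.append(f)
        fused = self.gff(torch.cat(rdb_outs, dim=1))
        return self.final_conv(fused + f_minus1)  # global residual learning + reconstruction
```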

Demo results

Note that the graph below reports the PSNR of the predicted noise rather than the PSNR of the denoised image.
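For reference, PSNR is 10·log10(MAX² / MSE) in dB; a minimal sketch, assuming tensors scaled to [0, 1] (the metric can be applied either to the predicted noise map or to the denoised image):

```python
import torch

def psnr(prediction, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for tensors scaled to [0, max_val]."""
    mse = torch.mean((prediction - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```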

Results

| noise_mean | noise_std | Link         |
|------------|-----------|--------------|
| 0.010      | 0.010     | Training log |
| 0.010      | 0.255     | Training log |
| 0.010      | 0.500     | Training log |
| 0.255      | 0.010     | Training log |
| 0.255      | 0.255     | Training log |
| 0.255      | 0.500     | Training log |
| 0.500      | 0.010     | Training log |
| 0.500      | 0.255     | Training log |
| 0.500      | 0.500     | Training log |
