Implementation of the Adversarial Autoencoder paper (https://arxiv.org/abs/1511.05644) for my own research, written in Python 3 using the PyTorch library.
MNIST dataset (each image has dimensions 28 * 28 * 1).
An Adversarial Autoencoder consists of 3 types of network: A) Encoder B) Decoder C) Discriminator
The Encoder network compresses the image into a bottleneck layer (assuming the inputs to the network are correlated); it learns latent features.
The Decoder network takes the encoder's output and reconstructs the image from the bottleneck layer.
The Discriminator network distinguishes between fake data (latent codes produced by the encoder) and real data (samples drawn from the prior).
We use a bivariate normal distribution as our prior distribution.
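Sampling from the bivariate normal prior can be sketched as below; the standard mean and identity covariance, along with the batch size, are assumptions for illustration:

```python
import torch
from torch.distributions import MultivariateNormal

# Bivariate (2-D) normal prior over the latent space.
# Zero mean and identity covariance are assumed here for illustration.
prior = MultivariateNormal(loc=torch.zeros(2), covariance_matrix=torch.eye(2))

# Draw a batch of "real" latent samples for the discriminator.
z_real = prior.sample((16,))   # shape (16, 2)
```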
ENCODER 784 ==> 400 ==> 100 ==> 2
DECODER 2 ==> 100 ==> 400 ==> 784
DISCRIMINATOR 2 ==> 10 ==> 10 ==> 2
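The three networks above can be sketched in PyTorch as plain fully connected stacks. The layer sizes follow this README; the ReLU activations, the sigmoid on the decoder output, and the `mlp` helper are assumptions, so the actual repository code may differ:

```python
import torch
import torch.nn as nn

def mlp(sizes, final_activation=None):
    """Build a fully connected stack with ReLU between hidden layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    if final_activation is not None:
        layers.append(final_activation)
    return nn.Sequential(*layers)

encoder = mlp([784, 400, 100, 2])                 # flattened image -> 2-D latent code
decoder = mlp([2, 100, 400, 784], nn.Sigmoid())   # latent code -> reconstructed image
discriminator = mlp([2, 10, 10, 2])               # latent code -> real/fake logits

x = torch.rand(16, 784)       # dummy batch standing in for flattened MNIST images
z = encoder(x)                # shape (16, 2)
x_hat = decoder(z)            # shape (16, 784)
logits = discriminator(z)     # shape (16, 2)
```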
If you look at how the network trains, you will find that the encoder and discriminator compete with each other, which forces the output of the bottleneck layer to follow the prior distribution.
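This competition plays out over alternating training phases, which can be sketched as one reconstruction step followed by two adversarial steps. The optimizers, learning rates, loss choices, and label convention below are assumptions, not the repository's actual hyperparameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Layer sizes follow the README; activations are assumptions.
encoder = nn.Sequential(nn.Linear(784, 400), nn.ReLU(),
                        nn.Linear(400, 100), nn.ReLU(),
                        nn.Linear(100, 2))
decoder = nn.Sequential(nn.Linear(2, 100), nn.ReLU(),
                        nn.Linear(100, 400), nn.ReLU(),
                        nn.Linear(400, 784), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(2, 10), nn.ReLU(),
                              nn.Linear(10, 10), nn.ReLU(),
                              nn.Linear(10, 2))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.rand(16, 784)   # dummy batch standing in for flattened MNIST images

# 1) Reconstruction phase: train encoder + decoder to rebuild the input.
opt_ae.zero_grad()
recon = decoder(encoder(x))
recon_loss = F.binary_cross_entropy(recon, x)
recon_loss.backward()
opt_ae.step()

# 2) Regularization phase (a): train the discriminator to separate prior
#    samples (label 1, "real") from encoder codes (label 0, "fake").
z_fake = encoder(x).detach()            # detach: don't update the encoder here
z_real = torch.randn(16, 2)             # samples from the bivariate normal prior
opt_disc.zero_grad()
logits = discriminator(torch.cat([z_real, z_fake]))
labels = torch.cat([torch.ones(16, dtype=torch.long),
                    torch.zeros(16, dtype=torch.long)])
disc_loss = F.cross_entropy(logits, labels)
disc_loss.backward()
opt_disc.step()

# 3) Regularization phase (b): train the encoder (as generator) to fool the
#    discriminator into labelling its codes as "real".
opt_gen.zero_grad()
gen_loss = F.cross_entropy(discriminator(encoder(x)),
                           torch.ones(16, dtype=torch.long))
gen_loss.backward()
opt_gen.step()
```

It is this second and third step, repeated every batch, that pushes the bottleneck outputs toward the prior.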
The model was trained on Google Colaboratory.