Feature translation between images using Generative Adversarial Networks (GANs). It allows modifying a physical characteristic such as hair color.
This repository is being developed as part of the course Scalable Machine Learning and Deep Learning (ID2223) at KTH Royal Institute of Technology, in the Fall 17 P2 round.
Author | GitHub |
---|---|
Héctor Anadón | HectorAnadon |
Sergio López | Serlopal |
Mónica Villanueva | MonicaVillanueva |
TODO: folder structure
- name: Description
The idea of the project is to learn different features and translate them onto images that originally do not have them, generating modified images using only one Generative Adversarial Network. We will try to reproduce the original concept, developed in the paper “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation”, understand the underlying theory, and compare the output against the images provided in the paper.
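StarGAN trains a single generator by combining three loss terms: an adversarial term, a domain-classification term, and an L1 cycle-consistency (reconstruction) term. The toy NumPy sketch below shows how those terms combine for the generator; the function and its arguments are our own illustration, not code from the paper's implementation, and the default weights follow the values reported in the paper (λ_cls = 1, λ_rec = 10).

```python
import numpy as np

def generator_loss(adv, cls, x, x_rec, lambda_cls=1.0, lambda_rec=10.0):
    """Toy version of StarGAN's generator objective:
    adversarial term + weighted domain-classification term
    + weighted L1 cycle-consistency (reconstruction) term.

    adv, cls: scalar adversarial and classification losses (placeholders).
    x, x_rec: original image and its round-trip reconstruction
    (translated to the target domain and back to the source domain).
    """
    l_rec = np.mean(np.abs(x - x_rec))  # L1 reconstruction loss
    return adv + lambda_cls * cls + lambda_rec * l_rec

# A perfect round trip leaves only the adversarial and classification terms
x = np.zeros((2, 2))
print(generator_loss(0.5, 0.2, x, x))  # -> 0.7
```

In the real training loop these terms would of course be TensorFlow ops differentiated end to end; the sketch only captures how the three losses are weighted and summed.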
Instead of using two datasets, we simplify the problem by using only one: CelebA. We will train on only a handful of the features this dataset provides, namely hair color (black/blond/brown), gender (male/female), and age (young/old).
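In StarGAN, the target domain is fed to the generator as a label vector concatenated with the input image. A minimal sketch of how such a vector could be built for the attributes listed above (the function and attribute encoding are our own illustration, not taken from the paper's code):

```python
def make_domain_label(hair, male, young):
    """Build a 5-dim target-domain vector: one-hot hair color
    (black/blond/brown) plus binary gender and age flags."""
    hair_colors = ["black", "blond", "brown"]
    label = [1.0 if hair == c else 0.0 for c in hair_colors]
    label.append(1.0 if male else 0.0)
    label.append(1.0 if young else 0.0)
    return label

# Example: ask the generator for a blond, male, young version of a face
print(make_domain_label("blond", male=True, young=True))
# -> [0.0, 1.0, 0.0, 1.0, 1.0]
```

During training this vector would be tiled spatially and concatenated to the image channels before being passed to the generator.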
The code will be developed using TensorFlow, an open-source machine learning library originally created by Google.
- Replicate the results of the paper using the same pictures and the same feature changes used by the authors.
- Test different feature changes and assess how realistic the output is.
There will not be an exhaustive quantitative evaluation. Instead, a qualitative approach will be taken, comparing the obtained results with their counterparts in the original paper in order to evaluate the success of the replication.
Note:
- The final version of this repository will be available in January 2018.