
Controllable Face Generation VAE


This project provides a controllable generative model based on a Convolutional Variational Autoencoder for generating faces with desirable features. It also supports reconstructing input images, performing latent space arithmetic to enhance or remove attributes in generated images, and morphing pairs of human faces.

The model is trained on the CelebA dataset [1, 2]. TensorFlow Keras is used to build and train the generative neural network, and Streamlit provides the GUI for straightforward interactive face generation.
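At its core, the model follows the standard VAE formulation [3]: a convolutional encoder predicts a mean and log-variance for each latent dimension, a latent vector is sampled with the reparameterization trick, and a convolutional decoder maps it back to an image. The snippet below is a minimal sketch of such a sampling layer in TensorFlow Keras; the class and variable names are illustrative and may not match the code in this repository.

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Reparameterization trick: sample z ~ N(mu, sigma^2) as mu + sigma * eps."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        # eps is standard normal noise with the same shape as the latent batch.
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps
```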

How to use

  • Download the CelebA dataset and place it in the data folder; see data/data_link.txt. If you want to use another directory, change config/config.json accordingly.
  • If desired, you can change parameters such as resolution, embedding size, and the weight of the reconstruction loss relative to the KL divergence loss in config/config.json (this weighting is sketched after this list).
  • Train the model with python train_VAE.py
  • Obtain face attribute vectors for latent space arithmetic with extract_vector_from_label.py
  • Open the interactive GUI for controllable face generation with streamlit run gui.py
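The reconstruction-versus-KL weight mentioned above plays the same role as the beta factor in a Beta-VAE [4]: a larger reconstruction weight gives sharper outputs, while a relatively stronger KL term pushes the latent space toward the standard normal prior. Below is a minimal, hypothetical sketch of such a weighted loss; the function name, the recon_weight value, and the tensor shapes are assumptions and do not necessarily mirror config/config.json or the training code.

```python
import tensorflow as tf

def vae_loss(x, x_recon, z_mean, z_log_var, recon_weight=500.0):
    """Weighted sum of pixel reconstruction error and KL divergence.

    recon_weight corresponds conceptually to the reconstruction-vs-KL
    weight in config/config.json; the default value here is illustrative.
    """
    # Per-image sum of squared pixel errors, averaged over the batch.
    recon = tf.reduce_mean(
        tf.reduce_sum(tf.square(x - x_recon), axis=[1, 2, 3]))
    # KL divergence between N(z_mean, exp(z_log_var)) and the unit Gaussian prior.
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                      axis=1))
    return recon_weight * recon + kl
```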

Demo

  1. Controllable face generation with selected attributes and adjustable feature intensity.


  2. Reconstruct faces by feeding them to the VAE.


  3. Perform latent space arithmetic to observe the effect of weakening or strengthening a selected attribute in generated faces (the underlying vector operation is sketched after this list).


  4. Morph pairs of faces gradually over multiple steps (also sketched after this list).

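Demos 3 and 4 reduce to simple vector operations in the latent space. The sketch below illustrates the idea; decoder, attribute_vector, z, z1, and z2 are hypothetical stand-ins for the objects that train_VAE.py and extract_vector_from_label.py would actually produce.

```python
import numpy as np

# Hypothetical inputs (names are illustrative):
#   decoder          - Keras model mapping latent vectors to images
#   attribute_vector - latent direction for one CelebA attribute, shape (latent_dim,)
#   z, z1, z2        - encoded latent vectors of input faces, shape (latent_dim,)

def shift_attribute(decoder, z, attribute_vector, intensity=1.5):
    """Strengthen (positive intensity) or weaken (negative) an attribute."""
    z_shifted = z + intensity * attribute_vector
    return decoder.predict(z_shifted[None, :])  # add a batch dimension

def morph(decoder, z1, z2, steps=8):
    """Decode evenly spaced linear interpolations between two latent codes."""
    alphas = np.linspace(0.0, 1.0, steps)
    zs = np.stack([(1.0 - a) * z1 + a * z2 for a in alphas])  # (steps, latent_dim)
    return decoder.predict(zs)
```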

To Do

  • Replace the model with a more sophisticated architecture to obtain higher-quality results.
  • Increase the resolution of generated images after replacing the current neural network.
  • Investigate the MS-Celeb-1M (MS1M) dataset after upgrading the model architecture.

Notice

The project is solely for educational or non-commercial research purposes. For more information, please refer to the original CelebA license [1].

References

1 - CelebFaces Attributes (CelebA) Dataset: Link 1, Link 2

2 - S. Yang, P. Luo, C. C. Loy, and X. Tang, "From Facial Parts Responses to Face Detection: A Deep Learning Approach", IEEE International Conference on Computer Vision (ICCV), 2015

3 - D. P. Kingma and M. Welling, "Auto-Encoding Variational Bayes", Link

4 - I. Higgins et al., "Beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework", Link

5 - Variational AutoEncoder, Link

6 - D. Foster, Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play, 2nd ed., O'Reilly Media, Link
