Desnapify is a deep convolutional generative adversarial network (DCGAN) trained to remove Snapchat filters from selfie images. It is based on the excellent pix2pix project by Isola et al., and specifically the Keras implementation by Thibault de Boissiere.
The following figure shows a few examples of Desnapify's output on selfies with the doggy filter. The top row is the input, the middle row is the Desnapified output, and the bottom row is the target.
Desnapify currently includes a model trained to remove the doggy filter from images of size 512x512. It can be easily trained to remove other filters as well. This is left for future work.
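For readers new to pix2pix, the sketch below shows the general shape of such a generator in Keras: a convolutional encoder-decoder with U-Net skip connections and a tanh output. It is a minimal illustration only; the layer counts, filter sizes and function names are assumptions and do not reproduce Desnapify's actual architecture.

```python
# Minimal sketch of a pix2pix-style U-Net generator, for illustration only.
# Not Desnapify's actual model: sizes and layer counts here are assumptions.
from keras.layers import (Input, Conv2D, Conv2DTranspose, LeakyReLU,
                          Activation, BatchNormalization, Concatenate)
from keras.models import Model


def build_toy_generator(img_size=(512, 512, 3), base_filters=64):
    inp = Input(shape=img_size)

    # Encoder: strided convolutions halve the resolution at each step
    skips = []
    x = inp
    for mult in (1, 2, 4, 8):
        x = Conv2D(base_filters * mult, 4, strides=2, padding="same")(x)
        x = BatchNormalization()(x)
        x = LeakyReLU(0.2)(x)
        skips.append(x)

    # Decoder: transposed convolutions upsample, with U-Net skip connections
    for mult, skip in zip((4, 2, 1), reversed(skips[:-1])):
        x = Conv2DTranspose(base_filters * mult, 4, strides=2, padding="same")(x)
        x = BatchNormalization()(x)
        x = Activation("relu")(x)
        x = Concatenate()([x, skip])

    # Final upsample back to the input resolution, tanh output in [-1, 1]
    x = Conv2DTranspose(3, 4, strides=2, padding="same")(x)
    out = Activation("tanh")(x)
    return Model(inp, out)
```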
- Python 3.5
- TensorFlow (tensorflow-gpu with cuDNN is recommended)
- Keras (Note: as of this writing the latest Keras release, 2.2.4, has a bug that prevents the model from being loaded. You must install the latest Keras from source as described here)
- Python modules listed in requirements.txt
- Set up a virtualenv:

  ```
  mkvirtualenv -p python3.5 desnapify
  ```

- Clone this repo:

  ```
  git clone https://github.com/ipsingh06/ml-desnapify.git
  ```

- Install the Python modules:

  ```
  cd ml-desnapify
  pip install -r requirements.txt
  ```

- Install Keras from source as described here (a quick version check is sketched after this list).
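If you are unsure which builds your virtualenv actually picks up, a quick throwaway check (not part of the project):

```python
# Print the installed Keras and TensorFlow versions; useful when juggling a
# source install of Keras alongside the 2.2.4 release mentioned above.
import keras
import tensorflow as tf

print("Keras:", keras.__version__)
print("TensorFlow:", tf.__version__)
```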
This section describes how to use the provided trained model to remove the doggy filter from your images. The provided script `src/models/predict_model.py` takes care of scaling the images. We will be using the samples provided in `data/raw/samples/doggy`.
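The scaling amounts to resizing each image to the model's input resolution and normalizing pixel values. Below is a rough sketch of that kind of preprocessing; the [-1, 1] range is an assumption based on the tanh output typical of pix2pix generators, not a guarantee about the script's exact code.

```python
# Rough sketch of the kind of scaling predict_model.py performs, for illustration only.
import cv2
import numpy as np


def load_and_scale(path, size=(512, 512)):
    img = cv2.imread(path)                      # BGR image as uint8
    img = cv2.resize(img, size)                 # scale to the model's input resolution
    img = img.astype(np.float32) / 127.5 - 1.0  # map [0, 255] -> [-1, 1]
    return img[np.newaxis, ...]                 # add a batch dimension
```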
- Download the pre-trained model weights:

  ```
  python src/models/download_weights.py
  ```

- Run the prediction script. This will cycle through and display the resulting output images alongside their inputs:

  ```
  python src/models/predict_model.py \
      models/doggy-512x512-v1/gen_weights_epoch030.h5 \
      data/raw/samples/doggy \
      --image_size 512 512
  ```

  To store the results as image files, specify the `--output <path to directory>` option. You can also specify `--no_concat` to store the output image only (a rough sketch of the corresponding post-processing is shown after this list):

  ```
  python src/models/predict_model.py \
      models/doggy-512x512-v1/gen_weights_epoch030.h5 \
      data/raw/samples/doggy \
      --image_size 512 512 \
      --output out \
      --no_concat
  ```

  Check the `out` directory for the output images.
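For reference, saving an output image essentially means mapping the generator output back to 8-bit pixels. The helper below is a hypothetical sketch of that step, not the script's actual code.

```python
# Hypothetical helper mirroring what --output does: map a generator output back from
# the assumed [-1, 1] range to an 8-bit image and write it to disk.
import cv2
import numpy as np


def save_prediction(pred, path):
    img = ((pred[0] + 1.0) * 127.5).clip(0, 255).astype(np.uint8)  # drop batch dim, rescale
    cv2.imwrite(path, img)
```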
This section describes how to train the model yourself. This requires generating pairs of images with and without the filter applied. The provided data generation script includes functionality to apply the doggy filter to images. It can be extended to apply other Snapchat filters as well.
The following figure shows the pipeline for generating training data from selfie images.
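As an illustration of what "applying a filter" involves, the sketch below composites a transparent sprite onto detected faces with OpenCV. It is not the project's make_dataset.py code: the Haar-cascade detector, the dog_ears.png asset and the placement heuristics are assumptions chosen to keep the example self-contained (the real script follows snapchat-filters-opencv).

```python
# Illustrative sketch of overlaying a doggy-style sprite on detected faces.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def overlay_rgba(dst, rgba, x, y):
    """Alpha-blend an RGBA sprite onto dst (BGR) with its top-left corner at (x, y)."""
    h, w = rgba.shape[:2]
    roi = dst[y:y + h, x:x + w]
    alpha = rgba[:, :, 3:] / 255.0
    roi[:] = alpha * rgba[:, :, :3] + (1 - alpha) * roi


def apply_doggy_filter(img, ears_rgba):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        ears = cv2.resize(ears_rgba, (w, h // 2))       # scale sprite to the face width
        overlay_rgba(img, ears, x, max(y - h // 2, 0))  # sit the ears above the face box
    return img


img = cv2.imread("selfie.jpg")                           # hypothetical input image
ears = cv2.imread("dog_ears.png", cv2.IMREAD_UNCHANGED)  # hypothetical RGBA asset
cv2.imwrite("selfie_doggy.jpg", apply_doggy_filter(img, ears))
```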
The provided model was trained with the People's Republic of Tinder dataset available on Kaggle. You should be able to use any dataset of selfie images for training.
Note: the training script uses multiprocess queues for batch loading, so you don't have to worry about fitting your entire dataset into memory. Use as large a dataset as you like.
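The idea behind the queue-based loading is sketched below: a worker process streams batches out of the HDF5 file into a bounded queue while the main process consumes them, so only a few batches are ever held in memory. The dataset names used here are assumptions, not the file's actual layout.

```python
# Sketch of streaming batches from an HDF5 file through a bounded multiprocessing queue.
import multiprocessing as mp

import h5py
import numpy as np


def producer(queue, h5_path, batch_size):
    with h5py.File(h5_path, "r") as f:
        x, y = f["train_orig"], f["train_transformed"]  # assumed dataset names
        while True:
            idx = np.sort(np.random.choice(len(x), batch_size, replace=False))
            queue.put((x[idx], y[idx]))   # blocks when the queue is full


if __name__ == "__main__":
    queue = mp.Queue(maxsize=8)           # bound memory to a few batches
    worker = mp.Process(target=producer,
                        args=(queue, "data/processed/dataset.h5", 4), daemon=True)
    worker.start()
    for step in range(100):
        batch_x, batch_y = queue.get()    # consume batches as the model trains
        # model.train_on_batch(batch_x, batch_y)  # training call would go here
```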
- Place the set of selfie images in the `data/raw/dataset` directory.

- Run the data generation script to generate image pairs with and without the doggy filter:

  ```
  python src/data/make_dataset.py apply-filter \
      data/raw/dataset \
      data/interim/dataset \
      --output_size 512 512 \
      --no_preserve_dir
  ```

  This script will create two directories, `data/interim/dataset/orig` and `data/interim/dataset/transformed`. The first contains images to be used as input to the model, and the second contains images to be used as the target.

- Run the data generation script again to split the dataset into training, validation and testing sets. This also packages all three datasets into a single HDF5 file:

  ```
  python src/data/make_dataset.py create-hdf5 \
      data/interim/dataset \
      data/processed/dataset.h5
  ```

  This script will create the file `data/processed/dataset.h5`.

- We should verify that the dataset was created properly (a manual inspection sketch using h5py is shown after this list):

  ```
  python src/data/make_dataset.py check-hdf5 data/processed/dataset.h5
  ```

- Finally, we can run the training script:

  ```
  python src/models/train_model.py \
      data/processed/dataset.h5 \
      --batch_size 4 \
      --patch_size 128 128 \
      --epochs 30
  ```

  We can see how well the model is learning and performing by looking at the `current_batch_training.png` and `current_batch_validation.png` images in the `reports/logs` directory. We can also visualize the performance metrics using TensorBoard:

  ```
  tensorboard --logdir=reports/logs
  ```

- After training completes, the model weights are saved in `models/`. To test the model, see the "Using the provided model" section.
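If you want to inspect the generated `data/processed/dataset.h5` by hand (as mentioned in the verification step above), a minimal h5py sketch:

```python
# List every dataset stored in the generated HDF5 file along with its shape and dtype.
# This makes no assumption about the group or dataset names the file actually uses.
import h5py

with h5py.File("data/processed/dataset.h5", "r") as f:
    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(describe)
```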
- pix2pix project by Isola et al.
- Keras implementation of pix2pix by Thibault de Boissiere
- The apply-filter script is based on snapchat-filters-opencv