My independent implementation of Deepfake. This is just a pet project.
- Put the source video, named 'data_src.mp4', into the 'data/src/src_video/' folder.
- Put the destination video, named 'data_dst.mp4', into the 'data/dst/dst_video/' folder.
- Use preprocess_data.py to extract faces, metadata, and frames, and to apply data augmentation. It reads "data/training_data/src/src_video/data_src.mp4" and writes to the "data/training_data/src/src_video_faces/faces/face_images/", "data/training_data/src/src_video_faces/faces/face_info/", and "data/training_data/src/src_video_faces/frames/" directories. The same operations are performed for the dst folder.
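The augmentation code itself is not shown in this README; as an illustration only, a minimal sketch of typical face-image augmentation (horizontal flip plus brightness jitter) on a NumPy image array might look like the following. The function name and parameters here are assumptions, not the actual API of preprocess_data.py:

```python
import numpy as np

def augment_face(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple augmentations to an HxWx3 uint8 face image.

    Hypothetical helper -- the real preprocess_data.py may differ.
    """
    out = image.copy()
    # Randomly mirror the face left-to-right.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    # Jitter brightness by up to +/-20%, clipping to the valid pixel range.
    factor = 1.0 + rng.uniform(-0.2, 0.2)
    out = np.clip(out.astype(np.float32) * factor, 0, 255).astype(np.uint8)
    return out

# Example: augment a dummy 64x64 RGB image.
rng = np.random.default_rng(0)
face = np.full((64, 64, 3), 128, dtype=np.uint8)
augmented = augment_face(face, rng)
print(augmented.shape)  # (64, 64, 3)
```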
- Train the GAN with the train_gan.py module. Epochs and batch_size can be changed inside the main function. All model files are stored in the 'data/models/' folder.
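The internals of train_gan.py are not shown here; purely as an illustration of how epochs and batch_size interact, a sketch of the batching loop might look like this (the function name and structure are assumptions):

```python
def iter_batches(n_samples: int, batch_size: int, epochs: int):
    """Yield (epoch, start, end) index ranges for mini-batch training.

    Hypothetical sketch of the loop structure inside a training main().
    """
    for epoch in range(epochs):
        for start in range(0, n_samples, batch_size):
            end = min(start + batch_size, n_samples)
            yield epoch, start, end

# With 100 face images and batch_size=32, each epoch has 4 batches.
batches = list(iter_batches(n_samples=100, batch_size=32, epochs=2))
print(len(batches))  # 8
```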
- Use predict_faces.py to generate faces from the trained model. Generated images can be found in the 'data/predictions/' directory.
- Use face_swap.py to take the original frames, the faces' metadata, and the predicted images, swap the faces, and write the results to the 'data/swapped_frames/' folder.
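The swap step pastes each predicted face back into its original frame at the position recorded in the metadata. A minimal sketch, assuming the metadata stores a bounding box as (x, y, w, h) -- that layout is an assumption, not necessarily what face_info/ actually contains:

```python
import numpy as np

def paste_face(frame: np.ndarray, face: np.ndarray,
               bbox: tuple) -> np.ndarray:
    """Overwrite the bbox region of a frame with the predicted face.

    bbox = (x, y, w, h) in pixel coordinates (assumed metadata layout).
    A real swap would also blend edges, e.g. with a feathered mask.
    """
    x, y, w, h = bbox
    out = frame.copy()
    # Naive hard paste; the face must already be resized to (h, w).
    out[y:y + h, x:x + w] = face
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)
face = np.full((40, 30, 3), 255, dtype=np.uint8)  # h=40, w=30
swapped = paste_face(frame, face, (10, 20, 30, 40))
print(swapped[20, 10])  # [255 255 255]
```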
- Use fake_video_maker.py to create the final fake video and save it in the 'data/deep_fake_video/' folder.
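When stitching swapped frames back into a video, they must be read in numeric order; a plain alphabetical sort would place 'frame_10.png' before 'frame_2.png'. A small sketch of a numeric sort key (the filename pattern is an assumption, not necessarily what fake_video_maker.py uses):

```python
import re

def frame_sort_key(filename: str) -> int:
    """Extract the frame index from names like 'frame_12.png' (assumed pattern)."""
    match = re.search(r"(\d+)", filename)
    return int(match.group(1)) if match else -1

names = ["frame_10.png", "frame_2.png", "frame_1.png"]
print(sorted(names, key=frame_sort_key))
# ['frame_1.png', 'frame_2.png', 'frame_10.png']
```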
- The Deepfake-faces pet project works well only with videos in which faces can be easily recognized.
- This code works only with a single face in the video. REMEMBER THIS.
- batch_size should be no less than 5 and no more than 32.
- The number of global epochs should be between 1000 and 2000.
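The two constraints above can be enforced before training starts. A minimal sketch of such a guard (the function name is an assumption; train_gan.py may not perform this check itself):

```python
def validate_hyperparams(batch_size: int, epochs: int) -> None:
    """Enforce the README's recommended ranges before training.

    Hypothetical guard -- not necessarily part of the project's code.
    """
    if not 5 <= batch_size <= 32:
        raise ValueError(f"batch_size must be in [5, 32], got {batch_size}")
    if not 1000 <= epochs <= 2000:
        raise ValueError(f"epochs should be in [1000, 2000], got {epochs}")

validate_hyperparams(batch_size=16, epochs=1500)  # passes silently
```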