You can finetune your own GFPGAN-1024 model with your own dataset! Inputs: 512 -> outputs: 1024.
You can start training from my pretrained model, which contains everything you need!
original | gfpgan | gfpgan-1024
pip install -r requirements.txt
- Prepare FFHQ-1024 data
- Collect your own pictures and align them
- Upscale the images through open super-resolution APIs like the one here
- Extract facial landmarks to enhance the eyes and mouth
git clone git@github.com:LeslieZhoa/LVT.git
- download the model
- change the LVT path
- change the landmark model path
- change the image root
- change the save root
- run
cd process; python get_roi.py
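The ROI extraction above can be sketched as follows. This is a hypothetical illustration, not the code in process/get_roi.py: the landmark index ranges assume the common 68-point convention, and the function name and margin are made up for the example.

```python
# Hypothetical sketch of eye/mouth ROI extraction from facial landmarks.
# Index ranges below follow the common 68-point convention (an assumption,
# not taken from this repo); the real logic lives in process/get_roi.py.

EYE_L = range(36, 42)   # left-eye landmark indices
EYE_R = range(42, 48)   # right-eye landmark indices
MOUTH = range(48, 68)   # mouth landmark indices

def roi_box(landmarks, indices, margin=8):
    """Return an (x0, y0, x1, y1) box around the given landmark subset."""
    xs = [landmarks[i][0] for i in indices]
    ys = [landmarks[i][1] for i in indices]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Cropping these boxes from the aligned face gives the eye and mouth patches used by the component discriminators.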
Refer to GFPGAN to download the following models:
- GFPGANv1.4.pth
- GFPGANv1_net_d_left_eye.pth
- GFPGANv1_net_d_mouth.pth
- GFPGANv1_net_d_right_eye.pth
- arcface_resnet18.pth
- get the VGG model here
- get the discriminator here, which is converted from the original StyleGAN2 weights

Put these models into pretrained_models.
Change the dataset paths in model/config.py:
self.img_root -> FFHQ data root
self.train_hq_root -> your own 1024 HQ data root
self.train_lq_root -> your own LQ data root
self.train_lmk_base -> train landmarks generated by get_roi.py
self.val_lmk_base -> val landmarks generated by get_roi.py
self.val_lq_root -> val LQ data root
self.val_hq_root -> val HQ data root
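As an illustration, the edited section of model/config.py might look like the sketch below. Every path is a placeholder, and the exact attribute layout of the real Config class may differ.

```python
# Hypothetical sketch of the dataset section of model/config.py.
# All paths are placeholders; substitute your own directories.

class Config:
    def __init__(self):
        self.img_root = '/data/ffhq-1024'          # FFHQ data root
        self.train_hq_root = '/data/my_hq_1024'    # your own 1024 HQ data
        self.train_lq_root = '/data/my_lq'         # your own LQ data
        self.train_lmk_base = '/data/lmk/train'    # landmarks from get_roi.py
        self.val_lmk_base = '/data/lmk/val'        # val landmarks from get_roi.py
        self.val_lq_root = '/data/val_lq'          # val LQ data
        self.val_hq_root = '/data/val_hq'          # val HQ data
        self.mode = 'decoder'                      # 'decoder' for stage 1
        self.pretrain_path = None                  # set for stages 2 and 3
```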
Stage 1: set self.mode = 'decoder' in model/config.py.
Train until you are satisfied with the results.
Stage 2: set self.mode = 'encoder' and point self.pretrain_path to the stage-1 checkpoint in model/config.py.
Train until you are satisfied with the results.
Stage 3: set self.mode = 'encoder' and point self.pretrain_path to the stage-2 checkpoint in model/config.py.
Train with early stopping.
stage 1 && stage 2 -> python train.py --batch_size 2 --scratch --dist
stage 3 -> python train.py --batch_size 2 --early_stop --dist
Multi-node, multi-GPU training is supported.
Larger batch sizes are also supported.
python utils/convert_pt.py
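utils/convert_pt.py presumably strips training-only state (optimizer, component discriminators) from a checkpoint and keeps the generator weights for inference. The sketch below shows that key-filtering idea on a plain dict; the key names are assumptions, and the real script would load and save with torch instead:

```python
# Sketch of slimming a training checkpoint down to inference weights.
# The key names ('g', 'd_*', 'optim*') are assumptions about the checkpoint
# layout; the real utils/convert_pt.py would use torch.load / torch.save.

def strip_training_state(checkpoint):
    """Drop optimizer and discriminator entries, keeping generator weights."""
    drop_prefixes = ('optim', 'd_')
    return {k: v for k, v in checkpoint.items()
            if not k.startswith(drop_prefixes)}
```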