This is a PyTorch image style transfer library. It provides implementations of current state-of-the-art (SOTA) algorithms, including:
- AdaIN (Artistic): Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization (ICCV 2017)
- WCT (Artistic): Universal Style Transfer via Feature Transforms (NIPS 2017)
- LinearStyleTransfer (LST) (Artistic, Photo-Realistic): Learning Linear Transformations for Fast Image and Video Style Transfer (CVPR 2019)
- FastPhotoStyle (FPS, NVIDIA) (Photo-Realistic): A Closed-form Solution to Photorealistic Image Stylization (ECCV 2018)
The original implementations can be found at AdaIN, WCT, LST and FPS.
With this library, as long as you can find your desired style images on the web, you can edit your content image with different transfer effects.
- Linux
- PyTorch 1.4.0/0.4.1
- NVIDIA GPU and CUDA (for training only)
- Pretrained models. Please download them, unzip the file to your preferred model directory, and modify the model directory path accordingly in the configuration file.
To run LST, PyTorch 0.4.1 is required. We recommend installing it in an Anaconda virtual environment, since many functions in PyTorch 0.4.1 are deprecated. Details on setting up and activating the virtual environment are as follows:
- First create your own anaconda virtual environment
conda create -n your_env_name python=3.7
- Then install necessary packages
conda install pytorch=0.4.1 cuda92 -c pytorch
pip install opencv-python
conda install pillow=6.2.1
conda install scipy
pip install opencv-contrib-python
In addition, we need to compile the pytorch_spn module.
cd lib/SPN/pytorch_spn/
sh make.sh
cd ../../../
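Before running make.sh, it can help to confirm that the active environment really has PyTorch 0.4.1, since compiling pytorch_spn against a newer version is likely to fail. The helper below is a hypothetical sketch, not part of the library:

```python
def matches_required(version, required="0.4.1"):
    """Check that a PyTorch version string matches the release LST needs."""
    # Compare only the leading release segment so local builds such as
    # "0.4.1.post2" still pass the check.
    return version.startswith(required)

# Typical use inside the activated conda env:
#   import torch
#   matches_required(torch.__version__)
```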
Modify the model settings in the corresponding yaml file (configs/xxx_test.yaml or configs/xxx_train.yaml). Note that lst_spn_train.yaml, lst_spn_test.yaml and fps_photo_test.yaml are for photo-realistic style transfer only.
The --resize flag below is optional; it can accelerate computation and save memory.
- For a single pair test
python StyleTransfer/tools/test.py --config-file StyleTransfer/configs/xxx_test.yaml --content path/to/content_image --style path/to/style_image [--resize]
- For large number of pair tests
python StyleTransfer/tools/test.py --config-file StyleTransfer/configs/xxx_test.yaml --contentDir path/to/content --styleDir path/to/style --mode 1 [--resize]
In the second case, we assume each paired content and style image share the same filename.
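Since batch mode pairs images purely by filename, a quick pre-flight check on the two directories can catch mismatches before a long run. The helper below is a hypothetical sketch, not part of the library:

```python
import os

def unpaired_images(content_dir, style_dir):
    """Return filenames present in one directory but not the other.

    Batch mode (--mode 1) matches content and style images by identical
    filenames, so anything returned here would be silently skipped or fail.
    """
    content = set(os.listdir(content_dir))
    style = set(os.listdir(style_dir))
    return sorted(content ^ style)  # symmetric difference
```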
Some examples are given below:
The StyleTransfer library also supports transferring or synthesizing multiple styles through interpolation.
styleInterpWeights is the flag specifying the interpolation weights, i.e., the weight of each style image.
Note that currently only AdaIN supports style interpolation.
python StyleTransfer/tools/test.py --config-file StyleTransfer/configs/xxx_test.yaml --content /path/to/content_image --style /path/to/style1_image,/path/to/style2_image,... --styleInterpWeights 10,10,... [--resize]
Below is an example of handling four styles.
python StyleTransfer/tools/test.py --config-file StyleTransfer/configs/adain_test.yaml --content demo/content/1.jpg --style demo/style/11.jpg,demo/style/12.jpg,demo/style/1.jpg,demo/style/in3.jpg --styleInterpWeights 0,0,0,100
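Conceptually, the weights are relative: the blend is a normalized weighted combination of the per-style outputs, so 10,10 and 50,50 produce the same 50/50 mix, and 0,0,0,100 above puts all weight on the fourth style. A simplified sketch of this blending (illustrative only, not the library's implementation):

```python
import numpy as np

def interpolate_styles(stylized_feats, weights):
    """Blend per-style feature maps (or images) by normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # weights are relative: "10,10" == "50,50"
    # Weighted sum over the stylized results, one per style image.
    return sum(wi * f for wi, f in zip(w, stylized_feats))
```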
The one-click global transfer still does not meet the requirements of professional users (e.g., artists) in many cases. Users prefer to transfer different styles to different regions of the content image, i.e., spatial control. The StyleTransfer library supports this operation.
Note that currently only AdaIN and WCT support spatial control.
python StyleTransfer/tools/test.py --config-file configs/xxx_test.yaml --content /path/to/content_image --style /path/to/style1_image,/path/to/style2_image --mask /path/to/mask_image [--resize]
Here, we provide an example of transferring two styles to the foreground and background respectively, given a binary mask: Style I for the foreground (mask=1) and Style II for the background (mask=0).
python tools/test.py --config-file configs/adain_test.yaml --content demo/mask/spatial_content.jpg --style demo/mask/mask_1.jpg,demo/mask/mask_2.jpg --mask demo/mask/mask.png
python tools/test.py --config-file configs/wct_test.yaml --content demo/mask/spatial_content.jpg --style demo/mask/mask_1.jpg,demo/mask/mask_2.jpg --mask demo/mask/mask.png
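The mask is expected to be strictly binary. If your mask image contains intermediate gray values (e.g., from anti-aliased edges after resizing), it can be thresholded first. A small sketch operating on NumPy arrays; the binarize_mask helper is hypothetical, not part of the library:

```python
import numpy as np

def binarize_mask(gray, threshold=128):
    """Threshold a grayscale array into a strict 0/1 binary mask.

    Pixels at or above `threshold` become foreground (1, Style I);
    the rest become background (0, Style II).
    """
    return (np.asarray(gray) >= threshold).astype(np.uint8)
```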
FPS provides two inference modes: FPS-Fast and FPS-Slow. FPS-Slow uses the propagator (photo smoothing) described in the paper, which is computationally expensive and slow. FPS-Fast replaces that propagator with the guided filter proposed by Kaiming He.
We found that FPS-Fast achieves performance similar to FPS-Slow but is much faster.
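For intuition, the guided filter fits a local linear model between a guide image and the input, which smooths flat regions while preserving the guide's edges. Below is a compact sketch of the classic grayscale guided filter (He et al.), assuming float images in [0, 1]; it illustrates the idea only and is not the library's FPS-Fast code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing of `src`, guided by `guide`."""
    size = 2 * radius + 1
    # Local means and (co)variances inside each window.
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    # Per-pixel linear model q = a * guide + b; eps controls smoothing.
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    # Average the coefficients over each window before applying them.
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```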
- For a single pair test
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/lst_spn_test.yaml --content path/to/content_image --style path/to/style_image [--resize]
or
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/fps_photo_test.yaml --content path/to/content_image --style path/to/style_image [--resize]
- For large number of pair tests
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/lst_spn_test.yaml --contentDir path/to/content --styleDir path/to/style --mode 1 [--resize]
or
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/fps_photo_test.yaml --contentDir path/to/content --styleDir path/to/style --mode 1 [--resize]
Some examples are given below:
Our library also supports spatial control for photo-realistic style transfer. Basically, information from a semantic region in the style image is transferred to the corresponding semantic region in the content image.
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/lst_spn_test.yaml --content path/to/content_img --style path/to/style_img --content-seg /path/to/content_seg_img --style-seg /path/to/style_seg [--resize]
or
python StyleTransfer/tools/test_photorealistic.py --config-file StyleTransfer/configs/fps_photo_test.yaml --content path/to/content_img --style path/to/style_img --content-seg /path/to/content_seg_img --style-seg /path/to/style_seg [--resize]
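Because style statistics are matched per label, the content and style segmentation maps should use the same label values. A hypothetical sanity check (not part of the library) that lists labels missing a counterpart region in the other map:

```python
import numpy as np

def mismatched_labels(content_seg, style_seg):
    """Labels present in one segmentation map but not the other.

    Any label returned here has no matching semantic region to draw
    style statistics from (or transfer them to).
    """
    c = set(np.unique(content_seg).tolist())
    s = set(np.unique(style_seg).tolist())
    return sorted(c ^ s)
```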
- LST: support style interpolation and spatial control
- WCT: support style interpolation
- LST: support photo-realistic spatial control
If you'd like to cite StyleTransfer in your paper, you can use the following BibTeX entry:
@misc{Alen2019,
author = {Yang Gao},
title = {PyTorch Style Transfer Library},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/AlenUbuntu/StyleTransfer}},
}