# Stable Diffusion Inference using Intel® Extension for TensorFlow
| Use Case | Framework | Model Repo | Branch/Commit/Tag | Optional Patch |
| --- | --- | --- | --- | --- |
| Inference | TensorFlow | keras-cv | 66fa74b6a2a0bb1e563ae8bce66496b118b95200 | patch |
Note: Refer to CONTAINER.md for Stable Diffusion Inference instructions using docker containers.
- Host has Intel® Data Center GPU Flex Series
- Host has the latest Intel® Data Center GPU Flex Series driver installed: https://dgpu-docs.intel.com/driver/installation.html
- Install Intel® Extension for TensorFlow
1. Clone the model repository: `git clone https://github.com/IntelAI/models.git`
2. Change into the workload directory: `cd models/models_v2/tensorflow/stable_diffusion/inference/gpu`
3. Create a virtual environment `venv` and activate it: `python3 -m venv venv && . ./venv/bin/activate`
4. Run the setup script: `./setup.sh`
5. Install TensorFlow and ITEX.
6. Set the required environment parameters:

   | Parameter | export command |
   | --- | --- |
   | **PRECISION** (fp32 or fp16) | `export PRECISION=fp16` |
7. Run `run_model.sh`.
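Since `PRECISION` only accepts two values, it can help to validate it before launching. The guard below is a hypothetical helper, not part of the repo's scripts:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: PRECISION must be fp16 or fp32
# before run_model.sh is invoked. Defaults to fp16 when unset.
PRECISION="${PRECISION:-fp16}"
case "$PRECISION" in
  fp16|fp32)
    echo "PRECISION=$PRECISION"
    ;;
  *)
    echo "Unsupported PRECISION: $PRECISION (use fp16 or fp32)" >&2
    exit 1
    ;;
esac
```

Place a check like this at the top of a wrapper script so a typo (e.g. `fp8`) fails fast instead of partway through model loading.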
Output typically looks like:
50/50 [==============================] - 8s 150ms/step
latency 153.37058544158936 ms, throughput 6.520155068331838 it/s
Start plotting the generated images to ./images/fp16_imgs_50steps.png
Final results of the inference run can be found in the `results.yaml` file.
results:
- key: throughput
value: 6.520155068331838
unit: it/s
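For automated benchmarking it can be convenient to pull the throughput number out of `results.yaml` programmatically. A minimal sketch, assuming only the file shape shown above (the helper name is illustrative, not part of the repo):

```python
# Sketch: extract the throughput value from a results.yaml-style text,
# relying only on the "value:" line shown in the sample output above.
def read_throughput(text: str) -> float:
    """Return the first 'value:' entry in the results text as a float."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("value:"):
            return float(line.split(":", 1)[1])
    raise ValueError("no 'value:' entry found in results text")

sample = """\
results:
 - key: throughput
   value: 6.520155068331838
   unit: it/s
"""
print(read_throughput(sample))  # 6.520155068331838
```

A full YAML parser (e.g. PyYAML) would be more robust for nested documents, but for this flat structure a line scan suffices and avoids an extra dependency.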