
Switch to AutoencoderTiny in Diffusers Examples #3783

Open
olegchomp opened this issue Apr 6, 2024 · 4 comments
Labels
triaged Issue has been triaged by maintainers

Comments


olegchomp commented Apr 6, 2024

I'm trying to switch from AutoencoderKL to AutoencoderTiny in demo_txt2img_xl with the Turbo model. After a few attempts at changing models.py it finally works, but the images come out with artifacts.

python demo_txt2img_xl.py "Einstein" --version xl-turbo --onnx-dir onnx-sdxl-turbo --engine-dir engine-sdxl-turbo --denoising-steps 1 --scheduler EulerA --guidance-scale 0  --width 512 --height 512

(attached output image: xl_base-Einstein-None-1-6187)
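For reference, a minimal Diffusers-only sketch of the swap being attempted (outside the TensorRT demo) looks roughly like this; the model IDs, dtype, and output filename here are assumptions for illustration, not the demo's actual code:

```python
# Hypothetical Diffusers-side sketch: swap the default AutoencoderKL for
# AutoencoderTiny (TAESDXL) in an SDXL Turbo text-to-image pipeline.
import torch
from diffusers import AutoPipelineForText2Image, AutoencoderTiny

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
)
# Replace the full-size VAE with the tiny autoencoder.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Mirror the demo's settings: 1 denoising step, guidance scale 0, 512x512.
image = pipe(
    "Einstein",
    num_inference_steps=1,
    guidance_scale=0.0,
    width=512,
    height=512,
).images[0]
image.save("einstein.png")
```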

@theNefelibata

Did you modify the cfg value? The cfg for TAESD should be 1.0.

@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label Apr 12, 2024
@zerollzeng
Collaborator

Sorry, there is not much I can help with here, but you can ping me if you have any TensorRT-specific questions or bugs.

@madebyollin

Looks like the VAE scaling factor may be hard-coded to 0.13? https://github.com/NVIDIA/TensorRT/blob/release/10.0/demo/Diffusion/demo_txt2img_xl.py#L131C12-L131C65 For TAESDXL the scaling factor should be 1.0.
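To illustrate the mismatch (a Diffusers-side sketch, not the TensorRT demo's code): TAESDXL ships with scaling_factor = 1.0, so dividing its latents by the SDXL VAE's factor (~0.13) before decoding would rescale them incorrectly, which could explain the artifacts reported above:

```python
# Sketch: check the scaling factor that TAESDXL expects.
from diffusers import AutoencoderTiny

taesdxl = AutoencoderTiny.from_pretrained("madebyollin/taesdxl")
print(taesdxl.config.scaling_factor)  # expected: 1.0

# The usual decode pattern reads the factor from the VAE config instead of
# hard-coding it, so it stays correct for both AutoencoderKL and TAESDXL:
#   image = vae.decode(latents / vae.config.scaling_factor).sample
```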

@lix19937

The scale value in the config needs to be changed.
