Just want to say a MASSIVE thank you! #59
Glad you made it! For Flux, I strongly recommend updating your GPU driver to the latest version available, then using CUDA 12.4 + PyTorch 2.4. Check out the ComfyUI Examples and see whether Schnell fp8 performs better. For reference, I'm using a Titan Xp 12GB (similar spec to the P40, but half the VRAM). Good luck!

Edit (translated by ChatGPT): I suggest identifying the bottleneck first, such as slow disk I/O, low memory, frequent paging, or an issue in the workflow itself. I also tested the example workflow more thoroughly; based on the time reported as "Prompt executed in ??? seconds", the results were as follows:
I suspect this is related to GPU drivers, as the behavior differs between SD1.5 and SDXL generation. If possible, try using Flux on Windows to see if there's any performance difference.
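If it helps with narrowing things down, here is a quick sanity check — a minimal sketch assuming PyTorch is installed (the psutil part is optional) — that prints the PyTorch/CUDA versions, the detected GPU, and basic RAM/swap pressure:

```python
# Quick environment check: PyTorch / CUDA versions, GPU, and memory pressure.
# Minimal sketch -- assumes PyTorch is installed; psutil is optional and only
# used to spot low-RAM / heavy-swap conditions that cause paging.
import torch

print("PyTorch:", torch.__version__)           # e.g. 2.4.x
print("CUDA (runtime):", torch.version.cuda)   # e.g. 12.4 for a cu124 build
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    free, total = torch.cuda.mem_get_info(0)   # free/total VRAM in bytes
    print(f"VRAM free/total: {free / 2**30:.1f} / {total / 2**30:.1f} GiB")

try:
    import psutil  # optional: system RAM and swap usage (paging pressure)
    vm, sw = psutil.virtual_memory(), psutil.swap_memory()
    print(f"RAM used: {vm.percent}%  swap used: {sw.percent}%")
except ImportError:
    pass
```

If swap usage is high while a prompt runs, the slowdown is likely paging rather than the GPU itself.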
I have been trying to get Flux working using ComfyUI. Your project works, but it's slow! Here are the logs:

I am using an Nvidia Tesla P40 24G card. I had to pass --disable-cuda-malloc in the CLI_ARGS, otherwise I was getting a sampler error (see here: comfyanonymous/ComfyUI#1845). Thank you for your efforts... (I am using cu121-megapack because my CUDA version is 12.2...)
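On the --disable-cuda-malloc point: below is a minimal sketch for checking which CUDA allocator PyTorch actually ended up with. It assumes the flag works by keeping PyTorch on its default "native" caching allocator instead of the cudaMallocAsync backend (selected through PYTORCH_CUDA_ALLOC_CONF), and it uses torch.cuda.get_allocator_backend(), which should be present on recent PyTorch builds:

```python
# Check which CUDA allocator PyTorch is using inside the ComfyUI environment.
# Minimal sketch -- assumes --disable-cuda-malloc keeps PyTorch on the default
# "native" caching allocator rather than the cudaMallocAsync backend.
import os
import torch

print("PYTORCH_CUDA_ALLOC_CONF:",
      os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "<unset>"))

if torch.cuda.is_available():
    # "native" = default caching allocator, "cudaMallocAsync" = async allocator
    print("Allocator backend:", torch.cuda.get_allocator_backend())
```

If the backend still reports "cudaMallocAsync" after passing the flag, the CLI_ARGS value probably isn't reaching ComfyUI.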