Refactor: wtype per tensor from file instead of global #455
base: master
Conversation
Trying master again after having these changes implemented for a while, it feels like this reduces the loading times, but maybe there's something else affecting it. |
Well, it removes conversions. I had this today when I loaded flux with a q8_0 t5 and an f16 clip and was wondering why t5 was using f16 (including RAM usage). It turns out sd.cpp can only have one conditioner wtype right now... |
You probably saw the embedding models. |
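For context, a minimal sketch (not code from this PR) of reading the weight type per tensor straight from a GGUF file with ggml's public gguf API, which is what makes a per-tensor wtype possible at all. It assumes a recent ggml checkout where the gguf declarations live in gguf.h; on older revisions they sit in ggml.h instead.

```cpp
// Sketch: list the per-tensor weight types stored in a GGUF file.
// Assumes a recent ggml (gguf API in gguf.h).
#include "ggml.h"
#include "gguf.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    gguf_init_params params = {
        /*.no_alloc =*/ true,    // read metadata only, skip tensor data
        /*.ctx      =*/ nullptr,
    };
    gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", argv[1]);
        return 1;
    }

    // The file records a ggml_type per tensor, so e.g. a q8_0 t5 and an
    // f16 clip can coexist; nothing at this level forces a global wtype.
    const int n = (int) gguf_get_n_tensors(ctx);
    for (int i = 0; i < n; ++i) {
        printf("%-64s %s\n",
               gguf_get_tensor_name(ctx, i),
               ggml_type_name(gguf_get_tensor_type(ctx, i)));
    }

    gguf_free(ctx);
    return 0;
}
```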
Force-pushed from 6167c1a to cb46146
I can confirm now: this PR makes loading weights much faster for larger models. (Results include warm runs only, so the models are always in disk cache.)
I think this makes the PR worth merging. |
But how do the other performance metrics change? Also, does it all work? |
I think so.
Diffusion/sampling and VAE performance are within the margin of error. Prompt encoding is significantly faster when mixing quantizations. Edit: Photomaker (V1 and V2) works. LoRAs work too (on CPU, and on Vulkan without quantization). |
I'm not sure if it makes a significant difference yet.