v0.9.8 - flux 24 gig training has entered the chat #639
Replies: 5 comments · 1 reply
-
Nice, is it possible to cache the quantized model on disk for future runs when testing?
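One way this could work, assuming optimum-quanto's serialization helpers; the tiny module below stands in for the Flux transformer, and every path here is illustrative rather than this project's actual code path:

```python
# Sketch: cache a Quanto-quantized model on disk so later runs skip
# re-quantization. Assumes the optimum-quanto package; the tiny module
# is a stand-in for the Flux transformer, and all paths are illustrative.
import json

import torch
from safetensors.torch import load_file, save_file
from optimum.quanto import freeze, qint8, quantization_map, quantize, requantize

model = torch.nn.Sequential(torch.nn.Linear(64, 64))  # stand-in for Flux

# First run: quantize to int8, freeze, then dump weights + quantization map.
quantize(model, weights=qint8)
freeze(model)
save_file(model.state_dict(), "model-qint8.safetensors")
with open("model-qint8-map.json", "w") as f:
    json.dump(quantization_map(model), f)

# Later runs: rebuild the same architecture, then restore the quantized
# state directly, skipping the expensive quantization pass.
fresh = torch.nn.Sequential(torch.nn.Linear(64, 64))
state_dict = load_file("model-qint8.safetensors")
with open("model-qint8-map.json") as f:
    qmap = json.load(f)
requantize(fresh, state_dict, qmap, device=torch.device("cpu"))
```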
-
Awesome work! Curious how well LoRA on the "Dev" model works; any results worth sharing already? (BTW, the documentation link doesn't work.)
-
That's really good news! So the question is: can you support single-machine, multi-GPU training? A generic sketch of how that usually works is below.
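Not an answer from the maintainers, but for reference, here is a generic sketch of single-machine, multi-GPU data parallelism with Hugging Face Accelerate; the model and data are toy stand-ins and nothing below is specific to this repo:

```python
# Generic sketch of single-machine, multi-GPU data-parallel training with
# Hugging Face Accelerate. Launch with e.g.
#   accelerate launch --multi_gpu --num_processes 2 train_sketch.py
# Model and data are toy stand-ins, not this project's trainer.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(32, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 32), torch.randn(256, 1)
)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# prepare() wraps the model in DDP and shards the dataloader across GPUs.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # handles gradient sync across processes
    optimizer.step()
```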
-
Running the training now on an RTX 6000 Ada. What's the best way to run the LoRA with Flux after training completes?
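One common route, assuming the diffusers Flux integration (which may or may not be this project's recommended path), is loading the LoRA into FluxPipeline; the model id is the public FLUX.1-dev repo, and the LoRA path is illustrative:

```python
# Sketch: load a trained LoRA into diffusers' FluxPipeline for inference.
# Assumes the diffusers Flux integration; the LoRA path is illustrative.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/lora")  # directory or .safetensors file
pipe.enable_model_cpu_offload()  # keeps VRAM use manageable on 24G cards

image = pipe(
    "a photo of a corgi astronaut",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sample.png")
```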
-
Is there a Discord for this tool?
-
Flux
It's here! It runs on 24G cards using Quanto's 8-bit quantisation, or in 25.7G of memory on a MacBook (slowly)!
If you're after accuracy, a 40G card will do Just Fine, with 80G cards being somewhat of a sweet spot for larger training efforts.
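For context, the 8-bit step looks roughly like the following sketch, assuming diffusers' FluxTransformer2DModel and optimum-quanto; this is illustrative, not the trainer's exact code:

```python
# Minimal sketch of the Quanto int8 step that shrinks the Flux transformer's
# weight footprint (roughly 2x vs bf16). The model id is the public
# FLUX.1-dev repo; everything else is illustrative.
import torch
from diffusers import FluxTransformer2DModel
from optimum.quanto import freeze, qint8, quantize

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
quantize(transformer, weights=qint8)  # swap Linear weights for int8
freeze(transformer)                   # materialize the quantized weights
```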
What you get:
What's Changed
New Contributors
Full Changelog: v0.9.7.8...v0.9.8
This discussion was created from the release v0.9.8 - flux 24 gig training has entered the chat.