Flux LoRA Training Setting #973
- Can you elaborate on the differences in the configuration?
- At least for person LoRA training, `export LEARNING_RATE=9e-5` works very well for me.
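For context, here is a minimal sketch of where that variable might sit in a SimpleTuner-style `config.env`. Everything besides `LEARNING_RATE` is an illustrative assumption, not a recommendation from this thread:

```bash
# Hypothetical excerpt of a config.env for a person LoRA run.
# Only LEARNING_RATE comes from the comment above; the other
# variables are assumed placeholders to show the file's shape.
export MODEL_TYPE='lora'        # standard PEFT LoRA (the older default)
export LEARNING_RATE=9e-5       # the value reported to work well for person LoRAs
export LR_SCHEDULE='constant'   # assumed scheduler; adjust to taste
export MAX_NUM_STEPS=1500       # the OP reports convergence around 1500 steps
```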
- Well, that is a PEFT LoRA. It has had problems in the past with resuming training, and it still has issues with multi-GPU training when quantised. LyCORIS LoKr is superior to PEFT LoRA; it just needs a higher learning rate. So chances are you will be happier with the new defaults, you just need to experiment a bit.
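To make the comparison concrete, a minimal sketch of what switching to LoKr might look like, assuming the env-file plus JSON-config convention used by SimpleTuner-style setups. The field names, the file path, and the learning-rate value are illustrative assumptions; check the quickstart for the actual defaults:

```bash
# Assumed switch from PEFT LoRA to a LyCORIS LoKr adapter.
export LORA_TYPE='lycoris'      # hypothetical toggle for the adapter backend
export LEARNING_RATE=1e-3       # LoKr is said to tolerate a much higher LR than PEFT LoRA

# Write an assumed LyCORIS config selecting the LoKr algorithm;
# dims/alpha/factor here are placeholder values, not tuned settings.
cat > config/lycoris_config.json <<'EOF'
{
  "algo": "lokr",
  "multiplier": 1.0,
  "linear_dim": 10000,
  "linear_alpha": 1,
  "factor": 16
}
EOF
```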
- See this report I prepared on some of the benefits of LoKr and how to get the most out of it.
- Hi, thank you for this fantastic repository! I've been using it for my Flux LoRA training for a couple of weeks now, and it has been working great.
I recently noticed that you made significant changes to the Flux Quickstart documentation. I decided to try out the new settings, but in my case the old recommendations worked better. Specifically, I'm training a LoRA for person images, and with the previous settings my LoRA converged in around 1500 steps, producing excellent results.
I was wondering if we could discuss this further? It might be helpful to understand the rationale behind the new settings and whether the previous approach is still valid for certain cases. (For now, I've decided to continue with the previous settings.)
Thanks again for your amazing work!