PEFT Lora transfer adapters between pipelines #1162
alejobrainz started this conversation in General
Hi. We are building a system that runs inference on the fly using different models. At present we cache pipeline.components (the text encoders, UNets, and VAEs) from the different models and load them on the fly by instantiating a new pipeline from the component cache held in memory. I wanted to know if there is a similar way to load and cache a LoRA adapter and apply it on the fly across different models, without needing to load the weights into each cached model.
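For reference, a minimal sketch of the setup described above, assuming the diffusers StableDiffusionPipeline API; the checkpoint id, the `cache_components` / `pipeline_from_cache` helpers, and the plain-dict cache are illustrative, not part of the original post.

```python
from diffusers import StableDiffusionPipeline

# Illustrative in-memory cache: model id -> dict of pipeline components
# (text encoder, tokenizer, UNet, VAE, scheduler, ...).
component_cache = {}

def cache_components(model_id: str) -> None:
    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    # pipe.components is a plain dict of the pipeline's submodules,
    # so it can be kept in memory and reused across requests.
    component_cache[model_id] = pipe.components

def pipeline_from_cache(model_id: str) -> StableDiffusionPipeline:
    # Instantiate a fresh pipeline that shares the cached modules
    # instead of reloading the weights from disk.
    return StableDiffusionPipeline(**component_cache[model_id])

cache_components("runwayml/stable-diffusion-v1-5")  # placeholder checkpoint
pipe = pipeline_from_cache("runwayml/stable-diffusion-v1-5")
```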
Replies: 1 comment

Hi, I honestly don't fully understand what you're doing. Could you maybe show some (simplified) code that illustrates it? In general, it is possible to merge the LoRA weights into the base model.
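As a hedged sketch of that suggestion, assuming a recent diffusers release with the PEFT backend enabled (peft installed): the adapter can either be kept separate and toggled per request, or merged ("fused") into the base weights and unfused again before switching models. The repo ids and the adapter name below are placeholders.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # placeholder base checkpoint
)

# Load the LoRA adapter into the pipeline's UNet / text encoder.
# load_lora_weights also accepts an already loaded state dict,
# which is one way to keep the adapter weights cached in memory.
pipe.load_lora_weights("someuser/some-lora", adapter_name="style")  # placeholder repo id

# Option 1: keep the adapter separate and enable it per request.
pipe.set_adapters(["style"], adapter_weights=[0.8])

# Option 2: merge the LoRA weights into the base weights,
# then undo the merge before reusing the components with another model.
pipe.fuse_lora()
# ... run inference ...
pipe.unfuse_lora()
pipe.unload_lora_weights()
```

Which of the two fits better probably depends on how often the adapters are swapped relative to the base models.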