How to enable multiple LoRA adapters? #576
Hey, would this Colab notebook help: https://colab.research.google.com/github/Adapter-Hub/adapter-transformers/blob/master/notebooks/Parallel_Adapter_Inference.ipynb
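For context, here is a minimal sketch of the Parallel composition that notebook demonstrates, assuming the adapter-transformers API of that era; the checkpoint and adapter identifiers below are placeholders for illustration, not taken from this thread.

```python
from transformers import AutoAdapterModel, AutoTokenizer
from transformers.adapters.composition import Parallel

# Base checkpoint and adapter identifiers are placeholders.
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Load two independently trained adapters (with their prediction heads).
adapter_a = model.load_adapter("AdapterHub/bert-base-uncased-pf-sst2", source="hf")
adapter_b = model.load_adapter("AdapterHub/bert-base-uncased-pf-mnli", source="hf")

# Parallel replicates the input and runs both adapters side by side,
# producing one set of outputs per active adapter in a single call.
model.set_active_adapters(Parallel(adapter_a, adapter_b))

inputs = tokenizer("Adapters are neat.", return_tensors="pt")
outputs = model(**inputs)
```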
Hi @StephennFernandes, thanks for the reply. However, what I need is more like the "Stack" composition rather than this "Parallel" composition for LoRA, see this doc. When I tried to activate two LoRA adapters with a Stack composition, the package did not support it. Any guidance about how to revise this package to support the above feature would be greatly appreciated, thanks!
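To make the failure mode concrete, the attempted usage presumably looks something like the following (a hypothetical reconstruction with placeholder model and adapter names; per the discussion in this issue, the Stack composition is not implemented for LoRA in this version, so the activation does not have the intended effect).

```python
from transformers import AutoAdapterModel
from transformers.adapters import LoRAConfig
from transformers.adapters.composition import Stack

# Base checkpoint and adapter names are placeholders.
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("lora_a", config=LoRAConfig(r=8, alpha=16))
model.add_adapter("lora_b", config=LoRAConfig(r=8, alpha=16))

# Desired: apply both LoRA deltas in the same forward pass.
# As discussed in this thread, stacking LoRA adapters is not supported here.
model.set_active_adapters(Stack("lora_a", "lora_b"))
```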
Hi @Jaja612, the LoRA paper suggests merging LoRA adapters so as not to change the inference time: "Our simple linear design allows us to merge the trainable matrices with the frozen weights when deployed, introducing no inference latency compared to a fully fine-tuned model, by construction." (LoRA: Low-Rank Adaptation of Large Language Models, Hu et al.) Other modular combinations, such as stacking, are therefore not in line with the paper's original idea, which is why we have not yet implemented them.
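To make the quoted point concrete, here is a small standalone NumPy sketch (toy values, not library code) of why a merged LoRA adds no inference latency: the low-rank update can be folded into the frozen weight before deployment, so inference again uses a single dense matrix.

```python
import numpy as np

d, r = 16, 4                      # hidden size and LoRA rank (toy values)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))       # frozen pretrained weight
A = rng.normal(size=(r, d))       # trained LoRA down-projection
B = rng.normal(size=(d, r))       # trained LoRA up-projection
alpha = 8                         # LoRA scaling factor
x = rng.normal(size=(d,))

# Un-merged: base path plus the low-rank path.
h_unmerged = W @ x + (alpha / r) * (B @ (A @ x))

# Merged for deployment: one dense matrix, no extra latency.
W_merged = W + (alpha / r) * (B @ A)
h_merged = W_merged @ x

assert np.allclose(h_unmerged, h_merged)
```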
Hi, thanks for this great project! I am wondering what I should revise to support one forward pass with multiple LoRA adapters. It seems to be a straightforward extension, but the current version doesn't support stacking (activating) multiple LoRA adapters.
Any guidance and help would be appreciated, thanks! @calpt
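For reference, the requested extension boils down to applying several low-rank updates in the same forward pass. Below is a conceptual NumPy sketch assuming the simplest "stacking" semantics, namely summing the deltas of all active adapters; this is an illustration only, not the package's implementation.

```python
import numpy as np

def lora_stack_forward(W, x, adapters):
    """Frozen weight plus the deltas of several active LoRA adapters.

    adapters: list of (A, B, scaling) tuples, one per active LoRA module.
    Summing the deltas is one plausible semantics for "stacking" LoRA;
    it is not the package's actual behavior.
    """
    h = W @ x
    for A, B, scaling in adapters:
        h = h + scaling * (B @ (A @ x))
    return h

rng = np.random.default_rng(0)
d, r = 16, 4
W = rng.normal(size=(d, d))
x = rng.normal(size=(d,))
adapters = [
    (rng.normal(size=(r, d)), rng.normal(size=(d, r)), 8 / r),  # e.g. "lora_a"
    (rng.normal(size=(r, d)), rng.normal(size=(d, r)), 8 / r),  # e.g. "lora_b"
]
h = lora_stack_forward(W, x, adapters)
```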