Replies: 4 comments 2 replies
-
Technically I think this is possible: you can merge only the adapters that you want to merge, and the `merged_adapters` attribute of `LoraLayer` (peft/src/peft/tuners/lora/layer.py, line 267 at bffbbbf) keeps track of which adapters are currently merged: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py#L48
You can simply use that list to retrieve the LoRA weights from the merged layers.
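For concreteness, a minimal sketch of that idea: walk the model's `LoraLayer` modules and read `merged_adapters` together with the corresponding `lora_A`/`lora_B` weights. This assumes a recent peft version where those attributes exist, that the layers are plain `Linear` LoRA layers, and that the adapters were merged with `merge()` but not removed with `merge_and_unload()`; the helper name is just illustrative.

```python
# Sketch: collect the A/B matrices of adapters currently merged into each LoraLayer.
# Assumes `merged_adapters`, `lora_A` and `lora_B` are present (recent peft releases)
# and that merge_and_unload() has NOT been called, so the LoRA layers still exist.
import torch
from peft.tuners.lora import LoraLayer


def collect_merged_lora_weights(model):
    merged_weights = {}
    for name, module in model.named_modules():
        if isinstance(module, LoraLayer) and module.merged_adapters:
            for adapter in module.merged_adapters:
                # lora_A / lora_B are ModuleDicts of nn.Linear keyed by adapter name
                a = module.lora_A[adapter].weight.detach().cpu()
                b = module.lora_B[adapter].weight.detach().cpu()
                merged_weights.setdefault(adapter, {})[name] = (a, b)
    return merged_weights
```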
-
What I am currently looking for is this option, since many models (e.g. WizardLM, Dolphin, OpenHermes, etc.) are distributed with merged LoRA weights (merge + unload). I am interested in this because S-LoRA and LoRAX help schedule how the LoRA adapters are used, and I would only need to host one base model plus multiple LoRA adapters (which take up a significantly smaller amount of GPU memory).
-
You could use peft/src/peft/utils/save_and_load.py, lines 41 to 43 at 26504a0.
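Assuming the referenced lines belong to `get_peft_model_state_dict` (which lives in that file), a hedged usage sketch might look like the following; `export_adapter` is just an illustrative wrapper name, and the model must still carry its LoRA layers (i.e. not unloaded).

```python
# Sketch: filter the LoRA tensors out of a PeftModel's state dict and save them
# as a standalone adapter checkpoint. Assumes `peft_model` is a PeftModel whose
# adapters have not been merged-and-unloaded yet.
import torch
from peft import get_peft_model_state_dict


def export_adapter(peft_model, adapter_name="default", path="adapter_model.bin"):
    lora_state_dict = get_peft_model_state_dict(peft_model, adapter_name=adapter_name)
    torch.save(lora_state_dict, path)
    return lora_state_dict
```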
-
I implemented something in this direction using singular value decomposition (SVD). I call it LoRD, for Low-Rank Decomposition.
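For reference, here is a rough sketch of that SVD approach (not the LoRD code itself, just the general technique): subtract the base weight from the merged weight and keep a rank-r truncated SVD of the difference as the A/B pair.

```python
# Sketch: approximate delta_W = W_merged - W_base with a rank-r factorization
# B @ A, which can then be stored as a LoRA adapter of rank r.
import torch


def extract_lora_via_svd(w_merged: torch.Tensor, w_base: torch.Tensor, rank: int):
    """Return (lora_A, lora_B) such that lora_B @ lora_A ~= w_merged - w_base."""
    delta = (w_merged - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions and split sqrt(s) between A and B.
    u_r = u[:, :rank]
    s_r = torch.sqrt(s[:rank])
    vh_r = vh[:rank, :]
    lora_B = u_r * s_r              # shape (out_features, rank)
    lora_A = s_r[:, None] * vh_r    # shape (rank, in_features)
    return lora_A, lora_B
```

Note that PEFT scales the LoRA update by `lora_alpha / r` at load time, so when packaging the result as an adapter you either set `lora_alpha` equal to the chosen rank (scaling factor 1) or fold the scaling into the factors.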
-
Is there any way to extract a LoRA adapter from a merged model?
Recently, there have been marvelous improvements in deploying multiple LoRAs. I wonder whether there is any existing code that could be used to extract the merged weights back into a LoRA adapter, to make use of the existing multi-LoRA inference engines, e.g. LoRAX, and to save on storage.