Unable to merge_and_unload combined model generated from fine-tuning llama-2 models with prompt tuning methodology. #1140
-
I am currently fine-tuning llama-2 models (7b, 13b, and 70b) with the prompt tuning methodology using the peft library. My peft_config is defined as follows.
After training, I was able to combine the base model and the generated adapter as shown below,
but I could only save the adapter to disk, not the combined model. It seems to me that the `merge_and_unload` function in peft is only applicable to LoRA, or am I missing something? Is there a way to save the combined model to disk as a standalone model?
Replies: 3 comments 2 replies
-
Hi @adekunleoajayi
-
Hello @BenjaminBossan @younesbelkada, thank you for your reply. Just to be clear, are you saying that this is not a missing feature in the peft library but rather a constraint of the prompt tuning methodology? That is, the method generates soft prompt parameters that can only be prepended to the model's input embeddings, and hence cannot be injected into the base model's weights?
-
`merge_and_unload` works for some adapters (LoRA, IA³, LoHa, etc.) but not for prompt learning.