Replies: 3 comments · 2 replies
-
Hi, this is not the way. Could you try something like this instead?

```python
model = get_peft_model(model, lora_config, adapter_name="A")
# add 2nd adapter, "A" is still the active adapter
model.add_adapter("B", profile_lora_config)
# now train adapter "A"
...
# switch to adapter "B" and train it
model.set_adapter("B")
...
```
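For concreteness, here is a runnable sketch of that sequence. The base model, tokenizer, dummy batch, learning rate, and the two `LoraConfig` objects are placeholders picked for illustration; they are not from this thread.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder base model and a dummy batch, just to make the sketch self-contained.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
batch = tokenizer(["an example training sentence"], return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()

# One LoraConfig per adapter (placeholder hyperparameters).
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
profile_lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")

model = get_peft_model(base, lora_config, adapter_name="A")
model.add_adapter("B", profile_lora_config)

def train_active_adapter(model, num_steps):
    # The base model is frozen, so the trainable parameters are LoRA weights.
    # Only the active adapter is used in the forward pass, so only its weights
    # receive gradients and get updated here.
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(params, lr=1e-4)
    for _ in range(num_steps):
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# adapter "A" is still the active adapter right after get_peft_model
train_active_adapter(model, num_steps=10)

# switch to adapter "B" and train it
model.set_adapter("B")
train_active_adapter(model, num_steps=10)
```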
-
@BenjaminBossan Thanks for the reply. However, adapters A and B are trained simultaneously, since the loss depends on both outputA and outputB. Is there a way to achieve that?
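The excerpt doesn't show how this was resolved, but to make the question concrete: the setup described here is roughly two forward passes over the same batch, one per active adapter, feeding a single combined loss. Continuing from the sketch above, with an arbitrary stand-in for the joint objective:

```python
# Continuing with the two-adapter model from the sketch above.
model.set_adapter("A")
output_a = model(**batch)   # depends on the frozen base weights + adapter "A" only

model.set_adapter("B")
output_b = model(**batch)   # depends on the frozen base weights + adapter "B" only

# Stand-in for whatever joint objective couples the two adapters.
loss = torch.nn.functional.mse_loss(output_a.logits, output_b.logits)
```

Note that in recent PEFT versions `set_adapter` also flips which adapter's parameters are marked trainable (`requires_grad`), so how to backpropagate such a joint loss into both adapters cleanly is exactly what this comment is asking about.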
-
I want to train 2 LoRA models in conjunction on my dataset. I don't want gradients from one model to impact the other. However, since the base model is the same, I am confused whether just setting adapter_name like below would achieve this purpose?
And then in the model forward ...
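The code this post refers to is not included in the excerpt. As a small add-on to the training sketch further up, one way to check the "gradients don't leak across adapters" property is to snapshot one adapter's weights, train the other, and verify nothing moved. This relies on the fact that PEFT stores LoRA weights in module dicts keyed by the adapter name, so the adapter name appears in the parameter names:

```python
# Continuing from the training sketch above: snapshot adapter "B", train "A",
# then check that "B" was left untouched.
before = {
    name: param.detach().clone()
    for name, param in model.named_parameters()
    if ".B." in name  # parameters belonging to adapter "B"
}

model.set_adapter("A")
train_active_adapter(model, num_steps=5)

untouched = all(
    torch.equal(before[name], param.detach())
    for name, param in model.named_parameters()
    if name in before
)
print("adapter B untouched after training A:", untouched)
```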