Key mismatch when trying to load a LoRA adapter into an X-LoRA model #2132
Comments
Yes, I can confirm that it's not working. I condensed the example a little and switched to the plain BERT model:

```python
import torch
from transformers import AutoModelForSequenceClassification
from peft import get_peft_model, LoraConfig, TaskType, XLoraConfig

model_name = "google-bert/bert-base-uncased"

# Create a LoRA adapter (random weights stand in for a trained one) and save it.
lora_config_sentiment = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    init_lora_weights=False,
)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
peft_model = get_peft_model(model, lora_config_sentiment)
lora_path = "/tmp/peft/2132"
peft_model.save_pretrained(lora_path)
del model, peft_model

# Load a fresh copy of the base model and try to wrap it with X-LoRA,
# pointing the config at the adapter saved above.
xlora_model = AutoModelForSequenceClassification.from_pretrained(model_name, use_cache=False)
xlora_peft_config = XLoraConfig(
    task_type="SEQ_CLS",
    adapters={
        "adapter_1": lora_path,
    },
)
# Apply X-LoRA to the model: this raises an error.
model = get_peft_model(xlora_model, xlora_peft_config)
```

The error I get is:
@EricLBuehler could you please take a look?

Could it be a while until this is addressed?

@EricLBuehler Do you know if you have time to take a look at this soon?
Hi there, I just added

```python
if key.startswith("model.model") or key.startswith("model."):
    key = key.replace("model.", "")
```

at line 135 of model.py, but I'm not sure it's an effective fix; hoping it's helpful.
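Spelled out, that suggestion amounts to something like the following sketch (a hypothetical helper, not a tested patch; exactly which prefixes need stripping depends on where the loader adds them):

```python
# Minimal sketch of the workaround described in the comment above: strip
# the extra "model." prefix so the adapter's state_dict keys line up with
# the base model's module names again. The helper name is hypothetical.
def strip_extra_model_prefix(state_dict):
    fixed = {}
    for key, value in state_dict.items():
        if key.startswith("model.model.") or key.startswith("model."):
            # Remove only the first "model." occurrence; replacing every
            # occurrence (as in the one-liner above) could also mangle keys
            # that legitimately contain "model." deeper in the name.
            key = key.replace("model.", "", 1)
        fixed[key] = value
    return fixed
```

Whether one or two levels of prefix have to go depends on how the keys were produced in the first place, which may be why the comment hedges on this being an effective fix.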
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
not stale |
System Info
peft==0.13.0
Who can help?
@EricLBuehler
Information

Tasks

An officially supported task in the examples folder

Reproduction
Code to train the sample LoRA

Code to load the trained LoRA into an X-LoRA model
Expected behavior

The expected behaviour is that the LoRA adapter integrates successfully into the X-LoRA model.

The problem arises from the function `_load_adapter_into_lora_model` inside the `src/tuners/xlora/model.py` file: it adds an extra `model.` prefix to the keys inside the `state_dict` of the adapter model, so those keys no longer match the module names of the model being wrapped.