Error when merging LoRA weights #25
Hi @Luo-Z13, thank you for your interest in our work. The error you are getting is because you have not added the PAD token and resized the tokenizer embeddings; specifically, the pad token is added at Line 65 in b93d9c8.
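For anyone hitting the same error, here is a minimal sketch of the fix described above using the standard Hugging Face API; the checkpoint path and the literal `<pad>` string are placeholders, not necessarily what this repo uses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path; substitute your own SFT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("path/to/llava-llama3")
model = AutoModelForCausalLM.from_pretrained("path/to/llava-llama3")

# Llama-3 tokenizers ship without a pad token, so register one explicitly
# and grow the embedding matrix so the new token id is in range.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    model.resize_token_embeddings(len(tokenizer))
```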
Hello, I checked my file and I have copied it, including the pad-token line (Line 64 in b93d9c8), so maybe this error is caused by something else?
Hello, I tried to conduct inference using the provided weights directly:

And the warning also appears:

What may be the possible reasons?
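The warning text is not shown above, but if it is the usual vocab/embedding size mismatch that the pad-token fix addresses, a quick check like this sketch (paths are placeholders) makes it visible:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint")  # placeholder
model = AutoModelForCausalLM.from_pretrained("path/to/checkpoint")

# If these two numbers differ, the saved embedding matrix no longer lines
# up with the tokenizer, which typically triggers shape-mismatch warnings.
print("tokenizer vocab size:", len(tokenizer))
print("embedding rows:", model.get_input_embeddings().weight.shape[0])
```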
Hello, I loaded the pre-trained llava-llama3 SFT weights and fine-tuned them with LoRA, but I get an error when merging the weights.

Scripts:

Training:

Merge LoRA:

Error:
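For reference, a generic way to merge a LoRA adapter into its base model with PEFT's public API looks like the sketch below; all paths are placeholders and the repo's own merge script may differ. The key point, per the reply above, is to add the PAD token and resize the embeddings on the base model before attaching the adapter, so the shapes match what training saw:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder paths; substitute your own checkpoints.
base = AutoModelForCausalLM.from_pretrained("path/to/llava-llama3-sft")
tokenizer = AutoTokenizer.from_pretrained("path/to/llava-llama3-sft")

# The base model must have the same vocab size the LoRA run saw:
# add the PAD token and resize *before* attaching the adapter.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    base.resize_token_embeddings(len(tokenizer))

lora = PeftModel.from_pretrained(base, "path/to/lora-checkpoint")
merged = lora.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged")
tokenizer.save_pretrained("path/to/merged")
```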