Could you please provide the full code (minus anything unrelated)? Otherwise it's impossible for us to help you debug.
Has anyone had success with LoRA training with Dolly_v2_7B? Or is this even practical?
I am able to train on a raw text file and everything reports success. I can even apply the LoRA and that reports success too, but no matter how aggressive I make the training, I see no difference in the model output.
When I run perplexity evaluation, the result is identical with and without the LoRA applied.
In the terminal I see "Applying the following LoRAs to dolly_v2_7b: lora" but nothing after that. Should a success indicator appear in the terminal?
The model is loaded with transformers, no quantization. I'm running on a combination of Tesla M40 and Tesla K80 cards, 3 GPUs for 36 GB of VRAM total. Yes, I know they're outdated; no idea whether that could be related to my issue.
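One thing worth ruling out: a LoRA that applies "successfully" but is effectively untrained is a no-op by construction. PEFT initializes `lora_B` to zeros, so if training never actually updates the adapter weights, the merged output is bit-for-bit the base model, which would produce exactly the identical-perplexity symptom you describe. Here's a minimal NumPy sketch of the LoRA update `W' = W + (alpha/r) * B @ A` (dimensions are made up for illustration) showing both cases:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 16, 4, 8  # hypothetical hidden size, rank, and lora_alpha

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # lora_A: random init
B = np.zeros((d, r))          # lora_B: zero init, as in PEFT

x = rng.normal(size=(d,))

base      = W @ x
with_lora = (W + (alpha / r) * (B @ A)) @ x
print(np.allclose(base, with_lora))   # untrained adapter: outputs identical

# After real training, B is nonzero and the outputs diverge:
B_trained = rng.normal(size=(d, r)) * 0.1
trained   = (W + (alpha / r) * (B_trained @ A)) @ x
print(np.allclose(base, trained))
```

If your saved adapter's `lora_B` tensors are all (near-)zero, the problem is in training, not in applying the LoRA. You can check the actual checkpoint by loading `adapter_model.bin` with `torch.load` and inspecting the `lora_B` entries.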