- You cannot do this. You will need to merge your LoRA weights first and then train_from the merged model.
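  For reference, a minimal sketch of that merge step, assuming the adapter was saved in Hugging Face PEFT format (the base model id, adapter path, and output path below are placeholders, not paths from this thread):

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import PeftModel

  BASE_MODEL = "base-model-id"             # hypothetical: the model the LoRA was trained on
  ADAPTER_DIR = "path/to/lora_checkpoint"  # hypothetical: directory holding only the LoRA weights
  MERGED_DIR = "path/to/merged_model"      # hypothetical: output directory for the merged model

  # Load the base model, attach the LoRA adapter, and fold the deltas into the base weights.
  base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
  model = PeftModel.from_pretrained(base, ADAPTER_DIR)
  merged = model.merge_and_unload()

  # Save the merged model so the next training run can start from it.
  merged.save_pretrained(MERGED_DIR)
  AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained(MERGED_DIR)
  ```

  The next finetuning run would then point its model path at the merged directory instead of the LoRA checkpoint directory.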
- I tried to relaunch the finetuning with the same config, but it restarted the training from scratch. Moreover, since the checkpoint directory only contains the LoRA weights, it may require some adaptations.