resuming training from LoRA checkpoint fails when base model is quantised #659
Labels
- help wanted: Extra attention is needed
- regression: This bug has regressed behaviour that previously worked.
- upstream-bug: We can't do anything but wait.
A workaround is to continue training without the base model being quantised, but obviously that's difficult to impossible. The bug appears to be upstream in PEFT.
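For context, a minimal sketch of the kind of setup that hits this: a bitsandbytes-quantised base model with a previously saved LoRA adapter reattached via PEFT to resume training. The model id and checkpoint path are placeholders, not taken from the original report.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the base model quantised in 4-bit with bitsandbytes (assumed setup).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # hypothetical model id
    quantization_config=bnb_config,
    device_map="auto",
)

# Reattach the saved LoRA adapter with is_trainable=True so training can
# resume; resuming from this state is where the failure is reported.
model = PeftModel.from_pretrained(
    base,
    "path/to/lora-checkpoint",       # hypothetical checkpoint path
    is_trainable=True,
)
```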