Is there any reason not to use the tokenizer after applying LORA? #1735
The official HuggingFace guide uses the tokenizer of the base model (before LoRA is applied) for inference.
Answered by BenjaminBossan, May 16, 2024
It is the same tokenizer. If you check the config that's loaded in this example, you'll see the base model it refers to: https://huggingface.co/ybelkada/opt-6.7b-lora/blob/main/adapter_config.json#L2
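To illustrate the point, here is a minimal sketch of how the adapter config ties a LoRA checkpoint back to its base model. The JSON contents below are illustrative stand-ins for the linked `adapter_config.json` (the `base_model_name_or_path` key is the one the answer refers to):

```python
import json

# Illustrative contents mirroring the shape of adapter_config.json
# from ybelkada/opt-6.7b-lora; values here are assumptions for the sketch.
adapter_config = json.loads("""
{
  "base_model_name_or_path": "facebook/opt-6.7b",
  "peft_type": "LORA"
}
""")

# The LoRA adapter has no tokenizer of its own; the tokenizer comes
# from the base model recorded in the adapter config.
base_model_id = adapter_config["base_model_name_or_path"]
print(base_model_id)  # → facebook/opt-6.7b

# With transformers and peft installed, inference would then look like:
# from transformers import AutoTokenizer
# from peft import AutoPeftModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-6.7b-lora")
```

Since LoRA only adds low-rank weight deltas and never touches the vocabulary or tokenization rules, loading the tokenizer from the base model is exactly what the official guide does.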
There is no "ybelkada/opt-6.7b-lora" tokenizer; the LoRA model uses the same tokenizer as the original model.