PoC: add chat template heuristics #1283

Open

kallewoof wants to merge 15 commits into concedo_experimental from 202412-ct-heuristics
Conversation

@kallewoof commented Dec 24, 2024

The fallback chat template adapter of Vicuna is not ideal in some cases (e.g. a test against a sub-portion of the BBC news classification task on Kaggle gave 82% accuracy with Vicuna and 88% with the official ChatML format for a q4_k_m Qwen 2.5 3B-Instruct gguf).

This PR adds a simple proof-of-concept heuristic that looks at the chat template and upgrades the adapter when it is able to (a sketch of the idea appears below).

  • Determine where this should go. koboldcpp.py seems big enough as it is, but the project does not seem to split code into separate .py files often, if at all.
  • Extend the cases to cover more chat templates, e.g. Llama 3.1 and the various Mistral versions.

Alternative approach: expose the llama chat template mechanism and use that.
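
A minimal sketch of the idea (the adapter names and matched substrings here are illustrative, not the PR's actual tables):

```python
# Illustrative heuristic table: a telltale substring of the Jinja chat
# template, mapped to the adapter to upgrade to. Real templates vary.
HEURISTICS = [
    ("<|im_start|>", "ChatML"),
    ("<|start_header_id|>", "Llama 3"),
    ("[INST]", "Mistral"),
]

def guess_adapter(chat_template: str) -> str | None:
    """Return an adapter name when the template matches a known pattern."""
    for needle, adapter in HEURISTICS:
        if needle in chat_template:
            return adapter
    return None  # no match: the caller keeps the existing fallback
```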

@LostRuins (Owner)

The problem is that the chat template cannot be trusted. It is set by unknown third parties and is very often straight-up incorrect or misleading - that's the whole reason why I didn't use the jinja template to begin with. I'm alright with frontends using the /props endpoint to make their own pick, but I'm not sure overwriting the default is a good idea.

The reason why the default is Alpaca (not Vicuna, in fact) is that \n### Instruction:\n is a tokenizer-resistant format. With ChatML, for example, if the vocab doesn't have <|im_start|> and <|im_end|> as added tokens, the sequence gets split into an awful mess of tokens like [<, |, im, _, start, |, >], which severely degrades output quality. Alpaca, being plain English, is relatively unaffected, and the leading newline ensures that tokenizers with huge vocabs, like Llama 3's, don't merge part of the instruct sequence into the input text tokens.
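
A quick way to observe this effect, assuming a Hugging Face transformers tokenizer (the model id and the exact splits shown are illustrative):

```python
from transformers import AutoTokenizer

# A tokenizer whose vocab lacks <|im_start|>/<|im_end|> as added tokens.
tok = AutoTokenizer.from_pretrained("some-base-model")  # illustrative model id

print(tok.tokenize("<|im_start|>user"))
# e.g. ['<', '|', 'im', '_', 'start', '|', '>', 'user'] -- the marker shatters

print(tok.tokenize("\n### Instruction:\n"))
# splits into ordinary word pieces any BPE vocab handles, so the format survives
```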

@kallewoof (Author) commented Dec 25, 2024

I see. When I was evaluating a bunch of different models against a test, I was completely unaware that they were all using Alpaca (sorry, mixed them up), and when I switched, there was a significant increase in accuracy across the board. I understand not wanting to trust third parties willy-nilly. Perhaps there can be a flag that lets users choose whether or not they want to trust it?

Edit: We can also add built-in guard rails to deal with fuckery like unknown tokens, i.e. for every chat template profile, we include a list of tokens that must be present in the tokenizer; if any are missing, the chat template is not adopted. (A sketch of this check follows.)
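
A minimal sketch of that guard rail, with hypothetical profile names and required-token lists:

```python
# Hypothetical table: tokens each profile requires in the model's vocab.
REQUIRED_TOKENS = {
    "ChatML": ["<|im_start|>", "<|im_end|>"],
    "Llama 3": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"],
}

def profile_is_safe(profile: str, vocab: set) -> bool:
    """Adopt a profile only if all of its required tokens exist in the vocab."""
    return all(t in vocab for t in REQUIRED_TOKENS.get(profile, []))
```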

@LostRuins (Owner)

Well, I think there's a nice way to do it.


What I can suggest instead is that we create a new dummy file called AutoGuess.json as a built-in chat completion adapter. This can be optionally selected by the user when they wish to apply heuristics.


(they can also simply type AutoGuess)

Then, the heuristics will apply.
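
One way that sentinel might be wired up (a sketch only; the actual plumbing in koboldcpp.py may differ), reusing guess_adapter from the sketch above:

```python
import os

def resolve_adapter(adapter_arg: str, chat_template: str) -> str:
    """Hypothetical: the name 'AutoGuess' enables heuristics instead of
    loading a real adapter file; anything else is used as the user chose."""
    name = os.path.splitext(os.path.basename(adapter_arg))[0]
    if name == "AutoGuess":
        return guess_adapter(chat_template) or "Alpaca"
    return adapter_arg
```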

@kallewoof (Author) commented Dec 25, 2024

Sounds good. I assume this can be selected via the OpenAI API chat completion endpoint as well?

I do think we may want to include an option to make it the default as well, but that's not a blocker (I just have to add it to my requests). Edit: it looks like people can do --chatcompletionsadapter AutoGuess, which is fine I suppose.

@kallewoof (Author)

I added two new prints on startup, which come after the text model load:

  1. Notifies the user that no chat completion adapter could be detected and that Alpaca will be used. This is shown when heuristics are enabled but no applicable heuristic was found.
  2. Notifies the user that Alpaca is being used for OAI API chat completions, when no --chatcompletionsadapter was provided.

@kallewoof force-pushed the 202412-ct-heuristics branch from 9de98ca to b45380f on December 25, 2024, 05:02
@kallewoof (Author)

Giving this some thought, I think this would do well as a JSON file mapping a search-string array to a chat template name plus params, with a for loop in the code that checks each entry. We could probably even use the AutoGuess.json file itself for this.

@kallewoof (Author)

Better! Any adapter JSON file can now be a list of dicts with search, name, and adapter keys, which will be searched in order. Not sure how useful that is, but hey. This seems like a pretty seamless integration.
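
Under that scheme an adapter file can be either a single adapter object or a list of heuristic entries. A sketch of a loader handling both, using the key names from the comment above (everything else is assumed):

```python
import json

def load_adapter_file(path: str, chat_template: str):
    """Return an adapter dict; list-shaped files are treated as heuristics."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict):
        return data  # a plain, single adapter, as before
    for entry in data:  # a list of {search, name, adapter} dicts
        searches = entry["search"]
        if isinstance(searches, str):
            searches = [searches]
        if all(s in chat_template for s in searches):
            print(f"Chat template heuristic matched: {entry['name']}")
            return entry["adapter"]
    return None  # no match: fall back to the default (Alpaca)
```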
