
Remove hard-coded User/Assistant prefixes from conversation parsing #112

Open
ElioDonato opened this issue Dec 1, 2024 · 1 comment
@ElioDonato
The current implementation automatically prepends "User:" and "Assistant:" to messages when parsing conversations in parse_conversation(). This creates formatting issues when integrating with LLMs that have their own prompt templates and conversation formats.

In parse_conversation(), messages are formatted as:

if role == 'user':
    conversation.append(f"User: {text_content}")
elif role == 'assistant':
    conversation.append(f"Assistant: {text_content}")

This forces a specific conversation format regardless of the LLM's requirements or user's intended prompt template.

In my opinion, the proxy should pass message content through as-is, without adding role prefixes, allowing:

  • Users to control their own prompt templates
  • Direct compatibility with various LLM conversation formats
  • Clean integration with different chat models
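A prefix-free version could look something like the sketch below. The function name `parse_conversation` and the message structure mirror the issue; the rest is an assumed illustration, not the repository's actual code.

```python
def parse_conversation(messages):
    """Return (system_prompt, conversation_text) without role prefixes."""
    system_prompt = ""
    conversation = []
    for message in messages:
        role = message.get("role")
        text_content = message.get("content", "")
        if role == "system":
            system_prompt = text_content
        else:
            # No "User:" / "Assistant:" prefix -- content passes through
            # unchanged, so downstream prompt templates stay in control.
            conversation.append(text_content)
    return system_prompt, "\n".join(conversation)
```

With this shape, a caller targeting a specific chat model can apply that model's own template to the raw turns instead of undoing hard-coded prefixes.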
@codelion (Owner) commented Dec 1, 2024

This is done only for multi-turn conversations, because several of the implemented approaches run a multi-turn conversation internally. It doesn't change the format of the messages: the response is always an OpenAI-compatible messages object. The only thing that happens is that the initial multi-turn message list is converted to a single-turn message, which is intentional so that the implemented approaches can work.
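The conversion described above can be sketched as follows. This is an assumed illustration of flattening a multi-turn, OpenAI-style messages list into a single user turn (the helper name `to_single_turn` is hypothetical, not from the repository):

```python
def to_single_turn(messages):
    """Collapse an OpenAI-style messages list into one user message,
    preserving any system prompt as a separate leading message."""
    system = [m["content"] for m in messages if m["role"] == "system"]
    turns = []
    for m in messages:
        # Role prefixes mark the turn boundaries inside the flattened prompt.
        if m["role"] == "user":
            turns.append(f"User: {m['content']}")
        elif m["role"] == "assistant":
            turns.append(f"Assistant: {m['content']}")
    collapsed = [{"role": "user", "content": "\n".join(turns)}]
    if system:
        collapsed.insert(0, {"role": "system", "content": system[0]})
    return collapsed
```

Under this reading, the prefixes exist so the approaches that drive their own multi-turn loop receive the full prior dialogue as one prompt, while the proxy's response remains an ordinary OpenAI-compatible messages object.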
