The current implementation automatically prepends "User:" and "Assistant:" to messages when parsing conversations in parse_conversation(). This creates formatting issues when integrating with LLMs that have their own prompt templates and conversation formats.
In parse_conversation(), messages are formatted as shown in the sketch below. This forces a specific conversation format regardless of the LLM's requirements or the user's intended prompt template.
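A minimal sketch of the current behavior. The function name parse_conversation() comes from the issue; the message structure and the flattening logic are assumptions for illustration:

```python
def parse_conversation(messages: list[dict]) -> str:
    """Sketch: flatten an OpenAI-style messages list into one prompt
    string, prepending hard-coded role prefixes (assumed logic)."""
    lines = []
    for msg in messages:
        if msg["role"] == "user":
            lines.append(f"User: {msg['content']}")
        elif msg["role"] == "assistant":
            lines.append(f"Assistant: {msg['content']}")
    # "User:" / "Assistant:" prefixes are baked into the prompt text,
    # regardless of the downstream model's own chat template.
    return "\n".join(lines)
```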
This is done only for multi-turn conversations, because several of the implemented approaches run a multi-turn conversation internally. It doesn't change the format of the messages, since the response is always an OpenAI-compatible messages object. The only thing that happens is that the initial multi-turn conversation is converted into a single-turn message, which is intentional so that the implemented approaches can work.
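As a concrete illustration of that conversion, reusing the parse_conversation() sketch above (hypothetical data; the exact output is an assumption based on the prefixing described in the issue):

```python
# Hypothetical multi-turn request sent to the proxy.
messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "content": "And times 3?"},
]

# The multi-turn conversation collapses into a single-turn query that the
# implemented approaches can run their own multi-turn loops on top of.
print(parse_conversation(messages))
# User: What is 2 + 2?
# Assistant: 4.
# User: And times 3?
```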
In my opinion, the proxy should keep message content as-is without adding role prefixes, allowing each LLM's own prompt template and conversation format to be applied downstream.
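A minimal sketch of that pass-through behavior, assuming the same function name; splitting the system prompt out from the remaining turns is an illustrative design choice, not the project's actual API:

```python
def parse_conversation(messages: list[dict]) -> tuple[str, list[dict]]:
    """Proposed: extract the system prompt, but leave all other messages
    untouched so the downstream model's own chat template decides how
    roles are rendered."""
    system_prompt = "\n".join(
        m["content"] for m in messages if m["role"] == "system"
    )
    conversation = [m for m in messages if m["role"] != "system"]
    return system_prompt, conversation
```

This keeps the proxy format-agnostic: whatever template the backend expects, no role prefixes leak into the message content.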