Replies: 1 comment
-
Hi @tubedude! It sounds like each step in the process serves quite a different purpose. To me, the right approach is to set up a separate chain for each purpose, each with its own System message, desired output type, and everything. And yes, some LLMs will error if more than one System message is provided, so replacing it is the right approach. A previous version of the JS LangChain library had a SequentialChain (link to the Python version) object that was roughly what you're describing: run chain 1, feed its output into chain 2, run that, and so on. The benefit was a shared "memory" tracked between the chains, which might be what you're looking for. Not sure. In some ways I disliked formalizing it into an object and structure like that because it was more complex and yet another API to learn and maintain.
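For reference, here's a rough sketch of the per-purpose chain idea using the Elixir LangChain API. The model, prompts, and `document_text` are placeholders, and the return shape of `LLMChain.run/1` has differed across library versions, so the snippet below accepts both:

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

llm = ChatOpenAI.new!(%{model: "gpt-4"})
document_text = "... the large document ..."

# LLMChain.run/1 has returned {:ok, chain} or {:ok, chain, response}
# depending on the library version, so normalize both shapes here.
run_and_get_text = fn chain ->
  case LLMChain.run(chain) do
    {:ok, %LLMChain{} = updated} -> updated.last_message.content
    {:ok, %LLMChain{} = updated, _response} -> updated.last_message.content
  end
end

# Chain 1: summarization, with its own System message.
summary =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_system!("You summarize documents."))
  |> LLMChain.add_message(Message.new_user!(document_text))
  |> run_and_get_text.()

# Chain 2: a fresh chain with a different System message, fed chain 1's output.
analysis =
  %{llm: llm}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_system!("You extract key risks from a summary."))
  |> LLMChain.add_message(Message.new_user!(summary))
  |> run_and_get_text.()
```

Each chain stays independent this way, so you never end up with two System messages in one conversation.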
-
I’m working on large-document analysis using LangChain with different APIs, including Anthropic’s Claude and OpenAI. I’ve run into an issue with updating the system message mid-conversation.
When using OpenAI with LangChain, I initially added a second system message to the messages list, but only the first system message was respected. Now I remove the old system message and add the new one as needed, roughly as sketched below. I still think changing the system message is the right approach for my situation, but I’d appreciate your thoughts on it.
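Here's roughly what that remove-and-re-add step looks like. The helper is my own illustration, not a library function; it just filters the system message out of the chain's message list and prepends the new one:

```elixir
defmodule ChainHelpers do
  alias LangChain.Chains.LLMChain
  alias LangChain.Message

  # Illustrative helper (not part of the library): drop any existing
  # system message from the chain and prepend the replacement.
  def replace_system_message(%LLMChain{} = chain, new_text) do
    kept = Enum.reject(chain.messages, &(&1.role == :system))
    %LLMChain{chain | messages: [Message.new_system!(new_text) | kept]}
  end
end
```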
Additionally, I face a similar problem with json_response. In a single conversation, I need some responses in JSON and others in plain text (I use JSON for short structured responses and plain text to stream longer replies). It’s important to keep the message list consistent between these response types. Currently, I recreate the entire LLMChain, adjust json_response, and re-add the previous messages.
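Here's a rough sketch of that recreate-and-replay step (the module and function names are my own, just for illustration):

```elixir
defmodule ChainRebuild do
  alias LangChain.Chains.LLMChain
  alias LangChain.ChatModels.ChatOpenAI

  # Rebuild the chain with json_response toggled on the model,
  # replaying the prior conversation so the message list stays intact.
  def with_json_response(%LLMChain{} = previous_chain, json? \\ true) do
    llm = ChatOpenAI.new!(%{model: "gpt-4", json_response: json?})

    %{llm: llm}
    |> LLMChain.new!()
    |> LLMChain.add_messages(previous_chain.messages)
  end
end
```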
Directly editing the LLMChain struct felt inappropriate, so recreating the LLMChain seemed more suitable. Should we add functions for updating these settings on an existing chain, or should we allow options passed to LLMChain.run/2 to override some of them?