I use GPT-4, and sometimes when my message is pretty long I run into the error below. If I then adjust my input so that it is within the 8192-token limit, the prompt goes through, but the model then doesn't produce a very long response and the error resurfaces.
This has happened to me twice now.
Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: This
model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens.
Please reduce the length of the messages.
This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
File "/Users/[username]/Projects/gpt-cli/gptcli/session.py", line 101, in _respond
for response in completion_iter:
File "/Users/[username]/Projects/gpt-cli/gptcli/openai.py", line 20, in complete
openai.ChatCompletion.create(
File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
The token limit covers the prompt and the response together; there is no way to make gpt-4 process more than 8192 tokens (gpt-4-32k can handle 32768). Theoretically, we could trim the beginning of the first message, but that might not be desirable either, because the context at the beginning would be lost.
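A common workaround (sketched here, not something gpt-cli currently implements) is to drop the oldest non-system messages until the conversation plus a reserved reply budget fits under the limit. The names `trim_messages` and `approx_tokens` below are illustrative; a real client would count tokens with `tiktoken` rather than the crude length heuristic used here.

```python
def trim_messages(messages, count_tokens, max_tokens, reserve_for_reply=1024):
    """Drop the oldest non-system messages until the prompt fits.

    Keeps messages[0] (assumed to be the system message) and the most
    recent turns, reserving `reserve_for_reply` tokens for the response,
    since the context limit covers prompt and completion together.
    """
    budget = max_tokens - reserve_for_reply
    trimmed = list(messages)

    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    # Delete the oldest message after the system prompt until we fit,
    # but always keep at least the system message and the latest turn.
    while len(trimmed) > 2 and total(trimmed) > budget:
        del trimmed[1]
    return trimmed


# Crude stand-in tokenizer for the example only (~4 chars per token);
# a real implementation would use tiktoken.encoding_for_model("gpt-4").
def approx_tokens(text):
    return max(1, len(text) // 4)
```

The trade-off the maintainer mentions applies here too: anything trimmed from the start of the history is simply gone from the model's view.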
OK, so does this also apply to ChatGPT? I ran into the token issue on chatbotui.com, for example, but I feel like I never hit this limitation on chat.openai.com. Or are they just really good at hiding it from the user?