Confused about vectors/token length. #862
bbecausereasonss started this conversation in General
Replies: 1 comment
-
I have a very long VBS script I'm trying to debug. I thought creating vectors would help me stay within the token limit for, say, OpenAI, but I'm constantly getting API errors like:

API Error: This model's maximum context length is 128000 tokens. However, your messages resulted in 156209 tokens (155997 in the messages, 212 in the functions). Please reduce the length of the messages or functions.

So I'm a bit confused as to why the embedding/vector space is sending so many tokens! It's 14,899 lines of code.
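For reference, a quick way to check the raw token count of the file before anything else is added to the prompt. This is a minimal sketch using OpenAI's tiktoken library; the file name and the cl100k_base encoding are assumptions, so substitute the encoding that matches your model:

```python
# Rough token count for the script, to confirm whether the whole
# file alone already exceeds the 128k context window.
# pip install tiktoken -- file name and encoding are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding for GPT-4-class models

with open("script.vbs", "r", encoding="utf-8", errors="replace") as f:
    text = f.read()

tokens = enc.encode(text)
print(f"{len(text.splitlines())} lines -> {len(tokens)} tokens")
# At roughly 10 tokens per line, ~14,899 lines lands near the
# 156k figure in the error message.
```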
-
Are you certain it's not just sending the whole file? 14k lines could easily go north of 150k tokens, so it seems the whole file is being used rather than blocks. Currently the block parser is specific to Markdown, so even if you managed to get blocks to be used, they would probably be malformed. A parser for handling code blocks is something I expect to be added in the future. In the meantime, you might just be able to drop the first half of the file into ChatGPT o1 and ask whether the bug exists; if not, ask it to inspect the second half (see the sketch below).
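A minimal sketch of that halving approach, splitting the script into two files that each fit comfortably inside the context window; the file names are illustrative assumptions:

```python
# Split the script in two so each half can be pasted into the
# model separately. File names are illustrative.
from pathlib import Path

lines = Path("script.vbs").read_text(
    encoding="utf-8", errors="replace"
).splitlines(keepends=True)

mid = len(lines) // 2  # ~7,450 lines per half for a 14,899-line file

Path("script_half1.vbs").write_text("".join(lines[:mid]), encoding="utf-8")
Path("script_half2.vbs").write_text("".join(lines[mid:]), encoding="utf-8")
print(f"wrote {mid} and {len(lines) - mid} lines")
```

If the model reports no bug in either half, the same split can be repeated on each half as a rough binary search for the offending region.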