Error generating autocompletion with Qwen2.5-Coder-7B and vllm #2388
Comments
Ran into this same issue.
Found a workaround to get some completions: setting the tabAutocompleteOptions template, using the openai model provider, and having the completion options stop list include the two entries (roughly as sketched below).
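A config along these lines matches that description. This is a minimal sketch only, assuming Continue's config.json schema and Qwen2.5-Coder's FIM prompt format; the model name, apiBase, and the specific stop strings here are illustrative, not the exact values the comment refers to:

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen2.5-Coder-7B",
    "provider": "openai",
    "model": "Qwen/Qwen2.5-Coder-7B",
    "apiBase": "http://localhost:8000/v1",
    "completionOptions": {
      "stop": ["<|endoftext|>", "<|fim_pad|>"]
    }
  },
  "tabAutocompleteOptions": {
    "template": "<|fim_prefix|>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>"
  }
}
```

Whether those stop strings are the "two entries" the commenter meant is not confirmed; adjust them to whatever end-of-FIM tokens your model actually emits.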
Switched from TGI to vLLM containers and ran into the same issue.
same issue
same issue
Hi all, thanks for the detailed write-ups and +1s. We've had some other problems with autocomplete not working for folks and are planning to focus on bugfixes shortly. Added this one to our list of issues.
Same problem, any update on this?
Before submitting your bug report
Relevant environment info
Description
The tabAutoComplete feature is not displaying any suggestions in the VS Code editor.
To reproduce
"GET /v1/models HTTP/1.1" 200 OK
whenever theconfig.json
is modified.Expected Behavior
Auto-completion suggestions should appear in the VS Code editor.
Actual Behavior
The vllm server received "POST /v1/completions HTTP/1.1" 200 OK, but nothing shows in the VS Code editor. The VS Code console displayed:
Error generating autocompletion: TypeError: Cannot read properties of undefined (reading 'includes')
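For context, that TypeError is thrown whenever JavaScript calls .includes() on a value that is undefined. The snippet below is not Continue's actual source, just a hedged TypeScript illustration of the failure pattern, with a hypothetical CompletionOptions shape and a guard that would avoid the crash:

```typescript
// Hypothetical illustration only (not Continue's real code): the reported
// "Cannot read properties of undefined (reading 'includes')" occurs when
// .includes() is invoked on a field that was never populated.
interface CompletionOptions {
  stop?: string[]; // may be absent if config.json or the provider omits it
}

function shouldStopAt(options: CompletionOptions, token: string): boolean {
  // options.stop.includes(token) would throw the TypeError when stop is undefined.
  // Defaulting to an empty array keeps the check safe:
  return (options.stop ?? []).includes(token);
}

// Usage: with no "stop" configured, this returns false instead of crashing.
console.log(shouldStopAt({}, "<|endoftext|>"));
```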
Additional Observations
After this error occurs, the Continue extension no longer sends POST /v1/completions requests to the vllm server.
Log output