Debounce, LlamaCpp support, expose prompt as setup option, fix passing parameters to model (ollama) #11
base: main
Conversation
Do not generate a completion on every key press; wait some minimum time after the last one.
Overall, I think this should be split into 2 PRs.
The first one, which looks pretty good, should be the ollama part.
The second one should concentrate on the debounce implementation.
end)
end

function Source:trigger(ctx, callback)
I am not comfortable with this entire debounce concept.
First, I don't think it is needed here; cmp already has a debounce implementation.
Second, I think this implementation is wrong; there should not be a global autocommand handling the debounce.
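For reference, cmp's built-in debounce can be tuned through its performance settings at setup time (the values below are only illustrative):

```lua
-- nvim-cmp's own debounce/throttle knobs, in milliseconds (example values).
require('cmp').setup({
  performance = {
    debounce = 150, -- wait this long after a keystroke before querying sources
    throttle = 60,  -- minimum gap between filtering/display passes
  },
})
```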
The built-in debounce in cmp has an issue where it does not work as it should:
- To my understanding, it should show the completion popup x milliseconds after the last key press in insert mode.
- But what it actually does is show the completion x ms after typing the first letter in insert mode.
I tested this with a 2 second delay, and cmp will not wait for the last key press.
In my implementation it waits until the last key is pressed, so it won't spin my GPU fans as much. I'm not sure if the implementation is OK; I just copied someone else's code. I know that the copilot cmp extension also uses its own debounce code.
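Roughly, the idea is the timer-based debounce sketched below (simplified; `do_complete` is a hypothetical helper standing in for the actual request to the backend):

```lua
-- Sketch of the timer-based debounce: every trigger restarts the timer, and the
-- completion request only fires once no key has been pressed for debounce_delay ms.
function Source:trigger(ctx, callback)
  self.timer = self.timer or vim.loop.new_timer()
  self.timer:stop()
  self.timer:start(
    self.params.debounce_delay or 300, -- ms of silence before asking the model
    0,                                 -- no repeat
    vim.schedule_wrap(function()
      self:do_complete(ctx, callback)  -- hypothetical helper that queries the model
    end)
  )
end
```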
ok, looking at the cmp sources I can only agree.
+1 for fixing the ollama params; without this, you can't change it to a remote host.
@@ -6,21 +6,21 @@ function Ollama:new(o, params)
   o = o or {}
   setmetatable(o, self)
   self.__index = self
-  self.params = vim.tbl_deep_extend('keep', params or {}, {
+  self.params = vim.tbl_deep_extend('keep', o or {}, {
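To illustrate why this matters (a sketch; option names like base_url and model are examples from my config and may not match the plugin's exact API):

```lua
-- User-supplied options reach the constructor as `o`, so they must be merged into
-- self.params; extending from `params` meant these values never took effect.
local ollama = Ollama:new({
  base_url = 'http://192.168.1.10:11434/api/generate', -- point at a remote host
  model = 'codellama:7b-code',
  options = { temperature = 0.2 },
})
```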
Does this bug apply to the openai or bard backends as well? They similarly use params instead of o.
I do not know, since I have not used Bard or openai. I can only assume that if they work OK, then this change is not needed there.
@JoseConseco
I will have to google how to split this into multiple PRs, with one file per PR.
The point here is that I do not believe debounce should be implemented inside this plugin, as cmp already implements debounce.
@tzachar let me know if all is ok now.
See the pending issues in the review.
+1 for fast merge. Maybe the debounce stuff should be done as a PR to cmp.
Waiting for the last issue to be resolved.
Hi,
This PR fixes #8.
It gives you the ability to customize the prompt, and adds a debounce delay:
debounce_delay - the request will be sent x ms after the last key press.
It also adds support for the llama.cpp server.
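Example setup (a sketch; apart from debounce_delay, the option names and the setup entry point are my assumptions about how this plugin is configured):

```lua
-- Hypothetical configuration showing the options touched by this PR.
local cmp_ai = require('cmp_ai.config')
cmp_ai:setup({
  provider = 'Ollama',    -- or 'LlamaCpp' with this PR
  debounce_delay = 500,   -- send the request 500 ms after the last key press
  prompt = function(lines_before, lines_after)
    -- prompt exposed as a setup option; return whatever the model should see
    return lines_before
  end,
})
```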