
Planger/minor terminal ai fixes #13978

Conversation


@planger planger commented Jul 31, 2024

What it does

Just two minor improvements to the AI Terminal Chat feature and code.

How to test

Nothing really to test.

Follow-ups

N/A

Review checklist

Reminder for reviewers

eneufeld and others added 30 commits May 13, 2024 13:38
- add ui
- add openai integration
- introduce ChatResponseParts
- LanguageModelProvider can be used in both backend and frontend
  - frontend access is generically implemented independent of the
    actual LanguageModelProvider implementation
- split code into four packages:
  - ai-agent: containing the AgentDispatcher. At the moment just
    delegates to the LanguageModelProvider. Can run in both frontend
    and backend
  - ai-chat: only containing the UI part of the chat.
  - ai-model-provider: containing the infrastructure of the
    LanguageModelProvider and its frontend bridge
  - ai-openai: only contains the OpenAI LanguageModelProvider
Implements the LanguageModelProviderRegistry which is able to handle
an arbitrary number of LanguageModelProviders.

Refactors the LanguageModelProvider to return only a simple text or
stream of text. It is now the agent's responsibility to convert this
into response parts; the interfaces are therefore also moved to the
agent package.

The LanguageModelProviderRegistry implementation for the frontend
handles all LanguageModelProviders registered in the frontend as well
as in the backend.

Fixes the StreamNode in the tree-widget to update itself correctly
when new tokens arrive.
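A minimal sketch of how such a registry could look; the interface and method names here are illustrative assumptions, not the actual Theia API:

```typescript
// Sketch of a registry handling an arbitrary number of providers.
// Names are assumptions for illustration only.
interface LanguageModelProvider {
    readonly id: string;
    request(prompt: string): Promise<string>;
}

class LanguageModelProviderRegistry {
    private readonly providers = new Map<string, LanguageModelProvider>();

    register(provider: LanguageModelProvider): void {
        this.providers.set(provider.id, provider);
    }

    get(id: string): LanguageModelProvider | undefined {
        return this.providers.get(id);
    }

    all(): LanguageModelProvider[] {
        return Array.from(this.providers.values());
    }
}
```

A frontend implementation of this interface would simply merge providers registered locally with those proxied from the backend.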
Introduces ChatModel, including the nested ChatRequestModel and
ChatResponseModel, to represent chat sessions. The chat models allow
inspecting and tracking requests and their responses.

Also introduces the ChatService, which can be used to manage chat
sessions and send requests.

The architecture is inspired by the VS Code implementation but aims to
be more generic.
Change-Id: I179432698332ff52b33aba7b1f7e203f2bee9c77
Change-Id: I80d33303ceadf940f17265b7d910a5c13b59ec89
eclipsesource/osweek-2024#47

Change-Id: Ib9dd82e3ba062990f5642883bc9439aca52931ad
Change-Id: I186190dede14d729992977c2805e2c07100c2d17
Change-Id: Ia257c9a65b5f2bb9aa3e9ccc506f6394d744ff8f
Change-Id: I419790bda5497433bb2ba3f562b1121391cd7da0
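The nesting described above can be sketched as follows; field and method names are assumptions chosen to mirror the description, not the actual model classes:

```typescript
// Sketch: a ChatModel tracks requests, each holding its response.
// All names are illustrative assumptions.
interface ChatResponseModel {
    complete: boolean;
    text: string;
}

interface ChatRequestModel {
    readonly text: string;
    readonly response: ChatResponseModel;
}

class ChatModel {
    private readonly requests: ChatRequestModel[] = [];

    addRequest(text: string): ChatRequestModel {
        const request: ChatRequestModel = {
            text,
            response: { complete: false, text: '' }
        };
        this.requests.push(request);
        return request;
    }

    getRequests(): readonly ChatRequestModel[] {
        return this.requests;
    }
}
```

A ChatService on top of this would own the ChatModel instances and translate user input into addRequest calls.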
Logs LanguageModel requests and their results to a separate output
channel per LanguageModel.
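A minimal sketch of the per-model logging idea; OutputChannel here is a stand-in interface, not Theia's actual output channel API:

```typescript
// Sketch: lazily create one output channel per LanguageModel and log
// requests/results to it. Names are illustrative assumptions.
interface OutputChannel {
    appendLine(line: string): void;
}

class LanguageModelLogger {
    private readonly channels = new Map<string, OutputChannel>();

    constructor(private readonly createChannel: (name: string) => OutputChannel) {}

    log(modelId: string, request: string, result: string): void {
        let channel = this.channels.get(modelId);
        if (!channel) {
            // One dedicated channel per LanguageModel, created on first use.
            channel = this.createChannel(`AI: ${modelId}`);
            this.channels.set(modelId, channel);
        }
        channel.appendLine(`Request: ${request}`);
        channel.appendLine(`Result: ${result}`);
    }
}
```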
- rename the open button to 'Select Folder'
- set the default folder name to 'prompt-templates'
- check if a template with a given id was overridden
- adapt calls to return the overridden template if so
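The override lookup above can be sketched like this; the store class and its method names are hypothetical, only the "override wins over default" behavior follows the description:

```typescript
// Sketch: a template with a given id may be overridden (e.g. from the
// 'prompt-templates' folder); lookups prefer the override.
class PromptTemplateStore {
    private readonly defaults = new Map<string, string>();
    private readonly overrides = new Map<string, string>();

    registerDefault(id: string, template: string): void {
        this.defaults.set(id, template);
    }

    registerOverride(id: string, template: string): void {
        this.overrides.set(id, template);
    }

    // Returns the overridden template if one exists, otherwise the default.
    getTemplate(id: string): string | undefined {
        return this.overrides.get(id) ?? this.defaults.get(id);
    }
}
```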
AlexandraBuzila and others added 21 commits July 29, 2024 17:35
- add temporary test command
- implement an initial customization service that reads the templates
on preference changes
- no file watching yet
Review and adapt prompt templates
Fixes an issue with circular injections when using the PromptService
in an agent.
Fixes a circular dependency by removing the prompt collection. Instead,
the PromptService is filled programmatically on start.
Co-authored-by: Alexandra Buzila <abuzila@eclipsesource.com>
Co-authored-by: Olaf Lessenich <olessenich@eclipsesource.com>
Adds a new ai-code-completion Theia extension which provides the
CodeCompletionAgent. The agent is integrated into Monaco via a
CompletionItemProvider for all files.

The extension offers two preferences to enable/disable the feature and
to control its behavior.

Co-authored-by: Stefan Dirix <sdirix@eclipsesource.com>
Change-Id: Ie7f4cfc1923db5afbaef6455089ad5cb21107db7
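Conceptually, the agent turns a model reply into completion items, gated by the enable/disable preference. The sketch below only mirrors that idea; the class, its constructor parameters, and the simplified CompletionItem type are assumptions, not the extension's actual code:

```typescript
// Sketch: a completion agent that asks a language model for a suggestion
// and wraps it as a completion item, respecting an enablement preference.
interface CompletionItem {
    insertText: string;
}

class CodeCompletionAgent {
    constructor(
        private readonly requestModel: (prompt: string) => Promise<string>,
        private readonly enabled: () => boolean // backed by a preference
    ) {}

    async provideCompletions(prefix: string): Promise<CompletionItem[]> {
        if (!this.enabled()) {
            return [];
        }
        const suggestion = await this.requestModel(prefix);
        return suggestion ? [{ insertText: suggestion }] : [];
    }
}
```

In the real extension, such an agent would be registered with Monaco's completion provider mechanism so it is queried for all files.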
The language model selection value should be initialized with the value
from settings if it exists
Implements:
- Copy
- Insert at cursor
- Monaco Editor
- Navigating to the location of the file (if provided)
Co-authored-by: Lucas Koehler <lkoehler@eclipsesource.com>
- Ensure we always create a variable part even for undefined variables
-- Prompt text will then default to user text (including '#')

- Allow adopters to register resolvers with priority
-- Given a particular variable name, argument and context

- Automatically resolve all variable parts in a chat request
-- Ensure parts always provide a matching prompt text

- Make sure variable service is part of core
-- Generic variable handling for all agents and UI layers
-- Chat-specific variable handling only in the chat layer
-- Provide example of 'today' variable

Fixes eclipsesource/osweek-2024#46
Co-authored-by: Christian W. Damus <cdamus.ext@eclipsesource.com>
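The priority-based resolution and the fallback to the user's literal text (including '#') can be sketched as follows; the service and resolver shapes here are assumptions that only mirror the bullets above:

```typescript
// Sketch: adopters register resolvers with a priority; higher-priority
// resolvers are consulted first. Unresolved variables fall back to the
// user's literal text, including '#'.
interface VariableResolver {
    priority: number;
    resolve(name: string, arg?: string): string | undefined;
}

class VariableService {
    private readonly resolvers: VariableResolver[] = [];

    registerResolver(resolver: VariableResolver): void {
        this.resolvers.push(resolver);
        this.resolvers.sort((a, b) => b.priority - a.priority);
    }

    resolve(name: string, arg?: string): string {
        for (const resolver of this.resolvers) {
            const result = resolver.resolve(name, arg);
            if (result !== undefined) {
                return result;
            }
        }
        // No resolver matched: keep the user's text as prompt text.
        return `#${name}${arg ? ':' + arg : ''}`;
    }
}
```

The 'today' variable mentioned above would be one such resolver, returning the current date for the name 'today' and undefined otherwise.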
Added a view for displaying all configured llamafiles.
Configured llamafiles can be started and killed.
One llamafile can be set as active and is then used in the chat.
The chat integration is currently hardcoded to use the active llamafile language model.
This should change as soon as the chat integration has a dropdown to select the language model (#42).
A follow up will be created to describe the next steps.
Change-Id: I0b9d48e5aa31862a725c73e4f4e2576163e1e47b
@planger planger closed this Jul 31, 2024
@planger planger deleted the planger/minor-terminal-ai-fixes branch July 31, 2024 17:14