Commit

chore: Minor code cleanup. Updated Docker environment file.

anirbanbasu committed Aug 18, 2024
1 parent 1c3dc31 commit 2b79cfb
Showing 4 changed files with 52 additions and 8 deletions.
41 changes: 39 additions & 2 deletions .env.docker
@@ -1,5 +1,42 @@
+# Copyright 2024 Anirban Basu
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+
+# http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 # Application-specific Gradio configuration
-GRADIO__SERVER_HOST="0.0.0.0"
+GRADIO__SERVER_HOST = "0.0.0.0"
 
 # Ollama URL assuming that it is on the Docker host
-LLM__OLLAMA_URL="http://host.docker.internal:11434"
+LLM__PROVIDER = "Ollama"
+
+# Anthropic
+LLM__ANTHROPIC_API_KEY = "your-anthropic-api-key"
+LLM__ANTHROPIC_MODEL = "claude-3-opus-20240229"
+
+# OpenAI
+LLM__OPENAI_API_KEY = "your-openai-api-key"
+LLM__OPENAI_MODEL = "gpt-4o-mini"
+
+# Cohere
+LLM__COHERE_API_KEY = "your-cohere-api-key"
+LLM__COHERE_MODEL = "command-r-plus"
+
+# Groq
+LLM__GROQ_API_KEY = "the-groq-api-key"
+LLM__GROQ_MODEL = "llama-3.1-70b-versatile"
+
+# Ollama
+LLM__OLLAMA_URL = "http://host.docker.internal:11434"
+# The model must be available in the Ollama installation.
+LLM__OLLAMA_MODEL = "mistral-nemo"
+
+LLM__TEMPERATURE = "0.0"
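The double-underscore names group related settings (GRADIO__*, LLM__*), and the webapp.py hunk further down reads them through a parse_env helper whose definition is not part of this commit. A minimal sketch of what such a helper might look like, assuming it simply wraps os.getenv with an optional default:

import os

def parse_env(var_name: str, default_value: str | None = None) -> str | None:
    """Hypothetical sketch: return the environment variable's value,
    falling back to default_value when it is unset."""
    return os.getenv(var_name, default_value)

# Example usage mirroring the webapp.py call site:
ollama_url = parse_env("LLM__OLLAMA_URL", default_value="http://localhost:11434")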
15 changes: 9 additions & 6 deletions src/coder_agent.py
@@ -174,7 +174,10 @@ def solve(self, state: AgentState) -> dict:
             ]
         else:
             inputs[constants.AGENT_STATE__KEY_EXAMPLES] = constants.EMPTY_STRING
-        response = self.pydantic_to_ai_message(self._solver_agent.invoke(inputs))
+
+        llm_response = self._solver_agent.invoke(inputs)
+        ic(llm_response)
+        response = self.pydantic_to_ai_message(llm_response)
         ic(response)
         return (
             {
@@ -238,6 +241,11 @@ def evaluate(self, state: AgentState) -> dict:
                 )
             ]
         }
+        num_test_cases = len(test_cases) if test_cases is not None else 0
+        if num_test_cases == 0:
+            return {
+                constants.AGENT_STATE__KEY_STATUS: constants.AGENT_NODE__EVALUATE_STATUS_NO_TEST_CASES
+            }
         try:
             # Extract the code from the tool call.
             code: str = json_dict[constants.PYDANTIC_MODEL__CODE_OUTPUT__CODE]
@@ -255,11 +263,6 @@ def evaluate(self, state: AgentState) -> dict:
                 )
             ]
         }
-        num_test_cases = len(test_cases) if test_cases is not None else 0
-        if num_test_cases == 0:
-            return {
-                constants.AGENT_STATE__KEY_STATUS: constants.AGENT_NODE__EVALUATE_STATUS_NO_TEST_CASES
-            }
         succeeded = 0
         test_results = []
         # TODO: Enable parallel execution of test cases.
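The solve() hunk unnests the solver invocation so the raw LLM response can be logged with ic() before it is converted to an AI message. ic() comes from the icecream debugging package: it prints an expression alongside its value and returns the value unchanged. A standalone illustration (the dict is a stand-in for the real solver output):

from icecream import ic  # pip install icecream

llm_response = {"reasoning": "...", "pseudocode": "...", "code": "print(42)"}
ic(llm_response)  # prints something like: ic| llm_response: {'reasoning': ...}

# Because ic() returns its argument, it can also wrap an expression inline.
response = ic(str(llm_response))

The evaluate() hunks, meanwhile, simply hoist the no-test-cases guard above the try block, so the early return fires before any code extraction is attempted.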
2 changes: 2 additions & 0 deletions src/constants.py
@@ -130,6 +130,7 @@
 First, output a `reasoning` through the problem and conceptualise a solution. Whenever possible, add a time and a space complexity analysis for your solution.
 Then, output a `pseudocode` in Pascal to implement your concept solution.
 Finally, a well-documented working Python 3 `code` for your solution. Do not use external libraries. Your code must be able to accept inputs from `sys.stdin` and write the final output to `sys.stdout` (or, to `sys.stderr` in case of errors).
+Please format your response as JSON, using `reasoning`, `pseudocode`, and `code` as attributes.
 Optional examples of similar problems and solutions (may not be in Python):
 {examples}
@@ -145,6 +146,7 @@
 First, output a `reasoning` through the problem and conceptualise a solution. Whenever possible, add a time and a space complexity analysis for your solution.
 Then, output a `pseudocode` in Pascal to implement your concept solution.
 Finally, a well-documented working Python 3 `code` for your solution. Do not use external libraries. Your code must be able to accept inputs from `sys.stdin` and write the final output to `sys.stdout` (or, to `sys.stderr` in case of errors).
+Please format your response as JSON, using `reasoning`, `pseudocode`, and `code` as attributes.
 Optional examples of similar problems and solutions (may not be in Python):
 {examples}
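The added prompt line pins the response format to JSON with reasoning, pseudocode, and code attributes, which lines up with the code-output Pydantic model referenced in coder_agent.py (PYDANTIC_MODEL__CODE_OUTPUT__CODE). The model itself is not shown in this commit; a plausible sketch might be:

from pydantic import BaseModel, Field

class CodeOutput(BaseModel):
    """Hypothetical model matching the prompt's JSON attributes."""
    reasoning: str = Field(description="Conceptual solution, with time and space complexity analysis.")
    pseudocode: str = Field(description="Pascal pseudocode implementing the concept.")
    code: str = Field(description="Python 3 solution reading sys.stdin and writing sys.stdout.")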
2 changes: 2 additions & 0 deletions src/webapp.py
@@ -70,6 +70,8 @@ def __init__(self):
            default_value=constants.ENV_VAR_VALUE__LLM_PROVIDER,
        )
        if self._llm_provider == "Ollama":
+            # ChatOllama does not support structured outputs: https://python.langchain.com/v0.2/docs/integrations/chat/ollama/
+            # Yet, the API docs seem to suggest that it does: https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ollama.ChatOllama.html#langchain_community.chat_models.ollama.ChatOllama.with_structured_output
            self._llm = ChatOllama(
                base_url=parse_env(
                    constants.ENV_VAR_NAME__LLM_OLLAMA_URL,
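Given the structured-output gap the new comments describe, one common workaround (not necessarily what this repository does) is to request Ollama's JSON mode and validate the reply manually, for example against the CodeOutput sketch above:

from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    base_url="http://host.docker.internal:11434",  # matches LLM__OLLAMA_URL above
    model="mistral-nemo",                          # matches LLM__OLLAMA_MODEL above
    format="json",       # Ollama's JSON mode constrains replies to valid JSON
    temperature=0.0,
)
reply = llm.invoke("Solve the problem; respond as JSON with reasoning, pseudocode and code.")
parsed = CodeOutput.model_validate_json(reply.content)  # Pydantic v2 validation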
