Welcome to this multi-LLM tool integration codebase! This repository provides a powerful system for integrating Python functions with various LLM providers like OpenAI, Anthropic, Gemini, and Groq.
- Dynamic Schema Generation: Automatically converts Python functions into JSON schemas compatible with major LLM providers
- Multi-LLM Support: Seamlessly connects with OpenAI, Anthropic, Gemini, and Groq through a unified interface
- Transparent Processing: Handles all message flows, tool calls, and responses dynamically; you simply pass in the functions you want to use
The system allows you to easily create tools from Python functions and use them with the supported LLM providers. It includes features like parallel tool execution, conversation logging, and a simple interface for registering new functions.
Example showing parallel and chained tool calls working together (see images/parallel_tool_calls.png).
This documentation will guide you through:
- Understanding the system architecture
- Setting up your environment
- Implementing your own tools
- Integrating with different LLM providers
- Overview of Key Components
- Directory Structure
- Installation and Setup
- Usage Guide
- Adding Your Own Tools / Functions
- Details of Each Module
- Example Snippets
- Contributing
- License
This project's main functionality resides in bridging Python-based "tools" (i.e., functions) with various LLM APIs. To accomplish this, we have:
- ToolConverter: A utility class that dynamically converts Python functions into JSON schemas for OpenAI-style function calls, then adapts those schemas to Anthropic, Gemini, and Groq formats.
- LLMHandler: A high-level orchestrator that sends messages to an LLM, handles tool calls, and returns the final text response to the user. It integrates with a chosen LLM client (OpenAI, Anthropic, Gemini, Groq) through a standard interface (the base API).
- LLM API Wrappers: Each LLM vendor (OpenAI, Anthropic, Gemini, Groq) is wrapped in a dedicated module, conforming to a unified interface (defined in base_api.py).
- MessageHandler: Manages message formatting, storing system/user/tool messages in a standardized structure.
With these components, you can easily add your own Python functions (tools) that an LLM may call to perform tasks such as math calculations, file I/O, or anything else your application needs.
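In practice these pieces compose in just a few lines. The sketch below uses the class and method names shown later in this README; the OpenAIAPI constructor argument and the add_numbers import are assumptions based on the examples that follow:

from functions.math_tools import add_numbers  # assumed export, per the examples below
from tool_converter import ToolConverter
from llm_api.openai_api import OpenAIAPI
from llm_tools.llm_handler import LLMHandler

converter = ToolConverter()
schemas = converter.generate_schemas([add_numbers])       # one schema set per provider

llm_handler = LLMHandler(OpenAIAPI(model_name="gpt-4o"))  # model_name is an assumption
llm_handler.register_functions([add_numbers])             # expose the Python code to the handler
llm_handler.set_tools(schemas["openai"])                  # hand the LLM the matching schema

print(llm_handler.send_user_message("What is 2 + 3?"))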
Here's a simplified structure of the repository:
.
├── images/
│   ├── parallel_tool_calls.png     # Example of conversation
│   └── simple_conversation.png     # Example of conversation
├── functions/
│   └── math_tools.py               # Example math functions (tools)
├── llm_api/
│   ├── base_api.py                 # Abstract base interface for LLM API classes
│   ├── openai_api.py               # Wrapper for OpenAI
│   ├── anthropic_api.py            # Wrapper for Anthropic
│   ├── gemini_api.py               # Wrapper for Gemini
│   └── groq_api.py                 # Wrapper for Groq
├── llm_tools/
│   ├── llm_handler.py              # Main orchestrator class for LLM usage
│   ├── message_handler.py          # Handles message creation and formatting
│   └── conversation_printers.py    # Handles printing the conversation
├── tool_converter.py               # ToolConverter: generates JSON schemas
├── main_test.py                    # Test example for all LLM providers, exercising parallel and chained tools
└── main.py                         # Example driver script that ties everything together
- Clone this repository:
  git clone https://github.com/kristofferv98/Agent_Nexus.git
  cd Agent_Nexus
- Create and activate a virtual environment (optional, but recommended):
  python3 -m venv venv
  source venv/bin/activate
- Install dependencies (example using pip):
  pip install -r requirements.txt
- Set the environment variables for the LLM provider credentials you plan to use (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, GROQ_API_KEY):
  export OPENAI_API_KEY="your-api-key"
  export ANTHROPIC_API_KEY="your-api-key"
  export GEMINI_API_KEY="your-api-key"
  export GROQ_API_KEY="your-api-key"
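To confirm the variables are visible to Python before running anything, here is a quick sanity check (a generic snippet, nothing repo-specific):

import os

for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY", "GROQ_API_KEY"):
    print(f"{key}: {'set' if os.environ.get(key) else 'missing'}")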
The ToolConverter takes in a list of Python functions and generates a schema that the LLM can use to understand arguments, parameter validation, and usage. (It uses OpenAI's gpt-4o to generate schemas for all functions in parallel.)
You'll find an example in:
class ToolConverter:
...
def generate_schemas(self, functions: List[Callable]) -> dict:
...
When you provide a list of functions (e.g., [subtract_numbers, add_numbers, multiply_numbers]), it returns a dictionary containing OpenAI, Anthropic, Gemini, and Groq schemas for those functions.
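For example, you might inspect the result like this (a sketch; it assumes functions/math_tools.py exports the three functions named above):

from functions.math_tools import add_numbers, multiply_numbers, subtract_numbers
from tool_converter import ToolConverter

converter = ToolConverter()
schemas = converter.generate_schemas([subtract_numbers, add_numbers, multiply_numbers])

# One schema set per provider; the "openai" and "gemini" keys appear in the
# examples below, the other key names are assumptions.
print(list(schemas.keys()))   # e.g. ['openai', 'anthropic', 'gemini', 'groq']
print(schemas["openai"])      # the OpenAI-formatted function schemas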
The LLMHandler manages the conversation loop. Whenever the LLM attempts to call a tool, LLMHandler intercepts the request and calls the actual Python code. This behavior is unified across all LLMs.
For example, in:
class LLMHandler:
...
def register_functions(self, functions: List[Callable]):
for func in functions:
self.register_function(func)
...
By calling register_functions([your_func_1, your_func_2]), you make those tools available to any LLM you choose to integrate.
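Conceptually, the interception step is just a name-to-function lookup plus a call with the parsed arguments. This is not the repo's actual implementation, only a sketch of the idea:

from typing import Callable, Dict

registered_tools: Dict[str, Callable] = {}

def register_function(func: Callable) -> None:
    # Index each tool by its Python name so the LLM can refer to it.
    registered_tools[func.__name__] = func

def execute_tool_call(name: str, arguments: dict) -> str:
    # Run the tool the LLM requested and return its result as text.
    if name not in registered_tools:
        return f"Unknown tool: {name}"
    return str(registered_tools[name](**arguments))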
An example usage is in "main.py". It:
- Initializes ToolConverter.
- Gathers your tools (e.g., travel time functions).
- Generates the schemas for all LLM flavors.
- Instantiates an LLM client (e.g., OpenAI) and an LLMHandler.
- Registers the tools and sets the system prompt.
- Sends user messages to the LLM, which can call the newly registered tools as needed.
- Returns a final answer after tool execution.
You can run:
python main.py
To test every provider with parallel and chained tool calls, run:
python main_test.py
Adjust the flags in the script (run_openai, run_anthropic, run_groq, run_gemini) to choose which LLM(s) to test.
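These are presumably simple boolean toggles near the top of main_test.py (a sketch; the actual defaults in the script may differ):

# Flag names from this README; values shown here are illustrative assumptions.
run_openai = True
run_anthropic = True
run_groq = False
run_gemini = False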
- Import or define new Python functions.
- Give them docstrings and type annotations (if possible) for clarity and better schema generation.
- Pass them to the ToolConverter.generate_schemas() method.
- Register them with your LLMHandler instance.
For example, if you have:
def greet_user(name: str) -> str:
    """Greets a user by name."""
    return f"Hello, {name}!"
Add it to the code flow in "main.py" (or another script) similarly to how the math_tools functions are used. If the underlying logic is complex, create simple wrapper functions around it to keep schema generation clean.
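Concretely, wiring greet_user in might look like this (a sketch that mirrors the full example in the next section; the model_name argument is an assumption):

from tool_converter import ToolConverter
from llm_tools.llm_handler import LLMHandler
from llm_api.openai_api import OpenAIAPI

def greet_user(name: str) -> str:
    """Greets a user by name."""
    return f"Hello, {name}!"

converter = ToolConverter()
schemas = converter.generate_schemas([greet_user])

llm_handler = LLMHandler(OpenAIAPI(model_name="gpt-4o"))  # model_name is an assumption
llm_handler.register_functions([greet_user])
llm_handler.set_tools(schemas["openai"])
print(llm_handler.send_user_message("Please greet Ada."))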
- tool_converter.py
  - Responsible for converting Python functions into JSON schemas.
  - The key method is generate_schemas(), which returns a dictionary containing schemas for multiple LLM providers.
- llm_api/base_api.py
  - Abstract base class defining the uniform generate(messages, tools) method.
  - All LLM API wrappers must subclass BaseLLMAPI (see the sketch after this list).
- llm_api/openai_api.py, anthropic_api.py, gemini_api.py, groq_api.py
  - Concrete wrappers implementing each provider's unique request/response pattern.
- llm_tools/message_handler.py
  - Normalizes messages into a standard format.
  - Manages user text blocks, system prompts, tool calls, and image data.
- llm_tools/llm_handler.py
  - The core orchestrator that calls the LLM, detects "tool_use" instructions, and executes the corresponding Python functions.
  - Consolidates final text responses.
- functions/math_tools.py
  - Sample set of math functions (tools) that demonstrate how to integrate your logic into the system.
- main.py
  - A reference script showing how everything ties together.
  - Instantiates the ToolConverter, creates schemas, picks an LLM, registers tools, sets the prompt, and interacts with the user.
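As noted under llm_api/base_api.py above, adding a new provider means subclassing BaseLLMAPI and honoring the generate(messages, tools) contract. A minimal sketch (the constructor and everything inside generate are illustrative, not the repo's actual code):

from llm_api.base_api import BaseLLMAPI

class MyProviderAPI(BaseLLMAPI):
    """Hypothetical wrapper for a new LLM provider."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def generate(self, messages, tools):
        # Translate the standardized messages and tool schemas into this
        # provider's request format, call its client, and return the response.
        raise NotImplementedError("Replace with your provider's client call")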
Here's a snippet that shows how you might add a custom function, calculate_dog_age, to your main script:
from tool_converter import ToolConverter
from llm_tools.llm_handler import LLMHandler
from llm_api.gemini_api import GeminiAPI

def calculate_dog_age(human_years: int) -> str:
    """Converts human years to approximate dog years using the common rule."""
    dog_years = human_years * 7
    return f"{human_years} human years is approximately {dog_years} dog years!"

if __name__ == "__main__":
    converter = ToolConverter()

    # Include your custom function
    functions = [calculate_dog_age]

    # Generate schemas for every supported provider
    schemas = converter.generate_schemas(functions)
    print("Schemas:", schemas["gemini"])  # Example: print the Gemini schema

    # Create the LLM client
    gemini_client = GeminiAPI(model_name="gemini-2.0-flash-exp")

    # Use the handler with the client and the matching schema set
    llm_handler = LLMHandler(gemini_client)
    llm_handler.register_functions(functions)
    llm_handler.set_tools(schemas["gemini"])

    # Set system instructions (optional)
    llm_handler.set_system_prompt("You can convert human years to dog years with the calculate_dog_age tool.")

    # Send a user message
    response = llm_handler.send_user_message("I'm 25 years old, how old would I be as a dog?")
    print("LLM Response:", response)
We welcome issue reports, feature requests, and pull requests. Please open a GitHub Issue first to discuss significant changes or additions.
This project is licensed under the MIT License - see the LICENSE file for details.
Enjoy building with this multi-LLM tool integration system!