AI Bricks

AI Bricks provides a minimalist toolbox for building Generative AI based systems.

Features:

  • unified interface to multiple Generative AI providers
  • strong support for local model servers (KoboldCpp, LMStudio, Ollama, LlamaCpp, tabbyAPI, ...)
  • ability to record requests and responses (in sqlite) and generate reports (e.g. usage)
  • configuration-driven approach to prompt templates (yaml + jinja2)
  • minimal dependencies (requests, yamja)
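The recording feature above can be pictured as a thin sqlite layer around each call. The sketch below illustrates the idea with a hypothetical schema and function names (`record_call`, `usage_report`); the actual aibricks implementation may differ:

```python
import json
import sqlite3
import time

# Hypothetical schema -- aibricks' actual tables may differ.
DB = sqlite3.connect(":memory:")
DB.execute("""CREATE TABLE IF NOT EXISTS calls (
    ts REAL, model TEXT, request TEXT, response TEXT)""")

def record_call(model, request, response):
    "Store one request/response pair as JSON."
    DB.execute("INSERT INTO calls VALUES (?,?,?,?)",
               (time.time(), model, json.dumps(request), json.dumps(response)))
    DB.commit()

def usage_report():
    "Aggregate the number of recorded calls per model."
    rows = DB.execute("SELECT model, COUNT(*) FROM calls GROUP BY model")
    return dict(rows.fetchall())

record_call("openai:gpt-4o-mini", {"messages": []}, {"choices": []})
record_call("openai:gpt-4o-mini", {"messages": []}, {"choices": []})
record_call("xai:grok-beta", {"messages": []}, {"choices": []})
print(usage_report())  # {'openai:gpt-4o-mini': 2, 'xai:grok-beta': 1}
```

Storing raw request/response JSON keeps the schema stable while still allowing arbitrary reports to be computed later with plain SQL.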

AI Bricks focuses on providing basic building blocks rather than a complex framework. This aligns with research showing that the most successful LLM implementations use simple, composable patterns rather than complex frameworks. By keeping things minimal, it allows developers to build exactly what they need without unnecessary abstractions.

Examples

Chat example, adapted from the aisuite repo:

import aibricks
client = aibricks.client()

models = ["openai:gpt-4o", "xai:grok-beta"]

messages = [
    {"role": "system", "content": "Respond in Pirate English."},
    {"role": "user", "content": "Tell me a joke."},
]

for model in models:
    response = client.chat(
        model=model,
        messages=messages,
        temperature=0.75
    )
    print(response['choices'][0]['message']['content'])

Common Vision API

import aibricks
client = aibricks.client()

models = [
    "openai:gpt-4o",
    "xai:grok-vision-beta",
    "anthropic:claude-3-5-sonnet-latest",
    "lmstudio:moondream2",
]

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "What do these two images have in common?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/image1.jpg"}},
        {"type": "image_url", "image_url": {"url": "file://path/to/image2.jpg",
                                            "detail": "high"}},
    ]},
]

for model in models:
    response = client.chat(
        model=model,
        messages=messages,
    )
    print(response)
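Most hosted providers cannot fetch a file:// URL, so a local image typically has to be inlined as a base64 data URL before sending. A hedged sketch of that conversion, using only the standard library (the helper name is illustrative, not part of the aibricks API):

```python
import base64
import mimetypes

def to_data_url(path):
    "Read a local image file and inline it as a base64 data URL."
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

The resulting string can be dropped into the same `image_url` slot as a regular https URL.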

Minimalistic implementation of MemGPT-like agent

This example can be found in the examples/memgpt directory.

Supported providers

| Provider    | Example Connection String                             | Environmental Variables | Notes                       |
|-------------|-------------------------------------------------------|-------------------------|-----------------------------|
| OpenAI      | openai:gpt-4o-mini                                    | OPENAI_API_KEY          |                             |
| Anthropic   | anthropic:claude-3-5-haiku-latest                     | ANTHROPIC_API_KEY       |                             |
| XAI         | xai:grok-beta                                         | XAI_API_KEY             |                             |
| Google      | google:gemini-1.5-flash                               | GEMINI_API_KEY          |                             |
| DeepSeek    | deepseek:deepseek-chat                                | DEEPSEEK_API_KEY        |                             |
| OpenRouter  | openrouter:openai/gpt-4o                              | OPENROUTER_API_KEY      |                             |
| ArliAI      | arliai:Llama-3.1-70B-Tulu-2                           | ARLIAI_API_KEY          |                             |
| Together    | together:google/gemma-2b-it                           | TOGETHER_API_KEY        | 🚧🚧🚧                      |
| HuggingFace | huggingface:meta-llama/Meta-Llama-3-8B-Instruct-Turbo | HUGGINGFACE_API_KEY     | 🚧🚧🚧                      |
| Ollama      | ollama:qwen2.5-coder:7b                               | -                       | GGUF                        |
| LMStudio    | lmstudio:qwen2.5-14b-instruct                         | -                       | GGUF, dynamic model loading |
| KoboldCpp   | koboldcpp                                             | -                       | GGUF                        |
| LlamaCpp    | llamacpp                                              | -                       | GGUF                        |
| tabbyAPI    | tabbyapi                                              | TABBYAPI_API_KEY        | EXL2, GPTQ                  |
| GPT4All     | gpt4all:Reasoner v1                                   | -                       | GGUF, buggy                 |
| vLLM        | vllm:/opt/models/qwen2.5-coder-3b-instruct-q4_0.gguf  | -                       | GGUF                        |
| dummy       | dummy                                                 | -                       |                             |
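The connection strings above follow a provider:model pattern: everything before the first colon selects the provider, and the remainder (which may itself contain colons, as in ollama:qwen2.5-coder:7b) names the model. A minimal sketch of that split, not the actual aibricks parser:

```python
def parse_connection_string(conn):
    "Split 'provider:model' on the first colon only."
    provider, _, model = conn.partition(":")
    return provider, model or None

print(parse_connection_string("ollama:qwen2.5-coder:7b"))
# ('ollama', 'qwen2.5-coder:7b')
print(parse_connection_string("koboldcpp"))  # no model part needed
# ('koboldcpp', None)
```

Splitting only on the first colon is what lets local-server entries like koboldcpp omit the model part entirely while Ollama-style tags keep their embedded colons.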

License

MIT

Installation

Don't. The project is still in its infancy.

References

Chat API docs

Vision API docs

Local model servers

Local model servers (TODO)
