Standalone Mode

In standalone (non-container) mode, the extension connects directly with an Ollama instance.

🚀 Quick Start

1. Install Ollama on your machine from the Ollama website.

2. Pull local models

  • Chat Model

    ollama pull gemma:2b
  • Code Model

    ollama pull codegemma:2b

3. Set the mode to "Standalone" in the extension (Settings > Local AI Pilot > Mode).
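
Before switching modes, you can verify that Ollama is serving the pulled models (assuming the default Ollama port 11434):

ollama list
curl http://localhost:11434/api/tags

Both should list gemma:2b and codegemma:2b.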

Using different models for chat/code completion [Optional]

  • Configure the model used for chat in the extension (Settings > Local AI Pilot > ollamaModel).
  • Configure the model used for code completion in the extension (Settings > Local AI Pilot > ollamaCodeModel).

Container Mode

In Container Mode, the LLM API Container acts as a bridge between Ollama and the extension, enabling fine-grained customization and advanced features such as Document Q&A, chat history (caching), and remote models.

Pre-requisites

  • Docker with Docker Compose installed on your machine.

🚀 Quick Start

1. Start containers using docker compose

Download the appropriate Docker Compose file and start the services on demand.

docker-compose-cpu.yml | docker-compose-gpu.yml

docker compose -f docker-compose-cpu|gpu.yml up llmapi [ollama] [cache]
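
For example, to start the API service together with both optional services using the CPU compose file (the square brackets above mark optional services):

docker compose -f docker-compose-cpu.yml up llmapi ollama cache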

Container Services

  • llmapi : LLM API container service that connects the extension with Ollama. All configuration is available through environment variables.
  • ollama [Optional] : Turn on this service to run Ollama as a container.
  • cache [Optional] : Turn on this service to cache and search chat history.

Tip

Start with the llmapi service. Add other services based on your needs.

Configuring Docker Compose to connect to Ollama running on localhost (via the Ollama app)

docker compose -f docker-compose-cpu|gpu.yml up llmapi

# update the OLLAMA_HOST env variable to point to the host machine (host.docker.internal)
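
A minimal sketch of that change, assuming OLLAMA_HOST sits in the llmapi service's environment block (match the structure and value format of the downloaded file):

  llmapi:
    environment:
      OLLAMA_HOST: host.docker.internal   # keep the value format already used in the file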

2. Set the mode to "Container" in the extension (Settings > Local AI Pilot > Mode).


📘 Advanced Configuration (Container Mode)

1. Chat History

Chat history can be saved in Redis by turning on the cache service. By default, chats are cached for 1 hour, which is configurable in the Docker Compose file. This also enables searching previous chats from the extension by keyword or chat ID.

docker compose -f docker-compose-cpu|gpu.yml up cache
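
To peek at what has been cached, you can query Redis inside the cache service (a sketch; the key layout is internal to the service, and the compose file name may be the gpu variant on your machine):

docker compose -f docker-compose-cpu.yml exec cache redis-cli KEYS '*'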

2. Document Q&A (RAG Chat)

Start a Q&A chat using Retrieval-Augmented Generation (RAG) and embeddings. Pull a local model to generate and query embeddings.

  • Embed Model

    ollama pull nomic-embed-text

Use the Docker Compose volume (ragdir) to bind the folder containing the documents for Q&A. The embeddings are stored in the (ragstorage) volume.
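
A sketch of how the (ragdir) volume could point to a host folder, assuming it is declared as a bind-backed named volume; the host path is a placeholder, and the structure of the downloaded file takes precedence:

volumes:
  ragdir:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /absolute/path/to/your/documents   # placeholder path to your documents folder
  ragstorage: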

3. Using a different Ollama model

  • Pull your preferred model from the Ollama model library

    ollama pull <model-name>
    ollama pull <code-model-name>
    ollama pull <embed-model-name>
  • Update the model name in the Docker Compose environment variables.

    Note: Local model names are prefixed with "local/"

    MODEL_NAME: local/<model-name>
    CODE_MODEL_NAME: local/<code-model-name>
    EMBED_MODEL_NAME: local/<embed-model-name>
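
For example, to switch only the chat model to llama3 (an illustrative pick from the chat model list further below), pull it locally and point MODEL_NAME at it:

ollama pull llama3

MODEL_NAME: local/llama3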

๐ŸŒ Remote models (Container Mode)

Remote models require API keys, which can be configured in the Docker Compose file.

Supports models from the gemini, cohere, openai, and anthropic LLM providers.

Update the model name and model key in the Docker Compose environment variables.

Turn down the ollama service if it's running, as it will not be used for remote inference.

docker compose down ollama

Model names follow the {Provider}/{ModelName} format.
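
An illustrative entry, using gemini as the provider and gemini-pro as the model (example values; the API key goes into the key variable already defined in the compose file):

MODEL_NAME: gemini/gemini-pro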


Choosing Local Models

Models trained on a larger number of parameters (7b, 70b) are generally more reliable and precise, though small models like gemma:2b and phi3 can deliver surprisingly strong performance. Ultimately, the ideal local model depends on your system's resource capacity and the model's performance.

Warning

Heavier models will require more processing power and memory.

Chat Models

You can choose any instruct model for chat. For better results, choose models that are trained for programming tasks.

gemma:2b | phi3 | llama3 | qwen2:1.5b | gemma:7b | codellama:7b

Code Completion Models

For code completion, choose code models that support FIM (fill-in-the-middle).

codegemma:2b | codegemma:7b-code | codellama:code | codellama:7b-code | deepseek-coder:6.7b-base | granite-code:3b-base

Important

Instruct-based models are not supported for code completion.

Embed Models

Choose any embed model, such as nomic-embed-text.


Running Ollama as a container

docker compose -f docker-compose-cpu|gpu.yml up ollama

# update OLLAMA_HOST env variable to "ollama"

Ollama commands are now available via Docker:

docker exec -it ollama-container ollama ls
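
For example, to pull the default chat model inside the container:

docker exec -it ollama-container ollama pull gemma:2b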

GPU support help