# Traefik Variables
TRAEFIK_IMAGE_TAG=traefik:3.2
# Set the log level (DEBUG, INFO, WARN, ERROR)
TRAEFIK_LOG_LEVEL=WARN
# The email address used by Let's Encrypt for renewal notices
TRAEFIK_ACME_EMAIL=admin@example.com
# The hostname used to access the Traefik dashboard and to configure domain-specific rules
TRAEFIK_HOSTNAME=traefik.ollama.heyvaldemar.net
# Basic Authentication for Traefik Dashboard
# Username: traefikadmin
# The password must be hashed using MD5, SHA1, or BCrypt, e.g. via https://hostingcanada.org/htpasswd-generator/
TRAEFIK_BASIC_AUTH=traefikadmin:$$2y$$10$$sMzJfirKC75x/hVpiINeZOiSm.Jkity9cn4KwNkRvO7hSQVFc5FLO
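# Example (illustrative, not part of the original setup): generate the credential pair locally with
# htpasswd from apache2-utils, then double each $ so Docker Compose does not interpolate the hash:
# htpasswd -nbB traefikadmin 'your-password' | sed -e 's/\$/\$\$/g'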
# Ollama Variables
# For configurations using AMD GPUs, use ollama/ollama:rocm
OLLAMA_IMAGE_TAG=ollama/ollama:0.3.12
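# For example, to switch to the AMD build, replace the tag above with:
# OLLAMA_IMAGE_TAG=ollama/ollama:rocm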
OLLAMA_HOSTNAME=ollama.heyvaldemar.net
# Define the models to be installed in the Ollama service. Separate each model name with a comma
# Recommended sources for finding models compatible with Ollama:
# 1. Ollama Official Website - https://ollama.com/library: Visit the official Ollama website for detailed information
# on available models and how to use them with the platform
# 2. Hugging Face Model Hub - https://huggingface.co/models: Ollama supports many models from Hugging Face
# You can search and explore models across various categories such as NLP, vision, and more
# 3. Open WebUI - https://openwebui.com: While primarily designed for Open WebUI, it hosts a range of models
# that might also be adaptable for use with Ollama. Ensure compatibility before use
OLLAMA_INSTALL_MODELS=llama3,codegemma,mistral
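# Additional models can also be pulled after startup; a sketch, assuming the Compose service is named "ollama":
# docker compose exec ollama ollama pull mistral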
# Number of GPUs to allocate to the Ollama service.
# Specify '1' to use a single GPU, ideal for environments where resources are shared or for less GPU-intensive tasks
# Use 'all' to allocate all available GPUs, which is suitable for high-performance tasks that benefit from parallel computing across multiple GPUs
# To use an NVIDIA GPU for processing, uncomment and configure the corresponding GPU section in the Compose file
OLLAMA_GPU_COUNT=all
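# A minimal sketch of how the GPU section in the Compose file might consume this variable
# (an assumption about the YAML layout; requires the NVIDIA Container Toolkit on the host):
#   deploy:
#     resources:
#       reservations:
#         devices:
#           - driver: nvidia
#             count: ${OLLAMA_GPU_COUNT}
#             capabilities: [gpu]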
# Open WebUI Variables
WEBUI_IMAGE_TAG=ghcr.io/open-webui/open-webui:0.3