
Docker stuck at creating container #166

Open
aturevich opened this issue Jun 28, 2024 · 1 comment
aturevich commented Jun 28, 2024

Hi, on Windows 11 inside WSL 2, I am running:

docker run -it --rm -v ./ollama_files:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

It gets stuck at:
2024/06/28 11:29:42 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-06-28T11:29:42.101Z level=INFO source=images.go:730 msg="total blobs: 0"
time=2024-06-28T11:29:42.103Z level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-06-28T11:29:42.103Z level=INFO source=routes.go:1111 msg="Listening on [::]:11434 (version 0.1.47)"
time=2024-06-28T11:29:42.104Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2712116269/runners
time=2024-06-28T11:29:44.796Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60101 cpu]"
time=2024-06-28T11:29:44.812Z level=INFO source=types.go:98 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="15.4 GiB" available="14.1 GiB"

I have tried different versions of Docker, with no success so far.
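For reference, the log above shows the server listening on [::]:11434, so a quick probe from another shell tells whether the container is actually hung or just sitting idle. A minimal sketch, assuming the container name ollama from the command above:

# Confirm the container is up
docker ps --filter name=ollama

# The Ollama server answers plain HTTP on its root path
curl http://localhost:11434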

samchenghowing commented

total="15.4 GiB" available="14.1 GiB" Seems your system don't have enough memory, try to use a smaller model in ollama
