Multi-GPU Support #1168
Replies: 2 comments · 1 reply
- I think you just need to change the number of GPUs in the Docker Compose file. @mattwebbio
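For concreteness, a minimal sketch of what that change could look like, using the standard Compose `deploy.resources.reservations.devices` syntax (the service name and image tag are illustrative; use the CUDA-enabled image tag from the LocalAI docs):

```yaml
# docker-compose.yaml (sketch): request GPUs for the LocalAI container
services:
  api:
    image: quay.io/go-skynet/local-ai:master-cublas-cuda12  # illustrative tag
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # or an integer such as 2 to request that many GPUs
              capabilities: [gpu]
```

In the Compose spec, `count` and `device_ids` are mutually exclusive within a device entry; `count: all` exposes every host GPU to the container.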
- Look up the documentation on how to set up a custom llama model, and set up the "TheBloke …
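As a rough sketch of such a model config: the `name`, `backend`, `f16`, and `gpu_layers` fields follow LocalAI's documented model YAML, while `tensor_split` and `main_gpu` mirror llama.cpp's multi-GPU options and are assumptions here (check the LocalAI docs before relying on them); the model file name is hypothetical:

```yaml
# models/my-model.yaml (sketch): llama model config with GPU offload settings
name: my-model
backend: llama
parameters:
  model: my-model.Q4_K_M.gguf   # hypothetical file, e.g. a GGUF quantization
context_size: 4096
f16: true
gpu_layers: 35          # number of layers to offload to the GPU(s)
tensor_split: "50,50"   # assumption: llama.cpp-style split of the model across two cards
main_gpu: "0"           # assumption: llama.cpp-style primary-GPU selection
```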
- Hi all 👋
I see CUDA support was added a few months back. Super exciting!
Is there anything preventing me from using multiple GPUs, as enabled in llama.cpp by this PR? I deploy LocalAI via Docker (currently CPU-only). Would anything special need to change in the example GPU Docker Compose file in the docs for multiple GPUs, other than appending the second card to the device reservations?
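For reference, "appending the second card" could look like the sketch below, using the Compose spec's `device_ids` field instead of `count` (the device IDs are illustrative):

```yaml
# Compose device reservation (sketch): pin the container to two specific cards
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ['0', '1']   # both GPUs, as enumerated by nvidia-smi
          capabilities: [gpu]
```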