"I'd like to see other models." #1916
BradHutchings
started this conversation in
General
Replies: 0 comments
A note that might be helpful to those doing local installations.
You need to find models that have been converted to GGUF format. That's the newish format supported by llama.cpp; it replaces GGML, which llama.cpp no longer supports.
I wanted to try out Apple's OpenELM. It turns out it isn't supported in GGUF out of the box. There is a lone GitHub developer working on it:
ggerganov/llama.cpp#6868
All that aside, I have worked out the `settings-local.yaml` file, a small change to `scripts/setup`, and the order of operations needed to use a different GGUF model. I'll post again in this thread when I find something suitable.
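For anyone trying the same thing, here is a rough sketch of what such a `settings-local.yaml` might look like. This assumes the project is private-gpt, where a `settings-<profile>.yaml` file overrides the base `settings.yaml`; the exact keys, repo, and file names below are illustrative assumptions, not the author's confirmed config:

```yaml
# settings-local.yaml — sketch of overriding the default model.
# Assumption: private-gpt-style profile overrides; any GGUF model
# hosted on Hugging Face should work for the two llamacpp fields.
llm:
  mode: llamacpp

llamacpp:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.2-GGUF   # example repo
  llm_hf_model_file: mistral-7b-instruct-v0.2.Q4_K_M.gguf  # example GGUF file
```

The order of operations that seems to matter: edit the YAML first, then re-run `scripts/setup` (which downloads the model named in the active profile), then start the server with the `local` profile selected.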