[draft] add llamafile 🦙📁 #866
Conversation
Thanks for the PR @not-lain, copying our message from Slack here:
A good idea would be to add llamafile as a LocalApp instead of a library.
(That's also how llama.cpp / llama-cpp-python are added.)
So what I'd recommend is that you open a PR to LocalApps: https://github.com/huggingface/huggingface.js/blob/1de39598a231afab805e252d2b27e0ec56a1897a/packages/tasks/src/local-apps.ts#L142
This way we can add support for both GGUFs and llamafiles in the same PR, and people would be able to opt in to the local app as well. That makes it more useful for people.
Would you be keen on opening a PR for that?
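For reference, a rough sketch of what such an entry might look like in local-apps.ts — the field names mirror the ones that appear later in this diff, while prettyLabel and the exact placement in the registry are assumptions by analogy with the other entries:

	llamafile: {
		prettyLabel: "llamafile", // assumed field, mirroring other LocalApp entries
		docsUrl: "https://github.com/Mozilla-Ocho/llamafile",
		mainTask: "text-generation",
		displayOnModelPage: isLlamaCppGgufModel,
		snippet: snippetLlamafileGGUF,
	},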
const command = (binary: string) =>
	[
		"# Load and run the model :",
		`wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat
Suggested change:
-		`wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat
+		`wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? '{{GGUF_FILE}}'}`,
Would something like that work? (not sure)
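It should: template literals interpolate arbitrary expressions, so the ?? fallback can sit inside the ${...} placeholder and .concat is unnecessary. A quick self-contained check (the model id here is a made-up placeholder):

	const modelId = "org/model"; // hypothetical id, for illustration only
	const filepath: string | undefined = undefined;
	const withConcat = `https://huggingface.co/${modelId}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`);
	const withTemplate = `https://huggingface.co/${modelId}/resolve/main/${filepath ?? "{{GGUF_FILE}}"}`;
	console.log(withConcat === withTemplate); // true — both produce the same string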
"# Load and run the model :", | ||
`wget https://huggingface.co/${model.id}/resolve/main/`.concat(`${filepath ?? "{{GGUF_FILE}}"}`), // could not figure out how to do it without concat | ||
`chmod +x ${binary}`, | ||
`${binary} -m ${filepath?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant' `, // will this create a second dropdown ? |
Suggested change:
-		`${binary} -m ${filepath?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant' `, // will this create a second dropdown ?
+		`${binary} -m ${filepath ?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant'`, // will this create a second dropdown ?
	docsUrl : "https://github.com/Mozilla-Ocho/llamafile",
	mainTask : "text-generation",
	displayOnModelPage : isLlamaCppGgufModel, // update this later to include .llamafile
	snippet: snippetLlamafileGGUF ,
Suggested change:
-	snippet: snippetLlamafileGGUF ,
+	snippet: snippetLlamafileGGUF,
].join("\n"); | ||
return [ | ||
{ | ||
title: "Use pre-built binary", |
I don't think this step is correct for llamafile, because the downloaded model file is already a pre-built binary.
The release binary from https://github.com/Mozilla-Ocho/llamafile/releases is the one used in the Creating llamafiles section of the README, so this step should be named Create your own llamafile.
(Or we could also remove this section and just leave a link to the README.)
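To illustrate the point: a downloaded .llamafile already embeds both the runtime and the weights, so a renamed or simplified section could skip the separate binary download entirely. A sketch under that assumption (the helper name and wording are mine, not from the PR):

	const snippetLlamafile = (modelId: string, file: string): string =>
		[
			"# Download the model's llamafile and run it directly:",
			`wget https://huggingface.co/${modelId}/resolve/main/${file}`,
			`chmod +x ${file}`,
			`./${file} -p 'You are a helpful assistant'`, // no -m flag: the weights ship inside the file
		].join("\n");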
cc @pcuenca and @ngxson
llamafile is a local application for running and distributing LLMs using a single file, developed by jart in collaboration with the Mozilla team. You can already filter by llamafile on hf.co/models under the Libraries section, so I thought this is a good time to add code snippets for the library. The app handles .gguf files and .llamafile files 🦙📁

This PR tackles only .gguf files, leaving .llamafile for another PR, because the current huggingface.js does not handle {{LLAMAFILE_FILE}} yet (I think). I barely know any JS, so I might need some expert help with this PR.

An example of a code snippet, as provided by the maintainer of the library, could be used as such:
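(The example itself did not survive the export. Judging from the diff earlier in the thread, it presumably matched what the snippet helper generates; below is a reconstruction with the review suggestions applied, where model and filepath are assumed to be in scope as in the diff — a sketch, not the final code.)

	const command = (binary: string): string =>
		[
			"# Load and run the model:",
			`wget https://huggingface.co/${model.id}/resolve/main/${filepath ?? "{{GGUF_FILE}}"}`,
			`chmod +x ${binary}`,
			`${binary} -m ${filepath ?? "{{GGUF_FILE}}"} -p 'You are a helpful assistant'`,
		].join("\n");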
As for the .llamafile snippet, I have reported it internally to our chief 🦙 officer @osanseviero (private DM).