Evaluate open source models with PEFT adapters #2
Hi, I'm working (slowly) on a new version on the dev branch, which supports open-model inference. I have previously tested with LLaMA only and haven't had time to compute the results thoroughly. Cheers,
Hi @terryyz, thanks for the update. I am looking forward to it; please keep me posted once you have news. At MonsterAPI we have developed a no-code LLM finetuner and are exploring different ways to quickly evaluate finetuned adapters. Thanks,
Hi @gvijqb, no problem! Please let me know if you'd like to collaborate on this project and beyond :) Cheers,
Sure, I'd love to explore that. Could you share how we can collaborate?
I'm not sure whether MonsterAPI could provide some computational resources. I'm a bit short of good GPUs these days 😞
I need a way to evaluate a model like this:
https://huggingface.co/qblocks/falcon-7b-python-code-instructions-18k-alpaca
This is a model finetuned for code generation on the open-source base model falcon-7b. The output is an adapter file produced with LoRA. How can I do this with your tool?
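For context on what evaluating such an adapter involves: a LoRA adapter does not replace the base model's weights; it stores two small matrices per target layer whose scaled product is added to the frozen weight. A minimal sketch of that merge arithmetic, using NumPy with hypothetical dimensions (the matrix sizes, rank, and `alpha` below are illustrative, not taken from the falcon-7b adapter):

```python
import numpy as np

# Hypothetical shapes: a d x k base weight, LoRA rank r, scaling alpha.
d, k, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))   # frozen base weight
A = rng.normal(size=(r, k))   # LoRA down-projection
B = np.zeros((d, r))          # LoRA up-projection (initialized to zero)

def apply_lora(W, A, B, alpha, r):
    """Merged weight under the LoRA update: W + (alpha / r) * B @ A."""
    return W + (alpha / r) * (B @ A)

# With B still zero, the merged weight equals the base weight,
# so a freshly initialized adapter leaves behavior unchanged.
merged = apply_lora(W, A, B, alpha, r)
assert np.allclose(merged, W)
```

In practice, rather than merging by hand, the `peft` library can attach a published adapter to its base model (`PeftModel.from_pretrained(base_model, adapter_id)`), after which the combined model can be run through an evaluation harness like any plain HuggingFace model.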