🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
Updated Jan 17, 2024 - Python
Orca 2 on Colab
A web app that lets you select a subject and change its background, or keep the background and change the subject.
LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LoRA, and QLoRA. Contribute experiments and implementations to improve LLM efficiency, and join the discussions. Let's make LLMs more efficient together!
Open-source RAG with LlamaIndex for a Japanese LLM in a low-resource setting
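The retrieval step behind a RAG pipeline — embed the query, rank document chunks by similarity, pass the top hits to the LLM as context — can be sketched in plain Python. This is a toy illustration with made-up 2-dimensional vectors; LlamaIndex wraps the same idea with real embedding models and index structures:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=2):
    """Return the indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy corpus of three "chunk embeddings"
docs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(retrieve([1.0, 0.1], docs, k=2))  # → [0, 2]
```

In a real pipeline the retrieved chunks would be concatenated into the prompt before generation; that prompt-assembly step is what the framework automates.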
This project provides code and a Colaboratory notebook for fine-tuning an Alpaca 3B-parameter model, originally developed at Stanford University. The model was adapted with LoRA via HuggingFace's PEFT library so it can be trained with fewer computational resources and trainable parameters.
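The reason LoRA shrinks the training cost in fine-tunes like this one: a full update to a d×k weight matrix needs d·k trainable parameters, while a rank-r factorization B·A needs only r·(d+k). A minimal sketch of that arithmetic (the 4096×4096 shape and rank 8 are illustrative assumptions, not taken from the repo):

```python
def lora_param_counts(d: int, k: int, r: int):
    """Trainable parameters: full update vs. a rank-r LoRA update."""
    full = d * k          # updating every entry of the d x k weight matrix
    lora = r * (d + k)    # low-rank factors: B is d x r, A is r x k
    return full, lora

# Example: one hypothetical 4096 x 4096 projection at LoRA rank 8
full, lora = lora_param_counts(4096, 4096, 8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {full / lora:.0f}x")
# → full: 16,777,216  lora: 65,536  ratio: 256x
```

Because the base weights stay frozen, only these low-rank factors need optimizer state, which is what makes Colab-scale fine-tuning feasible.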
Conversational AI model for open-domain dialogs
This project provides code and a Colaboratory notebook for fine-tuning an Alpaca 350M-parameter model, originally developed at Stanford University. The model was adapted with LoRA via HuggingFace's PEFT library so it can be trained with fewer computational resources and trainable parameters.
A fine-tuned model based on the "TinyPixel/Llama-2-7B-bf16-sharded" base model and the "timdettmers/openassistant-guanaco" dataset.
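Fine-tunes like this typically load the sharded base model in quantized form via bitsandbytes to fit it in GPU memory. The core idea behind the simplest scheme, absmax quantization, can be illustrated in plain Python (int8 for clarity; the library itself uses optimized 8-bit and 4-bit CUDA kernels, so this is a conceptual sketch, not its implementation):

```python
def absmax_quantize(xs):
    """Symmetric int8 quantization: scale values so max |x| maps to 127."""
    scale = 127.0 / max(abs(x) for x in xs)
    quantized = [round(x * scale) for x in xs]          # stored as int8
    dequantized = [q / scale for q in quantized]        # approximate reconstruction
    return quantized, dequantized

q, deq = absmax_quantize([0.3, -1.0, 0.25])
print(q)    # small integers in [-127, 127]
print(deq)  # close to the original floats, within one quantization step
```

Storing weights as int8 (or 4-bit) plus a per-block scale is what cuts memory roughly 2-4x versus bf16, at the cost of the small reconstruction error visible in `deq`.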
Provides specialized assistance in the field of immigration law using a large language model.