Orca 2 on Colab
Updated Feb 20, 2024 - Jupyter Notebook
A web app that allows you to select a subject and then change its background, OR keep the background and change the subject.
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
Open-source RAG with Llama Index for a Japanese LLM in a low-resource setting
Conversational AI model for open-domain dialogs
Provides specialized assistance in the field of immigration law using a large language model
This model is fine-tuned from the "TinyPixel/Llama-2-7B-bf16-sharded" base model on the "timdettmers/openassistant-guanaco" dataset
In this project, I have provided code and a Colaboratory notebook that facilitate fine-tuning of a 3B-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA to run with fewer computational resources and trainable parameters, using Hugging Face's PEFT library.
In this project, I have provided code and a Colaboratory notebook that facilitate fine-tuning of a 350M-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA to run with fewer computational resources and trainable parameters, using Hugging Face's PEFT library.
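The two Alpaca entries above rely on LoRA to shrink the set of trainable parameters. A minimal sketch of the idea in plain NumPy (illustrative only, not the PEFT library's API; the dimensions, rank `r`, and scaling `alpha` below are arbitrary example values):

```python
import numpy as np

# LoRA in a nutshell: freeze the pretrained weight W and train only two
# small matrices A and B whose product forms a low-rank update to W.
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4   # r << d_in, d_out: the low-rank bottleneck
alpha = 8                     # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, initialized small
B = np.zeros((d_out, r))                    # trainable, initialized to zero

def lora_forward(x):
    # Base path plus scaled low-rank update: W x + (alpha / r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero the adapter is a no-op, so the adapted
# model starts out exactly equal to the frozen base model.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` are trained, so the adapter adds `r * (d_in + d_out)` parameters instead of `d_in * d_out`, which is what lets notebooks like these run on a single Colab GPU.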
LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LoRA, and QLoRA. Contribute experiments and implementations to enhance LLM efficiency, join the discussions, and help push the boundaries of LLM optimization. Let's make LLMs more efficient together!
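QLoRA pairs a LoRA adapter with a quantized base model. The core quantization step can be sketched in pure Python as absmax round-to-nearest 8-bit quantization (a simplified illustration of the kind of scheme bitsandbytes implements, not its actual kernels; real QLoRA uses a 4-bit NF4 format with block-wise scales):

```python
# Absmax 8-bit quantization (illustrative sketch): map each float weight
# to an integer in [-127, 127] using one shared scale per tensor.
def quantize_absmax(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int codes and the scale.
    return [v * scale for v in q]

w = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale = quantize_absmax(w)
w_hat = dequantize(q, scale)

# Round-to-nearest bounds the reconstruction error by half a
# quantization step (small epsilon for float arithmetic).
assert max(abs(a - b) for a, b in zip(w, w_hat)) <= scale / 2 + 1e-9
```

Each weight is stored as a small integer plus one shared scale, roughly quartering memory versus float32 at the cost of the bounded rounding error asserted above; bitsandbytes applies this idea block-wise and dequantizes on the fly during the forward pass.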