Collection of resources for practitioners to play with Large Language Models (LLMs). This list includes tools, software, libraries, datasets, research papers, and other resources related to LLM fine-tuning.
This repo serves as a collection of resources I've found while learning about LLMs. I hope it helps you too!
- May 2024: added Kaggle & Colab Notebooks section.
(Chart: Llama 3 Instruct 70B model comparison. Source: https://artificialanalysis.ai/models/llama-3-instruct-70b)
- Leaderboards
- Open LLM Leaderboard - aims to track, rank and evaluate LLMs and chatbots as they are released.
- Chatbot Arena Leaderboard - a benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.
- AlpacaEval Leaderboard - An Automatic Evaluator for Instruction-following Language Models
- Open Ko-LLM Leaderboard - The Open Ko-LLM Leaderboard objectively evaluates the performance of Korean large language models (LLMs).
- Yet Another LLM Leaderboard - Leaderboard made with LLM AutoEval using Nous benchmark suite.
- OpenCompass 2.0 LLM Leaderboard - OpenCompass is an LLM evaluation platform supporting a wide range of models (InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
- RLHF - Reinforcement Learning from Human Feedback
- Alignment in large language models (LLMs) refers to the degree to which a model's behavior matches human intentions, values, and goals. Alignment teaches the model the style or format for interacting with users, exposing the knowledge and capabilities it has already learned during pretraining (a minimal sketch of the reward-model loss behind RLHF follows this section).
- [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
- [UWaterloo] CS 886: Recent Advances on Foundation Models Homepage
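To make the RLHF recipe concrete, here is a minimal sketch, in plain PyTorch and not tied to any repository or paper linked above, of the pairwise reward-model loss popularized by InstructGPT: the reward model is trained to score the human-preferred response above the rejected one. The function name is mine.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).

    Both inputs have shape (batch,): scalar scores r(x, y) from the reward
    model for the preferred and the rejected response to the same prompt.
    """
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy check: the loss shrinks as chosen responses outscore rejected ones.
print(reward_model_loss(torch.tensor([2.0, 1.5]), torch.tensor([0.5, 0.2])))
```

The policy is then optimized against this learned reward (typically with PPO), with a KL penalty toward the pretrained model to keep generations on-distribution.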
- Meta
- LLaMA
- Llama 3: now available in both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications (a minimal loading sketch follows this list).
- Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- OPT was proposed in Open Pre-trained Transformer Language Models by Meta AI. OPT is a series of open-sourced large causal language models which perform comparably to GPT-3.
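As a quickstart for the instruction-tuned Llama 3, here is a minimal sketch using the Hugging Face `transformers` pipeline. It assumes you have been granted access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint and have `accelerate` installed for `device_map="auto"`.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",   # spread weights across available GPUs (needs accelerate)
    torch_dtype="auto",  # use the checkpoint's native precision
)

# Recent transformers versions apply the model's chat template to message lists.
messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one sentence."}]
# Prints the conversation, including the assistant's generated reply.
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```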
- Google
- Gemma is a family of transformer-based large language models developed by Google AI, released in 2B and 7B sizes.
- PaLM is a 540 billion parameter transformer-based large language model developed by Google AI.
- Chinchilla is a family of large language models developed by the research team at DeepMind, presented in March 2022.
- T5 is a text-to-text transformer language model developed by Google AI; variants include T5, FLAN-T5, and T5-LM-Adapt.
- Mistral AI
- Mistral 7B and Mixtral 8x7B: open-weight models released by Mistral AI under the Apache 2.0 license.
- Adept
- Fuyu: an 8-billion-parameter multimodal model released by Adept AI on October 17, 2023.
- Shanghai AI Laboratory
- InternLM2: a series of open 7B and 20B base and chat models released by Shanghai AI Laboratory.
- Microsoft
- Phi-3: Phi-3 Mini is a lightweight, state-of-the-art open model with 3.8B parameters, trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available web data, with a focus on high-quality, reasoning-dense properties.
- Alibaba
- Qwen (通义千问) is the large language model family built by Alibaba Cloud.
- Huawei
- PanGu-α: a 200B-parameter autoregressive pretrained Chinese language model developed by Huawei Noah's Ark Lab, the MindSpore team, and Peng Cheng Laboratory.
- ULIP-2: Towards Scalable Multimodal Pre-training For 3D Understanding, CVPR 2024. Code
- Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling, CVPR 2022. Code
- Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation, 2024. Project, Code
- DiffusionGPT: LLM-Driven Text-to-Image Generation System, 2024. Project, Code
- Self-correcting LLM-controlled Diffusion Models, CVPR 2024. Project, Code
- LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models, 2023. Code
- ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment, 2024. Project, Code
- 3D-LLM: Injecting the 3D World into Large Language Models, 2023. Project
- Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following, Code
- LiDAR-LLM: Exploring the Potential of Large Language Models for 3D LiDAR Understanding
- Mantis: Interleaved Multi-Image Instruction Tuning, 2024. Project, Code
- LIMA: Less Is More for Alignment, NeurIPS 2023. Project
- InstructGPT: Training Language Models to Follow Instructions with Human Feedback, Long Ouyang et al., Advances in Neural Information Processing Systems (2022), OpenAI
- DPO: Direct Preference Optimization: Your Language Model is Secretly a Reward Model, Rafailov et al., arXiv preprint 2023. arXiv:2305.18290 (see the loss sketch after this list)
- Zephyr: Direct Distillation of LM Alignment, Lewis Tunstall et al., arXiv preprint 2023. arXiv:2310.16944
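Because DPO removes the separate reward model, its objective fits in a few lines. Here is a minimal sketch of the loss from arXiv:2305.18290 in plain PyTorch; it assumes you have already computed summed per-response log-probabilities under the policy being trained and under a frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """All inputs have shape (batch,): sequence log-probabilities log pi(y|x)."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * implicit reward margin between chosen and rejected)
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

Lower `beta` tolerates a larger drift from the reference model; the paper commonly uses values around 0.1.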
- Ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language models. (67.8k stars)
- LlamaIndex 🦙: A data framework for your LLM applications. (23k stars)
- Petals 🌸: Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (7768 stars)
- LLaMA-Factory: An easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM3). (5532 stars)
- lit-gpt: Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. (3469 stars)
- H2O LLM Studio: A framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/ (2880 stars)
- Phoenix: AI Observability & Evaluation - Evaluate, troubleshoot, and fine-tune your LLM, CV, and NLP models in a notebook. (1596 stars)
- LLM-Adapters: Code for the EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". (769 stars)
- Platypus: Code for fine-tuning Platypus fam LLMs using LoRA. (589 stars)
- xtuner: A toolkit for efficiently fine-tuning LLM (InternLM, Llama, Baichuan, QWen, ChatGLM2). (540 stars)
- DB-GPT-Hub: A repository of models, datasets, and fine-tuning techniques for DB-GPT, aimed at enhancing model performance, especially in Text-to-SQL; a 13B LLM fine-tuned with this project achieved higher execution accuracy than GPT-4 on the Spider evaluation. (422 stars)
- LLM-Finetuning-Hub : Repository that contains LLM fine-tuning and deployment scripts along with our research findings. ⭐ 416
- Finetune_LLMs : Repo for fine-tuning causal LLMs. ⭐ 391
- MFTCoder : High-accuracy, high-efficiency multi-task fine-tuning framework for code LLMs; the industry's first fine-tuning framework for LLM coding ability offering high accuracy and efficiency with multi-task, multi-model, and multi-training-algorithm support. ⭐ 337
- llmware : Providing enterprise-grade LLM-based development framework, tools, and fine-tuned models. ⭐ 289
- LLM-Kit : 🚀 WebUI integrated platform for the latest LLMs; an all-in-one WebUI toolkit covering the full workflow for major language models. Supports mainstream LLM APIs and open-source models, plus knowledge bases, databases, role-play, Midjourney text-to-image, LoRA and full-parameter fine-tuning, dataset creation, Live2D, and other end-to-end application tools. ⭐ 232
- h2o-wizardlm : Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning. ⭐ 228
- hcgf : Humanable Chat Generative-model Fine-tuning | LLM fine-tuning. ⭐ 196
- llm_qlora : Fine-tuning LLMs using QLoRA (see the sketch after this list). ⭐ 136
- awesome-llm-human-preference-datasets : A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ⭐ 124
- llm_finetuning : Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes). ⭐ 114
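Several of the repositories above (llm_qlora, llm_finetuning) build on the same QLoRA recipe: load the base model in 4-bit with bitsandbytes, then train low-rank adapters on top. A minimal sketch, assuming `transformers`, `peft`, and `bitsandbytes` are installed and a CUDA GPU is available; the model name is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization, as in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # illustrative; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
# For actual training, peft.prepare_model_for_kbit_training is typically
# applied here before attaching adapters.

# Attach trainable low-rank adapters; the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```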
- LLaMA Efficient Tuning 🛠️: Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon).
- H2O LLM Studio 🛠️: Framework and no-code GUI for fine-tuning LLMs.
- PEFT 🛠️: Parameter-Efficient Fine-Tuning (PEFT) methods for efficient adaptation of pre-trained language models to downstream applications (adapter-loading example after this list).
- ChatGPT-like model 🛠️: Run a fast ChatGPT-like model locally on your device.
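To round out the PEFT entry above: after fine-tuning, only the small adapter weights need to be saved, and they can be re-attached to the frozen base model for inference. A minimal sketch; the adapter path is hypothetical.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")  # frozen base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Re-attach a previously trained LoRA adapter (path is hypothetical).
model = PeftModel.from_pretrained(base, "my-org/gpt2-lora-adapter")

# Optionally fold the adapter into the base weights for adapter-free deployment.
model = model.merge_and_unload()

inputs = tokenizer("Once upon a time", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```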
- Petals: Run large language models like BLOOM-176B collaboratively, allowing you to load a small part of the model and team up with others for inference or fine-tuning (usage sketch after this list). 🌸
- NVIDIA NeMo: A toolkit for building state-of-the-art conversational AI models and specifically designed for Linux. 🚀
- H2O LLM Studio: A framework and no-code GUI tool for fine-tuning large language models on Windows. 🎛️
- Ludwig AI: A low-code framework for building custom LLMs and other deep neural networks. Easily train state-of-the-art LLMs with a declarative YAML configuration file. 🤖
- bert4torch: An elegant PyTorch implementation of transformers. Load various open-source large model weights for inference and fine-tuning. 🔥
- Alpaca.cpp: Run a fast ChatGPT-like model locally on your device. A combination of the LLaMA foundation model and an open reproduction of Stanford Alpaca for instruction-tuned fine-tuning. 🦙
- promptfoo: Evaluate and compare LLM outputs, catch regressions, and improve prompts using automatic evaluations and representative user inputs. 📊
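For the Petals entry above, here is a minimal usage sketch based on its distributed drop-in replacement for the `transformers` API; the model name is illustrative, and exact class names may differ across Petals versions.

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # distributed drop-in for transformers

model_name = "petals-team/StableBeluga2"  # illustrative; check Petals docs for hosted models
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Loads a small local slice of the model; the remaining blocks run on swarm peers.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A proverb about patience:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```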