From 5534873ea05b79aa8bffe020092508f58f4372ca Mon Sep 17 00:00:00 2001 From: djliden <7102904+djliden@users.noreply.github.com> Date: Tue, 26 Mar 2024 16:00:16 -0500 Subject: [PATCH] adds start of axolotl notebook --- 5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) create mode 100644 5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb diff --git a/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb b/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb new file mode 100644 index 0000000..6c74792 --- /dev/null +++ b/5_gemma_2b_axolotl/gemma_2b_axolotl.ipynb @@ -0,0 +1,22 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Getting Started with Axolotl: Fine-Tuning Gemma 2B\n", + "\n", + "Axolotl is \"a tool designed to streamline the fine-tuning of various AI models.\" It is primarily geared toward training Hugging Face models via full fine-tuning, LoRA, QLoRA, ReLoRA, or GPTQ. Training runs are configured in YAML files, and Axolotl supports a variety of dataset formats, optional integrations such as xFormers and Flash Attention, multi-GPU training with FSDP and DeepSpeed, and logging to MLflow or WandB.\n", + "\n", + "The recommended workflow is to pick a quickstart config from the [examples](https://github.com/OpenAccess-AI-Collective/axolotl/tree/main/examples) directory and modify it as needed.\n" + ] + } + ], + "metadata": { + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}
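To illustrate the YAML-driven workflow the notebook describes, here is a minimal sketch of what a QLoRA config for Gemma 2B might look like. This is an assumption-laden example, not a file from this patch: the key names follow Axolotl's config schema as documented in its examples directory, and the dataset path (`mhenrichsen/alpaca_2k_test`) and hyperparameter values are placeholders chosen for illustration.

```yaml
# Hypothetical Axolotl QLoRA config for Gemma 2B (illustrative values)
base_model: google/gemma-2b

load_in_4bit: true          # QLoRA: quantize the base model to 4-bit
adapter: qlora

# LoRA adapter hyperparameters
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true    # apply LoRA to all linear layers

# Placeholder dataset; swap in your own path and format type
datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_torch

output_dir: ./outputs/gemma-2b-qlora
```

A config like this would typically be launched with something along the lines of `accelerate launch -m axolotl.cli.train config.yml`, per Axolotl's README; consult the current docs for the exact invocation, as the CLI has evolved across versions.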