From 741ebaff145bb73bfc6ac0807d08a1618ed10656 Mon Sep 17 00:00:00 2001
From: Joe Fernandez
Date: Fri, 9 Aug 2024 13:53:40 -0700
Subject: [PATCH] Removing "external" link tags (#510) - prospective fix for translation serving failure
---
 site/en/gemma/docs/lora_tuning.ipynb | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/site/en/gemma/docs/lora_tuning.ipynb b/site/en/gemma/docs/lora_tuning.ipynb
index 540263fef..2f6071358 100644
--- a/site/en/gemma/docs/lora_tuning.ipynb
+++ b/site/en/gemma/docs/lora_tuning.ipynb
@@ -85,9 +85,9 @@
     "\n",
     "LLMs are extremely large in size (parameters in the order of billions). Full fine-tuning (which updates all the parameters in the model) is not required for most applications because typical fine-tuning datasets are relatively much smaller than the pre-training datasets.\n",
     "\n",
-    "[Low Rank Adaptation (LoRA)](https://arxiv.org/abs/2106.09685){:.external} is a fine-tuning technique which greatly reduces the number of trainable parameters for downstream tasks by freezing the weights of the model and inserting a smaller number of new weights into the model. This makes training with LoRA much faster and more memory-efficient, and produces smaller model weights (a few hundred MBs), all while maintaining the quality of the model outputs.\n",
+    "[Low Rank Adaptation (LoRA)](https://arxiv.org/abs/2106.09685) is a fine-tuning technique which greatly reduces the number of trainable parameters for downstream tasks by freezing the weights of the model and inserting a smaller number of new weights into the model. This makes training with LoRA much faster and more memory-efficient, and produces smaller model weights (a few hundred MBs), all while maintaining the quality of the model outputs.\n",
     "\n",
-    "This tutorial walks you through using KerasNLP to perform LoRA fine-tuning on a Gemma 2B model using the [Databricks Dolly 15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k){:.external}. This dataset contains 15,000 high-quality human-generated prompt / response pairs specifically designed for fine-tuning LLMs."
+    "This tutorial walks you through using KerasNLP to perform LoRA fine-tuning on a Gemma 2B model using the [Databricks Dolly 15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k). This dataset contains 15,000 high-quality human-generated prompt / response pairs specifically designed for fine-tuning LLMs."
    ]
   },
   {
@@ -109,7 +109,7 @@
     "\n",
     "To complete this tutorial, you will first need to complete the setup instructions at [Gemma setup](https://ai.google.dev/gemma/docs/setup). The Gemma setup instructions show you how to do the following:\n",
     "\n",
-    "* Get access to Gemma on [kaggle.com](https://kaggle.com){:.external}.\n",
+    "* Get access to Gemma on [kaggle.com](https://kaggle.com).\n",
     "* Select a Colab runtime with sufficient resources to run\n",
     "  the Gemma 2B model.\n",
     "* Generate and configure a Kaggle username and API key.\n",
@@ -333,7 +333,7 @@
     "source": [
     "## Load Model\n",
     "\n",
-    "KerasNLP provides implementations of many popular [model architectures](https://keras.io/api/keras_nlp/models/){:.external}. In this tutorial, you'll create a model using `GemmaCausalLM`, an end-to-end Gemma model for causal language modeling. A causal language model predicts the next token based on previous tokens.\n",
+    "KerasNLP provides implementations of many popular [model architectures](https://keras.io/api/keras_nlp/models/). In this tutorial, you'll create a model using `GemmaCausalLM`, an end-to-end Gemma model for causal language modeling. A causal language model predicts the next token based on previous tokens.\n",
     "\n",
     "Create the model using the `from_preset` method:"
    ]
   },
   {
@@ -1001,8 +1001,8 @@
     "\n",
     "* Learn how to [generate text with a Gemma model](https://ai.google.dev/gemma/docs/get_started).\n",
     "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/distributed_tuning).\n",
-    "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma){:.external}.\n",
-    "* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb){:.external}."
+    "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma).\n",
+    "* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
    ]
   }
 ],
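For orientation, the markdown cells touched by this patch describe loading `GemmaCausalLM` with `from_preset` and then LoRA-tuning it on the Databricks Dolly 15k data; the patch itself only edits link markup, so the notebook's code cells do not appear in the diff. The sketch below is a minimal illustration of that flow using the public KerasNLP API, assuming Keras 3 and keras-nlp are installed and Kaggle credentials are configured. The preset name `gemma_2b_en`, the LoRA rank, the optimizer settings, and the `data` variable are illustrative assumptions, not code taken from the notebook.

```python
import os

# Keras 3 needs a backend selected before keras is imported;
# any installed backend works: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_nlp

# Kaggle credentials are required to download the Gemma preset weights.
# os.environ["KAGGLE_USERNAME"] = "..."
# os.environ["KAGGLE_KEY"] = "..."

# Load an end-to-end Gemma causal language model from a preset
# (preset name is an assumption; check Kaggle for the one you have access to).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.summary()

# Enable LoRA on the backbone: the base weights are frozen and small
# low-rank adapter weights become the only trainable parameters.
gemma_lm.backbone.enable_lora(rank=4)

# Keep sequences short to limit memory use during fine-tuning.
gemma_lm.preprocessor.sequence_length = 256

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# `data` would be a list of instruction/response strings built from the
# Dolly 15k dataset (hypothetical variable, not defined in this sketch).
# gemma_lm.fit(data, epochs=1, batch_size=1)

# Generate from the (tuned) model.
print(gemma_lm.generate("What should I do on a trip to Europe?", max_length=64))
```

Because `enable_lora` freezes the base weights and trains only the small adapter matrices, the tuned checkpoint stays on the order of a few hundred megabytes, which is the memory and storage saving the notebook text describes.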