Merge pull request #163 from MicrosoftDocs/main
9/6 11:00 AM IST Publish
Saisang authored Sep 6, 2024
2 parents 214d392 + 6963d41 commit 2a1509e
Showing 10 changed files with 32 additions and 24 deletions.
@@ -6,7 +6,7 @@ author: mrbullwinkle
ms.author: mbullwin
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 02/16/2024
ms.date: 09/05/2024
manager: nitinme
keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought
zone_pivot_groups: openai-prompt
3 changes: 2 additions & 1 deletion articles/ai-services/openai/concepts/models.md
@@ -313,8 +313,9 @@ These models can only be used with Embedding API requests.
| `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
| `gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
| `gpt-4o-mini` <sup>**1**</sup> (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
| `gpt-4o` <sup>**1**</sup> (2024-08-06) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |

**<sup>1</sup>** GPT-4 and GPT-4o mini fine-tuning is currently in public preview. See our [GPT-4 & GPT-4o mini fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuning---public-preview) for more information.
**<sup>1</sup>** GPT-4, GPT-4o, and GPT-4o mini fine-tuning is currently in public preview. See our [GPT-4, GPT-4o, & GPT-4o mini fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuning---public-preview) for more information.

### Whisper models

5 changes: 1 addition & 4 deletions articles/ai-services/openai/how-to/fine-tuning-functions.md
@@ -5,7 +5,7 @@ description: Learn how to improve function calling performance with Azure OpenAI
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
ms.date: 02/05/2024
ms.date: 09/05/2024
author: mrbullwinkle
ms.author: mbullwin
---
@@ -18,9 +18,6 @@ Models that use the chat completions API support [function calling](../how-to/fu
* Get similarly formatted responses even when the full function definition isn't present, potentially saving money on prompt tokens.
* Get more accurate and consistent outputs.

> [!IMPORTANT]
> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. However, the fine-tuning API currently requires use of the legacy parameters.
## Constructing a training file

When constructing a training file of function calling examples, you would take a function definition like this:
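The example itself is collapsed in this view. For orientation, a function definition of the kind this sentence refers to might look like the following sketch (the weather function is illustrative, not taken from this commit):

```json
{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, for example: San Francisco, CA"
            },
            "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
        },
        "required": ["location"]
    }
}
```

In a JSONL training file, a definition like this appears in each example's legacy `functions` array alongside its `messages`, consistent with the deprecation note above, since the fine-tuning API still expects the legacy parameters.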
2 changes: 1 addition & 1 deletion articles/ai-services/openai/how-to/fine-tuning.md
@@ -7,7 +7,7 @@ manager: nitinme
ms.service: azure-ai-openai
ms.custom: build-2023, build-2023-dataai, devx-track-python
ms.topic: how-to
ms.date: 08/22/2024
ms.date: 09/05/2024
author: mrbullwinkle
ms.author: mbullwin
zone_pivot_groups: openai-fine-tuning-new
@@ -32,13 +32,14 @@ The following models support fine-tuning:
- `gpt-35-turbo` (1106)
- `gpt-35-turbo` (0125)
- `gpt-4` (0613)**<sup>*</sup>**
- `gpt-4o` (2024-08-06)**<sup>*</sup>**
- `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**

**<sup>*</sup>** Fine-tuning for this model is currently in public preview.

Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.

If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).
Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.

## Review the workflow for Azure AI Studio

@@ -253,7 +254,7 @@ When each training epoch completes a checkpoint is generated. A checkpoint is a

:::image type="content" source="../media/fine-tuning/checkpoints.png" alt-text="Screenshot of checkpoints UI." lightbox="../media/fine-tuning/checkpoints.png":::

## Safety evaluation GPT-4 fine-tuning - public preview
## Safety evaluation GPT-4, GPT-4o, and GPT-4o-mini fine-tuning - public preview

[!INCLUDE [Safety evaluation](../includes/safety-evaluation.md)]

8 changes: 4 additions & 4 deletions articles/ai-services/openai/includes/fine-tuning-python.md
@@ -32,13 +32,12 @@ The following models support fine-tuning:
- `gpt-35-turbo` (1106)
- `gpt-35-turbo` (0125)
- `gpt-4` (0613)**<sup>*</sup>**
- `gpt-4o` (2024-08-06)**<sup>*</sup>**
- `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**

**<sup>*</sup>** Fine-tuning for this model is currently in public preview.

If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview)

Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.

:::image type="content" source="../media/fine-tuning/models.png" alt-text="Screenshot of model options with a custom fine-tuned model." lightbox="../media/fine-tuning/models.png":::

@@ -287,6 +286,7 @@ The current supported hyperparameters for fine-tuning are:
|`batch_size` |integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
| `learning_rate_multiplier` | number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
|`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
|`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |

To set custom hyperparameters with the 1.x version of the OpenAI Python API:
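The block itself is collapsed in this view; a minimal sketch of the 1.x call shape (the endpoint, key, file ID, and hyperparameter values below are placeholders, not from this commit):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",  # placeholder endpoint
    api_key="<your-api-key>",                                      # placeholder key
    api_version="2024-05-01-preview",
)

# Hyperparameters from the table above are passed when the job is created;
# any you omit fall back to model-specific defaults.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",          # placeholder ID from a prior file upload
    model="gpt-4o-mini-2024-07-18",
    hyperparameters={
        "n_epochs": 2,                    # full passes through the training set
        "batch_size": 1,                  # examples per forward/backward pass
        "learning_rate_multiplier": 0.1,  # scales the pretraining learning rate
    },
    seed=105,                             # reproducibility; generated if omitted
)
print(job.id, job.status)
```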

@@ -374,7 +374,7 @@ This command isn't available in the 0.28.1 OpenAI Python library. Upgrade to the

---

## Safety evaluation GPT-4 fine-tuning - public preview
## Safety evaluation GPT-4, GPT-4o, and GPT-4o-mini fine-tuning - public preview

[!INCLUDE [Safety evaluation](../includes/safety-evaluation.md)]

10 changes: 8 additions & 2 deletions articles/ai-services/openai/includes/fine-tuning-rest.md
@@ -31,13 +31,16 @@ The following models support fine-tuning:
- `gpt-35-turbo` (1106)
- `gpt-35-turbo` (0125)
- `gpt-4` (0613)**<sup>*</sup>**
- `gpt-4o` (2024-08-06)**<sup>*</sup>**
- `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**

**<sup>*</sup>** Fine-tuning for this model is currently in public preview.

Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.

Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.

If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).


## Review the workflow for the REST API

@@ -153,6 +156,8 @@ You can create a custom model from one of the following available base models:
- `gpt-35-turbo` (1106)
- `gpt-35-turbo` (0125)
- `gpt-4` (0613)
- `gpt-4o` (2024-08-06)
- `gpt-4o-mini` (2024-07-18)

Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
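For example, with a hypothetical job ID of `0a1b2c3d`, a model fine-tuned from `gpt-35-turbo` (0125) would take a name along the lines of `gpt-35-turbo-0125.ft-0a1b2c3d`.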

Expand Down Expand Up @@ -216,6 +221,7 @@ The current supported hyperparameters for fine-tuning are:
|`batch_size` |integer | The batch size to use for training. The batch size is the number of training examples used to train a single forward and backward pass. In general, we've found that larger batch sizes tend to work better for larger datasets. The default value as well as the maximum value for this property are specific to a base model. A larger batch size means that model parameters are updated less frequently, but with lower variance. |
| `learning_rate_multiplier` | number | The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pre-training multiplied by this value. Larger learning rates tend to perform better with larger batch sizes. We recommend experimenting with values in the range 0.02 to 0.2 to see what produces the best results. A smaller learning rate can be useful to avoid overfitting. |
|`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
|`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |
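
As a sketch, these settings travel in the JSON body of the job-creation request; the IDs and values below are placeholders, and the field placement follows the OpenAI-style job object, so verify it against the API version you target:

```json
{
    "model": "gpt-4o-mini-2024-07-18",
    "training_file": "file-abc123",
    "hyperparameters": {
        "n_epochs": 2,
        "batch_size": 1,
        "learning_rate_multiplier": 0.1
    },
    "seed": 105
}
```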

## Check the status of your customized model

Expand Down Expand Up @@ -248,7 +254,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}
-H "api-key: $AZURE_OPENAI_API_KEY"
```

## Safety evaluation GPT-4 fine-tuning - public preview
## Safety evaluation GPT-4, GPT-4o, and GPT-4o-mini fine-tuning - public preview

[!INCLUDE [Safety evaluation](../includes/safety-evaluation.md)]

7 changes: 5 additions & 2 deletions articles/ai-services/openai/includes/fine-tuning-studio.md
@@ -30,13 +30,16 @@ The following models support fine-tuning:
- `gpt-35-turbo` (1106)
- `gpt-35-turbo` (0125)
- `gpt-4` (0613)**<sup>*</sup>**
- `gpt-4o` (2024-08-06)**<sup>*</sup>**
- `gpt-4o-mini` (2024-07-18)**<sup>*</sup>**

**<sup>*</sup>** Fine-tuning for this model is currently in public preview.

Or you can fine-tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.


Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.

If you plan to use `gpt-4` for fine-tuning, please refer to the [GPT-4 public preview safety evaluation guidance](#safety-evaluation-gpt-4-fine-tuning---public-preview).

## Review the workflow for Azure OpenAI Studio

@@ -322,7 +325,7 @@ Here are some of the tasks you can do on the **Models** pane:
When each training epoch completes, a checkpoint is generated. A checkpoint is a fully functional version of a model that can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they provide a snapshot of your model before overfitting has occurred. When a fine-tuning job completes, you will have the three most recent versions of the model available to deploy.
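
Checkpoints surface in the Studio UI shown here, but they can also be read programmatically. A hedged sketch with the `openai` 1.x Python client (the endpoint, key, and job ID below are placeholders, not from this commit):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",  # placeholder endpoint
    api_key="<your-api-key>",                                      # placeholder key
    api_version="2024-05-01-preview",
)

# Each checkpoint is a deployable snapshot of the model at the end of an epoch,
# and can also serve as the target model for a subsequent fine-tuning job.
checkpoints = client.fine_tuning.jobs.checkpoints.list("ftjob-abc123")  # placeholder job ID
for checkpoint in checkpoints:
    print(checkpoint.step_number, checkpoint.fine_tuned_model_checkpoint)
```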


## Safety evaluation GPT-4 fine-tuning - public preview
## Safety evaluation GPT-4, GPT-4o, and GPT-4o-mini fine-tuning - public preview

[!INCLUDE [Safety evaluation](../includes/safety-evaluation.md)]

8 changes: 4 additions & 4 deletions articles/ai-services/openai/includes/safety-evaluation.md
@@ -1,7 +1,7 @@
---
title: 'Safety evaluation - GPT-4 fine tuning only public preview'
title: 'Safety evaluation - GPT-4o, GPT-4o-mini, & GPT-4 fine tuning only public preview'
titleSuffix: Azure OpenAI
description: Learn about how to perform fine-tuning with GPT-4 models.
description: Learn about how to perform fine-tuning with GPT-4o, GPT-4o-mini, and GPT-4 models.
manager: nitinme
ms.service: azure-ai-openai
ms.topic: include
@@ -10,13 +10,13 @@ author: mrbullwinkle
ms.author: mbullwin
---

GPT-4 and GPT-4o-mini are our most advanced models that can be fine-tuned to your needs. As with Azure OpenAI models generally, the advanced capabilities of fine-tuned models come with increased responsible AI challenges related to harmful content, manipulation, human-like behavior, privacy issues, and more. Learn more about risks, capabilities, and limitations in the [Overview of Responsible AI practices](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext) and [Transparency Note](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext&tabs=text). To help mitigate the risks associated with GPT-4 and GPT-4o-mini fine-tuned models, we have implemented additional evaluation steps to help detect and prevent harmful content in the training and outputs of fine-tuned models. These steps are grounded in the [Microsoft Responsible AI Standard](https://www.microsoft.com/ai/responsible-ai) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new).
GPT-4o, GPT-4o-mini, and GPT-4 are our most advanced models that can be fine-tuned to your needs. As with Azure OpenAI models generally, the advanced capabilities of fine-tuned models come with increased responsible AI challenges related to harmful content, manipulation, human-like behavior, privacy issues, and more. Learn more about risks, capabilities, and limitations in the [Overview of Responsible AI practices](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext) and [Transparency Note](/legal/cognitive-services/openai/transparency-note?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext&tabs=text). To help mitigate the risks associated with advanced fine-tuned models, we have implemented additional evaluation steps to help detect and prevent harmful content in the training and outputs of fine-tuned models. These steps are grounded in the [Microsoft Responsible AI Standard](https://www.microsoft.com/ai/responsible-ai) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new).

- Evaluations are conducted in dedicated, customer-specific, private workspaces;
- Evaluation endpoints are in the same geography as the Azure OpenAI resource;
- Training data is not stored in connection with performing evaluations; only the final model assessment (deployable or not deployable) is persisted; and

GPT-4 and GPT-4o-mini fine-tuned model evaluation filters are set to predefined thresholds and cannot be modified by customers; they aren't tied to any custom content filtering configuration you may have created.
GPT-4o, GPT-4o-mini, and GPT-4 fine-tuned model evaluation filters are set to predefined thresholds and cannot be modified by customers; they aren't tied to any custom content filtering configuration you may have created.

### Data evaluation

4 changes: 2 additions & 2 deletions articles/ai-services/openai/supported-languages.md
@@ -7,7 +7,7 @@ manager: nitinme
ms.service: azure-ai-openai
ms.custom:
ms.topic: conceptual
ms.date: 12/18/2023
ms.date: 09/05/2024
ms.author: mbullwin
---

@@ -19,7 +19,7 @@ Azure OpenAI supports the following programming languages.

| Language | Source code | Package | Examples |
|------------|---------|-----|-------|
| C# | [Source code](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/src) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.OpenAI/) | [C# examples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples) |
| C# | [Source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.OpenAI/) | [C# examples](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples) |
| Go | [Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/ai/azopenai) | [Package (Go)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai)| [Go examples](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai#pkg-examples) |
| Java | [Source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/openai/azure-ai-openai) | [Artifact (Maven)](https://central.sonatype.com/artifact/com.azure/azure-ai-openai/) | [Java examples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/openai/azure-ai-openai/src/samples) |
| JavaScript | [Source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai) | [Package (npm)](https://www.npmjs.com/package/@azure/openai) | [JavaScript examples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai/samples/) |
