From 4c9b755230ffaeca34423f3af2ce85ed27b49627 Mon Sep 17 00:00:00 2001
From: Robin-Manuel Thiel
Date: Wed, 3 Jul 2024 14:52:45 +0200
Subject: [PATCH 1/2] Add additional token calculation information to the
 Limit Azure OpenAI API token usage API Management policy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

How does the new APIM policy “Limit Azure OpenAI API token usage”
`estimate-prompt-tokens` feature work? How does it estimate the tokens? Only
for the prompt, or also for the response? What changes when I activate or
deactivate it? The documentation was way too minimal on this.

This change adds some additional information about the impact of setting
`estimate-prompt-tokens` to `false`. This is important because it might cause
the possibly unexpected behavior of sending requests to the LLM backend that
should have been filtered out by the policy.

Please have the Product Team check this for correctness.
---
 articles/api-management/azure-openai-token-limit-policy.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/api-management/azure-openai-token-limit-policy.md b/articles/api-management/azure-openai-token-limit-policy.md
index 58bce798ff52c..bef9ccf973c91 100644
--- a/articles/api-management/azure-openai-token-limit-policy.md
+++ b/articles/api-management/azure-openai-token-limit-policy.md
@@ -54,7 +54,7 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
 | -------------- | ----------------------------------------------------------------------------------------------------- | -------- | ------- |
 | counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed.| Yes | N/A |
 | tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Yes | N/A |
-| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt:<br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance.<br> - `false`: don't estimate prompt tokens. | Yes | N/A |
+| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt:<br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance.<br> - `false`: don't estimate prompt tokens.<br><br>_When setting this to `false`, the remaing tokens per `counter-key` will be calculated using the actual token-usage from the response of the model. This could result in prompts being sent to the model, that exceed the token limit. In such case, this will be detected in the response result in all succeeding requests being blocked by the policy, until the token limit frees up again._ | Yes | N/A |
 | retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
 | retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A |

From b218d1827dac2e42945c34af5fc48b421efdaa90 Mon Sep 17 00:00:00 2001
From: Courtney Wales <62625502+Court72@users.noreply.github.com>
Date: Wed, 31 Jul 2024 08:51:14 -0600
Subject: [PATCH 2/2] apply suggestions from PR review

Co-authored-by: Dan Lepow
---
 articles/api-management/azure-openai-token-limit-policy.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/api-management/azure-openai-token-limit-policy.md b/articles/api-management/azure-openai-token-limit-policy.md
index bef9ccf973c91..71b01b654dfaa 100644
--- a/articles/api-management/azure-openai-token-limit-policy.md
+++ b/articles/api-management/azure-openai-token-limit-policy.md
@@ -54,7 +54,7 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
 | -------------- | ----------------------------------------------------------------------------------------------------- | -------- | ------- |
 | counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed.| Yes | N/A |
 | tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Yes | N/A |
-| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt:<br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance.<br> - `false`: don't estimate prompt tokens.<br><br>_When setting this to `false`, the remaing tokens per `counter-key` will be calculated using the actual token-usage from the response of the model. This could result in prompts being sent to the model, that exceed the token limit. In such case, this will be detected in the response result in all succeeding requests being blocked by the policy, until the token limit frees up again._ | Yes | N/A |
+| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt:<br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance.<br> - `false`: don't estimate prompt tokens.<br><br>When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the response of the model. This could result in prompts being sent to the model that exceed the token limit. In such a case, this will be detected in the response, and all succeeding requests will be blocked by the policy until the token limit frees up again. | Yes | N/A |
 | retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
 | retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A |
 | remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A |
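For context, a minimal sketch of where the attribute discussed in this PR lands in a policy definition. The `azure-openai-token-limit` element and its attribute names come from the attributes table edited above; the counter-key expression, limit value, and custom header name are illustrative placeholders, not values taken from this PR:

```xml
<policies>
    <inbound>
        <base />
        <!-- Cap combined prompt + completion usage at 5000 tokens per minute,
             with one counter per subscription ID (the counter-key). -->
        <!-- With estimate-prompt-tokens="false", the counter is updated from
             the token usage the model reports in its response. A single
             prompt that exceeds the limit can therefore still reach the
             backend; succeeding requests are then blocked until the
             per-minute window frees up. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="false"
            remaining-tokens-header-name="x-remaining-tokens" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```

If requests must never reach the model once the budget is spent, setting `estimate-prompt-tokens="true"` is the safer choice, at the cost of the estimation overhead noted in the table.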