Add additional token calculation information to the Limit Azure OpenAI API token usage API Management policy #123618

Open · wants to merge 1 commit into base: main
articles/api-management/azure-openai-token-limit-policy.md (2 changes: 1 addition, 1 deletion)
@@ -54,7 +54,7 @@ For more information, see [Azure OpenAI Service models](../ai-services/openai/co
| Attribute | Description | Required | Default |
| -------------- | ----------------------------------------------------------------------------------------------------- | -------- | ------- |
| counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed.| Yes | N/A |
| tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Yes | N/A |
| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance. <br> - `false`: don't estimate prompt tokens. | Yes | N/A |
| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br> _When this is set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the model's response. This could result in prompts that exceed the token limit being sent to the model. In such a case, the overage is detected in the response, and all succeeding requests are blocked by the policy until the token limit frees up again._ | Yes | N/A |
| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A |
| remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A |
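For context, here is a minimal sketch of how these attributes might be combined in a policy's inbound section. The attribute names come from the table above; the specific values (the subscription-based counter key, the 5000 tokens-per-minute limit, and the header name `x-remaining-tokens`) are illustrative assumptions, not part of this change:

```xml
<policies>
    <inbound>
        <!-- One counter per subscription; estimate-prompt-tokens="false" means
             remaining tokens are reconciled from actual usage reported in the
             model's response, as described in the updated table row. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="false"
            remaining-tokens-header-name="x-remaining-tokens"
            retry-after-header-name="Retry-After" />
        <base />
    </inbound>
</policies>
```

With this configuration, a prompt may slip through even when it would exhaust the counter; the policy then blocks succeeding requests until the per-minute window frees up again.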