
Azure API Management policy reference - azure-openai-token-limit

APPLIES TO: Developer | Basic | Basic v2 | Standard | Standard v2 | Premium | Premium v2

The azure-openai-token-limit policy prevents Azure OpenAI in Foundry Models API usage spikes on a per-key basis by limiting consumption of language model tokens to a specified rate (number per minute), a quota over a specified period, or both. When a specified token rate limit is exceeded, the caller receives a 429 Too Many Requests response status code. When a specified quota is exceeded, the caller receives a 403 Forbidden response status code.

By relying on token usage metrics returned from the OpenAI endpoint, the policy can accurately monitor and enforce limits in real time. The policy also enables precalculation of prompt tokens by API Management, minimizing unnecessary requests to the OpenAI backend if the limit is already exceeded.
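
Both controls can be combined in a single policy definition. The following sketch (the limit values are illustrative) enforces a per-minute rate limit and a monthly quota for each subscription, with prompt-token estimation enabled so that over-limit requests can be rejected before reaching the backend:

<policies>
    <inbound>
        <base />
        <!-- Count tokens per subscription: reject with 429 above 5,000 tokens per minute,
             and with 403 above 100,000 tokens in the current calendar month (UTC). -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            token-quota="100000"
            token-quota-period="Monthly"
            estimate-prompt-tokens="true" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>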

Supported Azure OpenAI in Foundry Models models

The policy is used with APIs added to API Management from Azure OpenAI in Foundry Models of the following types:

Chat completion: gpt-3.5, gpt-4, gpt-4o, gpt-4o-mini, o1, o3

Embeddings: text-embedding-3-large, text-embedding-3-small, text-embedding-ada-002

Responses (preview): gpt-4o (versions 2024-11-20, 2024-08-06, 2024-05-13), gpt-4o-mini (version 2024-07-18), gpt-4.1 (version 2025-04-14), gpt-4.1-nano (version 2025-04-14), gpt-4.1-mini (version 2025-04-14), gpt-image-1 (version 2025-04-15), o3 (version 2025-04-16), o4-mini (version 2025-04-16)

Note

Traditional completion APIs are only available with legacy model versions and support is limited.

For current information about the models and their capabilities, see Azure OpenAI in Foundry Models.

Policy statement
<azure-openai-token-limit counter-key="key value"
        tokens-per-minute="number"
        token-quota="number"
        token-quota-period="Hourly | Daily | Weekly | Monthly | Yearly"
        estimate-prompt-tokens="true | false"    
        retry-after-header-name="custom header name, replaces default 'Retry-After'" 
        retry-after-variable-name="policy expression variable name"
        remaining-quota-tokens-header-name="header name"  
        remaining-quota-tokens-variable-name="policy expression variable name"
        remaining-tokens-header-name="header name"  
        remaining-tokens-variable-name="policy expression variable name"
        tokens-consumed-header-name="header name"
        tokens-consumed-variable-name="policy expression variable name" />
Attributes

counter-key: The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. Required: yes. Default: N/A.

tokens-per-minute: The maximum number of tokens consumed by prompt and completion per minute. Required: either a rate limit (tokens-per-minute), a quota (token-quota over a token-quota-period), or both must be specified. Default: N/A.

token-quota: The maximum number of tokens allowed during the time interval specified in token-quota-period. Policy expressions aren't allowed. Required: either a rate limit (tokens-per-minute), a quota (token-quota over a token-quota-period), or both must be specified. Default: N/A.

token-quota-period: The length of the fixed window after which the token-quota resets; the value must be one of Hourly, Daily, Weekly, Monthly, or Yearly. The start time of a quota period is the UTC timestamp truncated to the unit (hour, day, and so on) used for the period. Required: either a rate limit (tokens-per-minute), a quota (token-quota over a token-quota-period), or both must be specified. Default: N/A.

estimate-prompt-tokens: Boolean value that determines whether to estimate the number of tokens required for a prompt:
- true: estimate the number of tokens based on the prompt schema in the API; may reduce performance.
- false: don't estimate prompt tokens.

When set to false, the remaining tokens per counter-key are calculated using the actual token usage from the model's response. This could result in prompts that exceed the token limit being sent to the model. In that case, the overage is detected in the response, and all subsequent requests are blocked by the policy until the token limit frees up again. Required: yes. Default: N/A.

retry-after-header-name: The name of a custom response header whose value is the recommended retry interval in seconds after the specified tokens-per-minute or token-quota is exceeded. Policy expressions aren't allowed. Required: no. Default: Retry-After.

retry-after-variable-name: The name of a variable that stores the recommended retry interval in seconds after the specified tokens-per-minute or token-quota is exceeded. Policy expressions aren't allowed. Required: no. Default: N/A.

remaining-quota-tokens-header-name: The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to the token-quota allowed for the token-quota-period. Policy expressions aren't allowed. Required: no. Default: N/A.

remaining-quota-tokens-variable-name: The name of a variable that after each policy execution stores the number of remaining tokens corresponding to the token-quota allowed for the token-quota-period. Policy expressions aren't allowed. Required: no. Default: N/A.

remaining-tokens-header-name: The name of a response header whose value after each policy execution is the number of remaining tokens corresponding to tokens-per-minute allowed for the time interval. Policy expressions aren't allowed. Required: no. Default: N/A.

remaining-tokens-variable-name: The name of a variable that after each policy execution stores the number of remaining tokens corresponding to tokens-per-minute allowed for the time interval. Policy expressions aren't allowed. Required: no. Default: N/A.

tokens-consumed-header-name: The name of a response header whose value is the number of tokens consumed by both prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed. Required: no. Default: N/A.

tokens-consumed-variable-name: The name of a variable initialized to the estimated number of tokens in the prompt in the backend section of the pipeline if estimate-prompt-tokens is true, and zero otherwise. The variable is updated with the reported count upon receiving the response in the outbound section. Required: no. Default: N/A.

Usage

Usage notes

Examples

Token rate limit

In the following example, the token rate limit of 5000 per minute is keyed by the caller IP address. The policy doesn't estimate the number of tokens required for a prompt. After each policy execution, the remaining tokens allowed for that caller IP address in the time period are stored in the variable remainingTokens.

<policies>
    <inbound>
        <base />
        <azure-openai-token-limit
            counter-key="@(context.Request.IpAddress)"
            tokens-per-minute="5000" estimate-prompt-tokens="false" remaining-tokens-variable-name="remainingTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
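
The variable can then be read in the outbound section, for example to return the remaining budget to the caller. The following fragment is a minimal sketch: the header name x-remaining-tokens is an arbitrary choice, and it assumes the policy has populated remainingTokens by the time the response is processed.

<outbound>
    <base />
    <!-- Illustrative: expose the remaining per-minute tokens as a response header. -->
    <set-header name="x-remaining-tokens" exists-action="override">
        <value>@(context.Variables["remainingTokens"].ToString())</value>
    </set-header>
</outbound>

When no transformation of the value is needed, the remaining-tokens-header-name attribute achieves the same result without extra policy logic.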
Token quota

In the following example, the token quota of 100000 is keyed by the subscription ID and resets monthly. After each policy execution, the number of remaining tokens allowed for that subscription ID in the time period is stored in the variable remainingQuotaTokens.

<policies>
    <inbound>
        <base />
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            token-quota="100000" token-quota-period="Monthly" remaining-quota-tokens-variable-name="remainingQuotaTokens" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>
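
Limit headers

The header-related attributes can surface limit state to callers without extra policy logic. In the following sketch (the custom header names are illustrative choices), responses carry the recommended retry interval when the limit is exceeded, the remaining per-minute tokens, and the tokens consumed by each request:

<policies>
    <inbound>
        <base />
        <!-- Rate-limit per subscription and report limit state in custom response headers. -->
        <azure-openai-token-limit
            counter-key="@(context.Subscription.Id)"
            tokens-per-minute="5000"
            estimate-prompt-tokens="false"
            retry-after-header-name="x-retry-after"
            remaining-tokens-header-name="x-remaining-tokens"
            tokens-consumed-header-name="x-tokens-consumed" />
    </inbound>
    <outbound>
        <base />
    </outbound>
</policies>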

For more information about working with policies, see the Azure API Management policy reference.

