Preview
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
This page explains the key concepts and features of Google Kubernetes Engine (GKE) Inference Gateway, an extension to the GKE Gateway for optimized serving of generative AI applications.
This page assumes that you know about the following:
This page is intended for the following personas:
GKE Inference Gateway is an extension to the GKE Gateway that provides optimized routing and load balancing for serving generative Artificial Intelligence (AI) workloads. It simplifies the deployment, management, and observability of AI inference workloads.
To choose the optimal load balancing strategy for your AI/ML workloads, see Choose a load balancing strategy for AI inference on GKE.
Features and benefits
GKE Inference Gateway provides the following key capabilities for efficiently serving generative AI models in your applications on GKE:
Optimized load balancing for inference: distributes requests by using model server metrics, such as KV cache hits (the number of successful lookups in the key-value (KV) cache) and the queue length of pending requests, to consume accelerators (such as GPUs and TPUs) more efficiently for generative AI workloads.
Model-aware routing: routes inference requests based on the model names defined in the OpenAI API specifications within your GKE cluster. You can define Gateway routing policies, such as traffic splitting and request mirroring, to manage different model versions and simplify model rollouts. For example, you can route requests for a specific model name to different InferencePool objects, each serving a different version of the model.
Criticality: lets you specify the serving Criticality of AI models. Prioritize latency-sensitive requests over latency-tolerant batch inference jobs. For example, you can prioritize requests from latency-sensitive applications and drop less time-sensitive tasks when resources are constrained.
GKE Inference Gateway enhances the existing GKE Gateway that uses GatewayClass objects. GKE Inference Gateway introduces the following new Gateway API Custom Resource Definitions (CRDs), aligned with the OSS Kubernetes Gateway API extension for Inference:
InferencePool object: represents a group of Pods (containers) that share the same compute configuration, accelerator type, base language model, and model server. This logically groups and manages your AI model serving resources. A single InferencePool object can span multiple Pods across different GKE nodes and provides scalability and high availability.
InferenceModel object: specifies the serving model's name from the InferencePool according to the OpenAI API specification. The InferenceModel object also specifies the model's serving properties, such as the AI model's Criticality. GKE Inference Gateway gives preference to workloads classified as Critical. This lets you multiplex latency-critical and latency-tolerant AI workloads on a GKE cluster. You can also configure the InferenceModel object to serve LoRA fine-tuned models.
TargetModel object: specifies the target model name and the InferencePool object that serves the model. This lets you define Gateway routing policies, such as traffic splitting and request mirroring, and simplify model version rollouts.
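To illustrate how these resources fit together, the following is a minimal sketch of an InferencePool and an InferenceModel. The manifests follow the open source Kubernetes Gateway API inference extension (inference.networking.x-k8s.io/v1alpha2); the exact API version, field names, and all resource names shown here (gemma3-pool, gemma3-epp, chatbot) are assumptions and might differ in your GKE Inference Gateway release.

```yaml
# Sketch only: a pool of model-server Pods plus one served model name.
# API group/version and field names follow the OSS Gateway API inference
# extension (v1alpha2) and might differ in GKE Inference Gateway.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: gemma3-pool                 # hypothetical pool name
spec:
  targetPortNumber: 8000            # port the model server listens on
  selector:
    app: gemma3-vllm                # Pods sharing the same model server, base model, accelerator
  extensionRef:
    name: gemma3-epp                # endpoint-picker extension for this pool
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: chatbot
spec:
  modelName: chatbot                # model name clients send in the OpenAI-style request body
  criticality: Critical             # Critical | Standard | Sheddable
  poolRef:
    name: gemma3-pool               # InferencePool that serves this model
```

In a sketch like this, several InferenceModel objects can reference the same InferencePool, which is how the LoRA adapter example later on this page packs multiple fine-tuned models onto a shared accelerator.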
The following diagram illustrates GKE Inference Gateway and its integration with AI safety, observability, and model serving within a GKE cluster.
Figure: GKE Inference Gateway resource model
The following diagram illustrates the resource model that focuses on two new inference-focused personas and the resources they manage.
Figure: GKE Inference Gateway resource model
How GKE Inference Gateway works
GKE Inference Gateway uses Gateway API extensions and model-specific routing logic to handle client requests to an AI model. The following steps describe the request flow.
How the request flow works
GKE Inference Gateway routes client requests from the initial request to a model instance. This section describes how GKE Inference Gateway handles requests. This request flow is common for all clients.
Body-based routing: GKE Inference Gateway extracts the model name from the request body and matches the request to the corresponding HTTPRoute object. Request body routing is similar to routing based on the URL path. The difference is that request body routing uses data from the request body.
Endpoint picking: GKE Inference Gateway monitors metrics from the model servers within the InferencePool. It tracks the key-value cache (KV-cache) utilization, queue length of pending requests, and active LoRA adapters on each model server. It then routes the request to the optimal model replica based on these metrics to minimize latency and maximize throughput for AI inference.
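As a sketch of how client traffic reaches an InferencePool, the following hypothetical HTTPRoute attaches to a Gateway (named inference-gateway here, an assumption) and uses an InferencePool as its backend; body-based routing and endpoint picking then select the model replica. The backend reference fields follow the open source inference extension and might differ in your release.

```yaml
# Sketch only: route traffic on the Gateway to an InferencePool backend.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
    - name: inference-gateway                 # hypothetical Gateway name
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - group: inference.networking.x-k8s.io   # non-core backend: an InferencePool
          kind: InferencePool
          name: gemma3-pool                      # placeholder pool name
```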
The following diagram illustrates the request flow from a client to a model instance through GKE Inference Gateway.
Figure: GKE Inference Gateway request flow
How traffic distribution works
GKE Inference Gateway dynamically distributes inference requests to model servers within the InferencePool object. This helps optimize resource utilization and maintains performance under varying load conditions. GKE Inference Gateway uses the following two mechanisms to manage traffic distribution:
Endpoint picking: dynamically selects the most suitable model server to handle an inference request. It monitors server load and availability, then makes routing decisions.
Queueing and shedding: manages request flow and prevents traffic overload. GKE Inference Gateway stores incoming requests in a queue, prioritizes requests based on defined criteria, and drops requests when the system is overloaded.
GKE Inference Gateway supports the following Criticality levels:
Critical: these workloads are prioritized. The system ensures that these requests are served even under resource constraints.
Standard: these workloads are served when resources are available. If resources are limited, these requests are dropped.
Sheddable: these workloads are served opportunistically. If resources are scarce, these requests are dropped to protect Critical workloads.
When the system is under resource pressure, Standard and Sheddable requests are immediately dropped with a 429 error code to safeguard Critical workloads.
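For example, a latency-tolerant batch workload can be marked Sheddable on its InferenceModel. The following is a hedged sketch; the field names follow the open source inference extension, and the model and pool names are placeholders.

```yaml
# Sketch only: a batch summarization model that may be shed under load.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: batch-summarizer
spec:
  modelName: batch-summarizer
  criticality: Sheddable      # dropped with HTTP 429 before Critical traffic is affected
  poolRef:
    name: gemma3-pool         # placeholder InferencePool name
```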
GKE Inference Gateway supports streaming inference for applications like chatbots and live translation that require continuous or near-real-time updates. Streaming inference delivers responses in incremental chunks or segments, rather than as a single, complete output. If an error occurs during a streaming response, the stream terminates, and the client receives an error message. GKE Inference Gateway does not retry streaming responses.
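For example, a client can opt in to streaming by setting the stream flag in an OpenAI-style chat completion request. The body is shown as YAML for consistency with the other sketches on this page; clients send it as JSON, and the model name is a placeholder.

```yaml
# Sketch only: OpenAI-style request body that asks for a streamed response.
model: chatbot          # placeholder model name defined by an InferenceModel
stream: true            # deliver the response in incremental chunks
messages:
  - role: user
    content: "Summarize today's support tickets."
```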
Explore application examples
This section provides examples that address various generative AI application scenarios by using GKE Inference Gateway.
Example 1: Serve multiple generative AI models on a GKE cluster
A company wants to deploy multiple large language models (LLMs) to serve different workloads. For example, they might want to deploy a Gemma3 model for a chatbot interface and a Deepseek model for a recommendation application. The company needs to ensure optimal serving performance for these LLMs.
Using GKE Inference Gateway, you can deploy these LLMs on your GKE cluster with your chosen accelerator configuration in an InferencePool. You can then route requests based on the model name (such as chatbot and recommender) and the Criticality property.
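A hedged sketch of this setup follows: one InferenceModel per served model name, each pointing at its own InferencePool and carrying a different Criticality. The API version, field names (open source inference extension, v1alpha2), and pool names are assumptions.

```yaml
# Sketch only: latency-sensitive chatbot and latency-tolerant recommender.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: chatbot
spec:
  modelName: chatbot
  criticality: Critical
  poolRef:
    name: gemma3-pool          # hypothetical InferencePool running the Gemma3 model server
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: recommender
spec:
  modelName: recommender
  criticality: Standard
  poolRef:
    name: deepseek-pool        # hypothetical InferencePool running the Deepseek model server
```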
The following diagram illustrates how GKE Inference Gateway routes requests to different models based on the model name and Criticality.
Example 2: Serve LoRA adapters on a shared accelerator
A company wants to serve LLMs for document analysis and focuses on audiences in multiple languages, such as English and Spanish. They have fine-tuned models for each language, but need to efficiently use their GPU and TPU capacity. You can use GKE Inference Gateway to deploy dynamic LoRA fine-tuned adapters for each language (for example, english-bot and spanish-bot) on a common base model (for example, llm-base) and accelerator. This lets you reduce the number of required accelerators by densely packing multiple models on a common accelerator.
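A hedged sketch of the LoRA setup follows: both adapter names map to the same InferencePool that serves the llm-base model server, and each modelName must match a LoRA adapter that the model server has actually loaded. Field names follow the open source inference extension and are assumptions for GKE.

```yaml
# Sketch only: two LoRA adapters multiplexed on one base-model pool.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: english-bot
spec:
  modelName: english-bot       # must match a LoRA adapter loaded on the model server
  criticality: Standard
  poolRef:
    name: llm-base-pool        # shared pool serving the llm-base model
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: spanish-bot
spec:
  modelName: spanish-bot       # second adapter on the same base model and accelerator
  criticality: Standard
  poolRef:
    name: llm-base-pool
```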
The following diagram illustrates how GKE Inference Gateway serves multiple LoRA adapters on a shared accelerator.
Figure: Serving LoRA adapters on a shared accelerator
What's next