To deploy the Llama 3.1 model to GKE, sign the license consent agreement and generate a Hugging Face access token. You must sign the consent agreement to use the Llama 3.1 model, and you need a Hugging Face token to access the model through Hugging Face. Generate a new token if you don't have one already.

In this tutorial, you use Cloud Shell to manage resources hosted on Google Cloud. Cloud Shell comes preinstalled with the software you need for this tutorial, including kubectl and the gcloud CLI.
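If you want to confirm that these tools are available in your session, you can run a quick version check (optional; the exact versions vary with the Cloud Shell image):

gcloud version
kubectl version --client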
To set up your environment with Cloud Shell, perform the following steps:
In the Google Cloud console, launch a Cloud Shell session by clicking Activate Cloud Shell. This launches a session in the bottom pane of the Google Cloud console.
Set the default environment variables:
gcloud config set project PROJECT_ID
gcloud config set billing/quota_project PROJECT_ID
export PROJECT_ID=$(gcloud config get project)
export REGION=REGION
export CLUSTER_NAME=CLUSTER_NAME
export HF_TOKEN=HF_TOKEN
Replace the following values:
PROJECT_ID: your Google Cloud project ID.
REGION: a region that supports the accelerator type you want to use, for example, us-central1 for the H100 GPU.
CLUSTER_NAME: the name of your cluster.
HF_TOKEN: the Hugging Face token you generated earlier.

To create the required resources, use these instructions.
Note: You might need to create a capacity reservation to use some accelerators. For more information about reserving and consuming reserved resources, see Consuming reserved zonal resources.

Create a GKE cluster and node pool

Serve LLMs on GPUs in a GKE Autopilot or Standard cluster. We recommend that you use an Autopilot cluster for a fully managed Kubernetes experience. To choose the GKE mode of operation that's the best fit for your workloads, see Choose a GKE mode of operation.

Autopilot

In Cloud Shell, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--project=PROJECT_ID \
--location=CONTROL_PLANE_LOCATION \
--release-channel=rapid
Replace the following values:
PROJECT_ID: your Google Cloud project ID.
CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster. Provide a region that supports the accelerator type you want to use, for example, us-central1 for the H100 GPU.
CLUSTER_NAME: the name of your cluster.

GKE creates an Autopilot cluster with CPU and GPU nodes as requested by the deployed workloads.

Standard

In Cloud Shell, run the following command to create a Standard cluster:
gcloud container clusters create CLUSTER_NAME \
--project=PROJECT_ID \
--location=CONTROL_PLANE_LOCATION \
--workload-pool=PROJECT_ID.svc.id.goog \
--release-channel=rapid \
--num-nodes=1 \
--enable-managed-prometheus \
--monitoring=SYSTEM,DCGM \
--gateway-api=standard
Replace the following values:
PROJECT_ID: your Google Cloud project ID.
CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster. Provide a region that supports the accelerator type you want to use, for example, us-central1 for the H100 GPU.
CLUSTER_NAME: the name of your cluster.

The cluster creation might take several minutes.
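Optionally, confirm that the cluster is ready before you continue; the following standard gcloud command prints RUNNING when cluster creation has finished:

gcloud container clusters describe CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --format="value(status)"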
To create a node pool with the appropriate disk size for running the Llama-3.1-8B-Instruct model, run the following command:
gcloud container node-pools create gpupool \
--accelerator type=nvidia-h100-80gb,count=2,gpu-driver-version=latest \
--project=PROJECT_ID \
--location=CONTROL_PLANE_LOCATION \
--node-locations=CONTROL_PLANE_LOCATION-a \
--cluster=CLUSTER_NAME \
--machine-type=a3-highgpu-2g \
--num-nodes=1 \
--disk-type="pd-standard"
GKE creates a single node pool with H100 GPUs.
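Optionally, inspect the node pool to confirm the machine type and accelerator configuration (look at config.machineType and config.accelerators in the output):

gcloud container node-pools describe gpupool \
    --cluster=CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION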
To set up authorization to scrape metrics, create the inference-gateway-sa-metrics-reader-secret
secret:
kubectl apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: inference-gateway-metrics-reader
rules:
- nonResourceURLs:
- /metrics
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: inference-gateway-sa-metrics-reader
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: inference-gateway-sa-metrics-reader-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: inference-gateway-sa-metrics-reader
namespace: default
roleRef:
kind: ClusterRole
name: inference-gateway-metrics-reader
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: inference-gateway-sa-metrics-reader-secret
namespace: default
annotations:
kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: inference-gateway-sa-metrics-reader-secret-read
rules:
- resources:
- secrets
apiGroups: [""]
verbs: ["get", "list", "watch"]
resourceNames: ["inference-gateway-sa-metrics-reader-secret"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gmp-system:collector:inference-gateway-sa-metrics-reader-secret-read
namespace: default
roleRef:
name: inference-gateway-sa-metrics-reader-secret-read
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- name: collector
namespace: gmp-system
kind: ServiceAccount
EOF
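Optionally, verify that the secret was created and that Kubernetes populated it with a service account token, which the metrics collector can use when scraping /metrics:

kubectl get secret inference-gateway-sa-metrics-reader-secret -n default
kubectl describe secret inference-gateway-sa-metrics-reader-secret -n default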
In Cloud Shell, do the following:
To communicate with your cluster, configure kubectl:
gcloud container clusters get-credentials CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION
Replace the following values:
CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster.
CLUSTER_NAME: the name of your cluster.

Create a Kubernetes Secret that contains the Hugging Face token:
kubectl create secret generic HF_SECRET \
--from-literal=token=HF_TOKEN \
--dry-run=client -o yaml | kubectl apply -f -
Replace the following:
HF_TOKEN: the Hugging Face token you generated earlier.
HF_SECRET: the name for your Kubernetes secret. For example, hf-secret.

Install the InferenceModel and InferencePool CRDs
In this section, you install the necessary Custom Resource Definitions (CRDs) for GKE Inference Gateway.
CRDs extend the Kubernetes API. This lets you define new resource types. To use GKE Inference Gateway, install the InferencePool
and InferenceModel
CRDs in your GKE cluster by running the following command:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/download/v0.3.0/manifests.yaml
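Optionally, confirm that both CRDs are installed before you continue. The full CRD names below follow from the inference.networking.x-k8s.io API group used by the manifests in this tutorial:

kubectl get crd inferencepools.inference.networking.x-k8s.io
kubectl get crd inferencemodels.inference.networking.x-k8s.io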
Deploy the model server
This example deploys a Llama 3.1
model using a vLLM model server. The deployment is labeled as app:vllm-llama3-8b-instruct
. This deployment also uses two LoRA adapters named food-review
and cad-fabricator
from Hugging Face. You can update this deployment with your own model server and model container, serving port, and deployment name. You can optionally configure LoRA adapters in the deployment, or deploy the base model.
To deploy on a nvidia-h100-80gb
accelerator type, save the following manifest as vllm-llama3-8b-instruct.yaml
. This manifest defines a Kubernetes Deployment with your model and model server:
apiVersion: apps/v1
kind: Deployment
metadata:
name: vllm-llama3-8b-instruct
spec:
replicas: 3
selector:
matchLabels:
app: vllm-llama3-8b-instruct
template:
metadata:
labels:
app: vllm-llama3-8b-instruct
spec:
containers:
- name: vllm
image: "vllm/vllm-openai:latest"
imagePullPolicy: Always
command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
args:
- "--model"
- "meta-llama/Llama-3.1-8B-Instruct"
- "--tensor-parallel-size"
- "1"
- "--port"
- "8000"
- "--enable-lora"
- "--max-loras"
- "2"
- "--max-cpu-loras"
- "12"
env:
        # Enabling LoRA support temporarily disables automatic v1; we want to force it on
        # until vLLM 0.8.3 is released.
- name: VLLM_USE_V1
value: "1"
- name: PORT
value: "8000"
- name: HUGGING_FACE_HUB_TOKEN
valueFrom:
secretKeyRef:
name: hf-token
key: token
- name: VLLM_ALLOW_RUNTIME_LORA_UPDATING
value: "true"
ports:
- containerPort: 8000
name: http
protocol: TCP
lifecycle:
preStop:
# vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
# to give upstream gateways a chance to take us out of rotation. The time we wait
# is dependent on the time it takes for all upstreams to completely remove us from
# rotation. Older or simpler load balancers might take upwards of 30s, but we expect
# our deployment to run behind a modern gateway like Envoy which is designed to
# probe for readiness aggressively.
sleep:
# Upstream gateway probers for health should be set on a low period, such as 5s,
# and the shorter we can tighten that bound the faster that we release
# accelerators during controlled shutdowns. However, we should expect variance,
# as load balancers may have internal delays, and we don't want to drop requests
# normally, so we're often aiming to set this value to a p99 propagation latency
# of readiness -> load balancer taking backend out of rotation, not the average.
#
# This value is generally stable and must often be experimentally determined on
# for a given load balancer and health check period. We set the value here to
# the highest value we observe on a supported load balancer, and we recommend
# tuning this value down and verifying no requests are dropped.
#
# If this value is updated, be sure to update terminationGracePeriodSeconds.
#
seconds: 30
#
# IMPORTANT: preStop.sleep is beta as of Kubernetes 1.30 - for older versions
# replace with this exec action.
#exec:
# command:
# - /usr/bin/sleep
# - 30
livenessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
# vLLM's health check is simple, so we can more aggressively probe it. Liveness
# check endpoints should always be suitable for aggressive probing.
periodSeconds: 1
successThreshold: 1
# vLLM has a very simple health implementation, which means that any failure is
# likely significant. However, any liveness triggered restart requires the very
# large core model to be reloaded, and so we should bias towards ensuring the
# server is definitely unhealthy vs immediately restarting. Use 5 attempts as
# evidence of a serious problem.
failureThreshold: 5
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
# vLLM's health check is simple, so we can more aggressively probe it. Readiness
# check endpoints should always be suitable for aggressive probing, but may be
          # slightly more expensive than liveness probes.
periodSeconds: 1
successThreshold: 1
# vLLM has a very simple health implementation, which means that any failure is
          # likely significant.
failureThreshold: 1
timeoutSeconds: 1
# We set a startup probe so that we don't begin directing traffic or checking
# liveness to this instance until the model is loaded.
startupProbe:
# Failure threshold is when we believe startup will not happen at all, and is set
# to the maximum possible time we believe loading a model will take. In our
# default configuration we are downloading a model from HuggingFace, which may
          # take a long time, then the model must load into the accelerator. With the
          # 1-second probe period below, a failure threshold of 3600 allows up to an
          # hour of startup time before giving up and attempting to restart the pod.
          #
          # IMPORTANT: If the core model takes longer than that to load, pods will crash
          # loop forever. Be sure to set this appropriately.
failureThreshold: 3600
# Set delay to start low so that if the base model changes to something smaller
# or an optimization is deployed, we don't wait unnecessarily.
initialDelaySeconds: 2
# As a startup probe, this stops running and so we can more aggressively probe
# even a moderately complex startup - this is a very important workload.
periodSeconds: 1
httpGet:
# vLLM does not start the OpenAI server (and hence make /health available)
# until models are loaded. This may not be true for all model servers.
path: /health
port: http
scheme: HTTP
resources:
limits:
nvidia.com/gpu: 1
requests:
nvidia.com/gpu: 1
volumeMounts:
- mountPath: /data
name: data
- mountPath: /dev/shm
name: shm
- name: adapters
mountPath: "/adapters"
initContainers:
- name: lora-adapter-syncer
tty: true
stdin: true
image: us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/lora-syncer:main
restartPolicy: Always
imagePullPolicy: Always
env:
- name: DYNAMIC_LORA_ROLLOUT_CONFIG
value: "/config/configmap.yaml"
volumeMounts: # DO NOT USE subPath, dynamic configmap updates don't work on subPaths
- name: config-volume
mountPath: /config
restartPolicy: Always
# vLLM allows VLLM_PORT to be specified as an environment variable, but a user might
# create a 'vllm' service in their namespace. That auto-injects VLLM_PORT in docker
# compatible form as `tcp://<IP>:<PORT>` instead of the numeric value vLLM accepts
# causing CrashLoopBackoff. Set service environment injection off by default.
enableServiceLinks: false
# Generally, the termination grace period needs to last longer than the slowest request
# we expect to serve plus any extra time spent waiting for load balancers to take the
# model server out of rotation.
#
# An easy starting point is the p99 or max request latency measured for your workload,
# although LLM request latencies vary significantly if clients send longer inputs or
# trigger longer outputs. Since steady state p99 will be higher than the latency
      # to drain a server, you may wish to slightly reduce this value either experimentally or
# via the calculation below.
#
# For most models you can derive an upper bound for the maximum drain latency as
# follows:
#
# 1. Identify the maximum context length the model was trained on, or the maximum
# allowed length of output tokens configured on vLLM (llama2-7b was trained to
# 4k context length, while llama3-8b was trained to 128k).
      # 2. Output tokens are more compute intensive to calculate, and the accelerator
# will have a maximum concurrency (batch size) - the time per output token at
# maximum batch with no prompt tokens being processed is the slowest an output
# token can be generated (for this model it would be about 100ms TPOT at a max
# batch size around 50)
# 3. Calculate the worst case request duration if a request starts immediately
# before the server stops accepting new connections - generally when it receives
# SIGTERM (for this model that is about 4096 / 10 ~ 40s)
      # 4. Any requests generating prompt tokens will delay when those output tokens
      # start, and prompt token generation is roughly 6x faster than compute-bound
      # output token generation, so add 20% to the time from above (40s + 16s ~ 55s)
#
# Thus we think it will take us at worst about 55s to complete the longest possible
# request the model is likely to receive at maximum concurrency (highest latency)
# once requests stop being sent.
#
# NOTE: This number will be lower than steady state p99 latency since we stop receiving
# new requests which require continuous prompt token computation.
# NOTE: The max timeout for backend connections from gateway to model servers should
# be configured based on steady state p99 latency, not drain p99 latency
#
      # 5. Add the time the pod takes in its preStop hook to give the load balancers time
      # to stop sending us new requests (55s + 30s ~ 85s)
#
# Because the termination grace period controls when the Kubelet forcibly terminates a
# stuck or hung process (a possibility due to a GPU crash), there is operational safety
# in keeping the value roughly proportional to the time to finish serving. There is also
# value in adding a bit of extra time to deal with unexpectedly long workloads.
#
# 6. Add a 50% safety buffer to this time since the operational impact should be low
# (85s * 1.5 ~ 130s)
#
# One additional source of drain latency is that some workloads may run close to
# saturation and have queued requests on each server. Since traffic in excess of the
# max sustainable QPS will result in timeouts as the queues grow, we assume that failure
# to drain in time due to excess queues at the time of shutdown is an expected failure
# mode of server overload. If your workload occasionally experiences high queue depths
# due to periodic traffic, consider increasing the safety margin above to account for
# time to drain queued requests.
terminationGracePeriodSeconds: 130
nodeSelector:
cloud.google.com/gke-accelerator: "nvidia-h100-80gb"
volumes:
- name: data
emptyDir: {}
- name: shm
emptyDir:
medium: Memory
- name: adapters
emptyDir: {}
- name: config-volume
configMap:
name: vllm-llama3-8b-adapters
---
apiVersion: v1
kind: ConfigMap
metadata:
name: vllm-llama3-8b-adapters
data:
configmap.yaml: |
vLLMLoRAConfig:
name: vllm-llama3.1-8b-instruct
port: 8000
defaultBaseModel: meta-llama/Llama-3.1-8B-Instruct
ensureExist:
models:
- id: food-review
source: Kawon/llama3.1-food-finetune_v14_r8
- id: cad-fabricator
source: redcathode/fabricator
---
kind: HealthCheckPolicy
apiVersion: networking.gke.io/v1
metadata:
name: health-check-policy
namespace: default
spec:
targetRef:
group: "inference.networking.x-k8s.io"
kind: InferencePool
name: vllm-llama3-8b-instruct
default:
config:
type: HTTP
httpHealthCheck:
requestPath: /health
port: 8000
Apply the manifest to your cluster:
kubectl apply -f vllm-llama3-8b-instruct.yaml
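The Pods can take several minutes to become ready, because each replica downloads the model from Hugging Face and loads it into the GPU (see the startup probe comments in the manifest). You can watch the rollout progress:

kubectl rollout status deployment/vllm-llama3-8b-instruct
kubectl get pods -l app=vllm-llama3-8b-instruct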
Create the InferencePool resource
The InferencePool
Kubernetes custom resource defines a group of Pods with a common base LLM and compute configuration.
The InferencePool
custom resource includes the following key fields:
selector: specifies which Pods belong to this pool. The labels in this selector must exactly match the labels applied to your model server Pods.
targetPort: defines the ports used by the model server within the Pods.

The InferencePool resource enables GKE Inference Gateway to route traffic to your model server Pods. The model server Pods that the InferencePool resource selects must already be running.
To create an InferencePool
using Helm, perform the following steps:
helm install vllm-llama3-8b-instruct \
--set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
--set provider.name=gke \
--set healthCheckPolicy.create=false \
--version v0.3.0 \
oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool
Change the following field to match your Deployment:
inferencePool.modelServers.matchLabels.app: the label value used to select your model server Pods.

This command creates an InferencePool object that logically represents your model server deployment and references the model endpoint services within the Pods that the Selector selects.
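Optionally, verify that the InferencePool exists. This assumes the chart names the InferencePool after the Helm release, vllm-llama3-8b-instruct; the fully qualified resource name avoids any ambiguity with short names:

kubectl get inferencepools.inference.networking.x-k8s.io vllm-llama3-8b-instruct -o yaml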
Create the InferenceModel resource with a serving criticality
The InferenceModel
Kubernetes custom resource defines a specific model, including LoRA-tuned models, and its serving criticality.
The InferenceModel
custom resource includes the following key fields:
modelName: specifies the name of the base model or LoRA adapter.
criticality: specifies the serving criticality of the model.
poolRef: references the InferencePool that the model is served on.

The InferenceModel resource enables GKE Inference Gateway to route traffic to your model server Pods based on the model name and criticality.
To create an InferenceModel
, perform the following steps:
Save the following sample manifest as inferencemodel.yaml
:
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
name: inferencemodel-sample
spec:
modelName: MODEL_NAME
criticality: CRITICALITY
poolRef:
name: INFERENCE_POOL_NAME
Replace the following:
MODEL_NAME: the name of your base model or LoRA adapter. For example, food-review.
CRITICALITY: the chosen serving criticality. Choose from Critical, Standard, or Sheddable. For example, Standard.
INFERENCE_POOL_NAME: the name of the InferencePool you created in the previous step. For example, vllm-llama3-8b-instruct.

Apply the sample manifest to your cluster:
kubectl apply -f inferencemodel.yaml
The following example creates an InferenceModel
object that configures the food-review
LoRA model on the vllm-llama3-8b-instruct
InferencePool
with a Standard
serving criticality. The InferenceModel
object also configures the base model to be served with a Critical
priority level.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
name: food-review
spec:
modelName: food-review
criticality: Standard
poolRef:
name: vllm-llama3-8b-instruct
targetModels:
- name: food-review
weight: 100
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
name: llama3-base-model
spec:
modelName: meta-llama/Llama-3.1-8B-Instruct
criticality: Critical
poolRef:
name: vllm-llama3-8b-instruct
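Optionally, list the InferenceModel objects to confirm that both the LoRA adapter entry and the base model entry exist:

kubectl get inferencemodels.inference.networking.x-k8s.io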
Create the Gateway
The Gateway resource acts as the entry point for external traffic into your Kubernetes cluster. It defines the listeners that accept incoming connections.
GKE Inference Gateway supports the gke-l7-rilb and gke-l7-regional-external-managed Gateway Classes. For more information, see the GKE documentation on Gateway Classes.

The gke-l7-regional-external-managed GatewayClass requires a proxy-only subnet. For more information, see Create the proxy-only subnet.
To create a Gateway, perform the following steps:
Save the following sample manifest as gateway.yaml
:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: GATEWAY_NAME
spec:
gatewayClassName: gke-l7-regional-external-managed
listeners:
- protocol: HTTP # Or HTTPS for production
port: 80 # Or 443 for HTTPS
name: http
Replace GATEWAY_NAME
with a unique name for your Gateway resource. For example, inference-gateway
.
Apply the manifest to your cluster:
kubectl apply -f gateway.yaml
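Provisioning the underlying load balancer can take a few minutes. You can wait for the Gateway to report the standard Gateway API Programmed condition, and then check the assigned address:

kubectl wait gateway/GATEWAY_NAME --for=condition=Programmed --timeout=10m
kubectl get gateway GATEWAY_NAME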
Create the HTTPRoute resource
In this section, you create an HTTPRoute
resource to define how the Gateway routes incoming HTTP requests to your InferencePool
.
The HTTPRoute resource defines how the GKE Gateway routes incoming HTTP requests to backend services, which is your InferencePool
. It specifies matching rules (for example, headers, or paths) and the backend to which traffic should be forwarded.
To create an HTTPRoute, perform the following steps:
Save the following sample manifest as httproute.yaml
:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: HTTPROUTE_NAME
spec:
parentRefs:
- name: GATEWAY_NAME
rules:
- matches:
- path:
type: PathPrefix
value: PATH_PREFIX
backendRefs:
- name: INFERENCE_POOL_NAME
group: inference.networking.x-k8s.io
kind: InferencePool
Replace the following:
HTTPROUTE_NAME: a unique name for your HTTPRoute resource. For example, my-route.
GATEWAY_NAME: the name of the Gateway resource that you created. For example, inference-gateway.
PATH_PREFIX: the path prefix that you use to match incoming requests. For example, / to match all requests.
INFERENCE_POOL_NAME: the name of the InferencePool resource that you want to route traffic to. For example, vllm-llama3-8b-instruct.

Apply the manifest to your cluster:
kubectl apply -f httproute.yaml
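Optionally, inspect the route's status to confirm that the Gateway accepted it; look for the Accepted and ResolvedRefs conditions under status.parents:

kubectl get httproute HTTPROUTE_NAME -o yaml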
After you have configured GKE Inference Gateway, you can send inference requests to your deployed model.
To send inference requests, perform the following steps:

Use curl to send the request to the /v1/completions endpoint. This lets you generate text based on your input prompt and specified parameters.
To get the Gateway endpoint, run the following command:
IP=$(kubectl get gateway/GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}')
PORT=PORT_NUMBER # Use 443 for HTTPS, or 80 for HTTP
Replace the following:
GATEWAY_NAME: the name of your Gateway resource.
PORT_NUMBER: the port number you configured in the Gateway.

To send a request to the /v1/completions endpoint using curl, run the following command:
curl -i -X POST https://${IP}:${PORT}/v1/completions \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d '{
      "model": "MODEL_NAME",
      "prompt": "PROMPT_TEXT",
      "max_tokens": MAX_TOKENS,
      "temperature": TEMPERATURE
    }'
Replace the following:
MODEL_NAME: the name of the model or LoRA adapter to use.
PROMPT_TEXT: the input prompt for the model.
MAX_TOKENS: the maximum number of tokens to generate in the response.
TEMPERATURE: controls the randomness of the output. Use the value 0 for deterministic output, or a higher number for more creative output.

Be aware of the following behaviors:
Request body: the request body can include additional parameters, such as stop and top_p. Refer to the OpenAI API specification for a complete list of options.
Error handling: handle potential errors in the curl response. A non-200 status code generally indicates an error.
Authentication: for production deployments, include the appropriate authentication headers (such as Authorization) in your requests.
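For example, a filled-in request that targets the food-review LoRA adapter deployed earlier might look like the following (the prompt and parameter values are illustrative, not prescriptive):

# Illustrative request; substitute your own prompt and parameters.
curl -i -X POST https://${IP}:${PORT}/v1/completions \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d '{
      "model": "food-review",
      "prompt": "Write a short review of a neighborhood ramen shop.",
      "max_tokens": 100,
      "temperature": 0
    }'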
GKE Inference Gateway provides observability into the health, performance, and behavior of your inference workloads. This helps you identify and resolve issues, optimize resource utilization, and ensure the reliability of your applications. You can view these observability metrics in Cloud Monitoring through Metrics Explorer.
To configure observability for GKE Inference Gateway, see Configure observability.
Delete the deployed resources

To avoid incurring charges to your Google Cloud account for the resources that you created in this guide, run the following command:
gcloud container clusters delete CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION
Replace the following values:
CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster.
CLUSTER_NAME: the name of your cluster.