This page describes the built-in compute classes that are available in Google Kubernetes Engine (GKE) Autopilot clusters for workloads that have specific hardware requirements.
Overview of built-in compute classes in Autopilot clusters

By default, Pods in GKE Autopilot clusters run on a container-optimized compute platform. This platform is ideal for general-purpose workloads such as web servers and medium-intensity batch jobs. The container-optimized compute platform provides a reliable, scalable, cost-optimized hardware configuration that can handle the requirements of most workloads.
If you have workloads with unique hardware requirements, such as performing machine learning or AI tasks, running high-traffic real-time databases, or needing specific CPU platforms and architectures, Autopilot offers compute classes. These compute classes are a curated subset of the Compute Engine machine series, and offer flexibility beyond the default Autopilot compute platform. For example, the Scale-Out compute class uses VMs that turn off simultaneous multi-threading and are optimized for scaling out.
You can request nodes backed by specific compute classes based on the requirements of each of your workloads. As with the default general-purpose container-optimized compute platform, Autopilot manages the sizing and resource allocation of your requested compute classes based on your running Pods. You can request compute classes at the Pod level to optimize cost efficiency by choosing the best fit for each Pod's needs.
Custom compute classes for additional flexibility

If the built-in compute classes in Autopilot clusters don't meet your workload requirements, you can configure your own compute classes instead. You can deploy ComputeClass Kubernetes custom resources to your clusters with sets of node attributes that GKE uses to configure new nodes in the cluster. For more information about custom compute classes, see About custom compute classes.
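For illustration, a minimal ComputeClass manifest might look like the following sketch. The class name custom-class-example is a placeholder, and the fields shown assume the schema described in About custom compute classes:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  # Placeholder name; Pods reference this value in their node selectors.
  name: custom-class-example
spec:
  # Ordered fallback list: GKE tries N2 nodes first, then N2D.
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  # If no priority rule can be satisfied, scale up anyway using GKE defaults.
  whenUnsatisfiable: ScaleUpAnyway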
If your workloads are designed for specific CPU platforms or architectures, you can optionally select those platforms or architectures in your Pod specifications. For example, if you want your Pods to run on nodes that use the Arm architecture, you can choose arm64 within the Scale-Out compute class.
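For example, a Pod spec that targets Arm nodes in the Scale-Out class combines both selectors, as in this fragment (a complete workload example appears later on this page):

nodeSelector:
  cloud.google.com/compute-class: "Scale-Out"
  kubernetes.io/arch: "arm64"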
GKE Autopilot Pods are priced based on the nodes where the Pods are scheduled. For pricing information for general-purpose workloads and Spot Pods on specific compute classes, and for information on any committed use discounts, refer to Autopilot mode pricing.
Spot Pods on general-purpose or specialized compute classes don't qualify for committed use discounts.
When to use specific compute classes

The following table provides a technical overview of the predefined compute classes that Autopilot supports and example use cases for Pods running on each platform. If you don't request a compute class, Autopilot places your Pods on the general-purpose compute platform, which is designed to run most workloads optimally.
If none of these options meet your requirements, you can define and deploy your own custom compute classes that specify node properties for GKE to use when scaling up your cluster. For details, see About custom compute classes.
Note: The machine types that back each compute class might change over time.

General-purpose (workloads that don't require specific hardware)
- Autopilot uses the general-purpose compute platform if you don't explicitly request a compute class in your Pod specification. You can't explicitly select the general-purpose platform in your specification.
- Backed by the E2 machine series. You might sometimes see ek as the node machine series in your Autopilot nodes. EK machines are E2 machine types that are optimized for GKE Autopilot.

Accelerator
- Compatible GPU types are the following:
  - nvidia-b200: NVIDIA B200 (180GB)
  - nvidia-h200-141gb: NVIDIA H200 (141GB)
  - nvidia-h100-mega-80gb: NVIDIA H100 Mega (80GB)
  - nvidia-h100-80gb: NVIDIA H100 (80GB)
  - nvidia-a100-80gb: NVIDIA A100 (80GB)
  - nvidia-tesla-a100: NVIDIA A100 (40GB)
  - nvidia-l4: NVIDIA L4
  - nvidia-tesla-t4: NVIDIA T4
- For a request example, see the sketch after this list.

Balanced
- Backed by the N2 machine series (Intel) or the N2D machine series (AMD).
- For details, see Optimize Autopilot Pod performance by choosing a machine series.

Performance
- For a list of Compute Engine machine series available with the Performance compute class, see Supported machine series.

Scale-Out
- Backed by the Tau T2A machine series (Arm) or the Tau T2D machine series (x86).
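As an illustration, an Accelerator request pairs the compute class selector with a GPU selector and a GPU resource limit. The Pod name and container image below are placeholders; see Deploy GPU workloads in Autopilot for supported configurations:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-example  # placeholder name
spec:
  nodeSelector:
    cloud.google.com/compute-class: "Accelerator"
    # Request one of the compatible GPU types listed above.
    cloud.google.com/gke-accelerator: "nvidia-l4"
  containers:
  - name: gpu-container
    image: nvidia/cuda:12.2.0-base-ubuntu22.04  # placeholder image
    resources:
      limits:
        # Autopilot provisions nodes with the requested number of GPUs.
        nvidia.com/gpu: 1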
For detailed instructions, refer to Choose compute classes for Autopilot Pods.
To tell Autopilot to place your Pods on a specific compute class, specify the cloud.google.com/compute-class label in a nodeSelector or a node affinity rule, such as in the following examples:
nodeSelector

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "COMPUTE_CLASS"
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:
            cpu: "2000m"
            memory: "2Gi"
Node affinity

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      terminationGracePeriodSeconds: 25
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        resources:
          requests:
            cpu: "2000m"
            memory: "2Gi"
            ephemeral-storage: "1Gi"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/compute-class
                operator: In
                values:
                - "COMPUTE_CLASS"
Replace COMPUTE_CLASS with the name of the compute class based on your use case, such as Scale-Out. If you select Accelerator, you must also specify a compatible GPU. For instructions, see Deploy GPU workloads in Autopilot. If you select Performance, you can optionally select a Compute Engine machine series in the node selector. If you don't specify a machine series, GKE uses the C4 machine series by default, subject to regional availability. For instructions, see Run CPU-intensive workloads with optimal performance.
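For example, to pin the Performance class to a particular machine series, you can add the cloud.google.com/machine-family node selector alongside the compute class; c3 below is only an example, and availability varies by region:

nodeSelector:
  cloud.google.com/compute-class: "Performance"
  cloud.google.com/machine-family: "c3"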
When you deploy the workload, Autopilot does the following:

- Automatically provisions nodes that are backed by the requested compute class to run your Pods.
- Automatically adds node taints to the new nodes to prevent other Pods from scheduling on those nodes.
- Automatically adds tolerations for those taints to your deployed Pods, which lets GKE place the Pods on the new nodes.

For example, if you request the Scale-Out compute class for a Pod:

- Autopilot adds a node taint that is specific to Scale-Out for those nodes.
- Autopilot adds a toleration for that taint to the Scale-Out Pods.

Pods that don't request Scale-Out won't get the toleration. As a result, GKE won't schedule those Pods on the Scale-Out nodes.
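To observe this behavior on a running cluster, you can inspect the node labels and taints; the commands below are a sketch that assumes the nodes carry the cloud.google.com/compute-class label:

# List nodes with their compute class label.
kubectl get nodes -L cloud.google.com/compute-class

# Show the taints that Autopilot applied to a specific node.
kubectl describe node NODE_NAME | grep -A 3 Taints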
If you don't explicitly request a compute class in your workload specification, Autopilot schedules Pods on nodes that use the default general-purpose compute class. Most workloads can run with no issues on the general-purpose compute class.
How to request a CPU architecture

In some cases, your workloads might be built for a specific architecture, such as Arm. Some compute classes, such as Balanced or Scale-Out, support multiple CPU architectures. You can request a specific architecture alongside your compute class request by specifying a label in your node selector or node affinity rule, such as in the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-arm
spec:
replicas: 3
selector:
matchLabels:
app: nginx-arm
template:
metadata:
labels:
app: nginx-arm
spec:
nodeSelector:
cloud.google.com/compute-class: COMPUTE_CLASS
kubernetes.io/arch: ARCHITECTURE
containers:
- name: nginx-arm
image: nginx
resources:
requests:
cpu: 2000m
memory: 2Gi
Replace ARCHITECTURE with the CPU architecture that you want, such as arm64 or amd64.
If you don't explicitly request an architecture, Autopilot uses the default architecture of the specified compute class.
Arm architecture on Autopilot

Autopilot supports requests for nodes that use the Arm CPU architecture. Arm nodes are more cost-efficient than similar x86 nodes while delivering performance improvements. For instructions to request Arm nodes, refer to Deploy Autopilot workloads on Arm architecture.
Ensure that you're using the correct images in your deployments. If your Pods use Arm images and you don't request Arm nodes, Autopilot schedules the Pods on x86 nodes and the Pods will crash. Similarly, if you accidentally use x86 images but request Arm nodes for the Pods, the Pods will crash.
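One way to avoid architecture mismatches is to publish multi-architecture images so that the same tag works on both node types; for example, with Docker Buildx (the image name is a placeholder):

# Build and push a manifest that contains both amd64 and arm64 variants.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t REGISTRY/IMAGE:TAG --push .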
Autopilot validations for compute class workloads

Autopilot validates your workload manifests to ensure that the compute class and architecture requests in your node selector or node affinity rules are correctly formatted. If your workload manifest fails any of these validations, Autopilot rejects the workload.
Compute class regional availability

The following list describes the regions in which specific compute classes and CPU architectures are available:

- General-purpose: all regions.
- Balanced: all regions.
- Performance: all regions that contain a supported machine series.
- Scale-Out: all regions that contain a corresponding Compute Engine machine series.

To view specific machine series availability, use the filters in Available regions and zones.
If a compute class is available in a specific region, the hardware is available in at least two zones in that region.
Default, minimum, and maximum resource requests

When choosing a compute class for your Autopilot workloads, make sure that you specify resource requests that meet the minimum and maximum requests for that compute class. For information about the default requests, as well as the minimum and maximum requests for each compute class, refer to Resource requests and limits in GKE Autopilot.