About custom compute classes | Google Kubernetes Engine (GKE)

This page describes how you can use custom compute classes to control the properties of the nodes that Google Kubernetes Engine (GKE) provisions when autoscaling your cluster. This document is intended for platform administrators who want to declaratively define autoscaling profiles for nodes, so that specific workloads run on hardware that meets their requirements.

Compute classes overview

In GKE, a compute class is a profile that consists of a set of node attributes that GKE uses to provision the nodes that run your workloads during autoscaling events. Compute classes can target specific optimizations, like provisioning high-performance nodes or prioritizing cost-optimized configurations to reduce running costs. Custom compute classes let you define profiles that GKE then uses to autoscale nodes to closely meet the requirements of specific workloads.

Custom compute classes are available to use in GKE Autopilot mode and GKE Standard mode in version 1.30.3-gke.1451000 and later, and offer a declarative approach to defining node attributes and autoscaling priorities. Custom compute classes are available to configure and use in all eligible GKE clusters by default.

Benefits of custom compute classes

Custom compute classes offer the following benefits:

Use cases for custom compute classes

Consider using custom compute classes in scenarios like the following:

How custom compute classes work

Custom compute classes are Kubernetes custom resources that provision Google Cloud infrastructure. You define a ComputeClass object in the cluster, and then request that compute class in workloads or set that compute class as the default for a Kubernetes namespace. When a matching workload demands new infrastructure, GKE provisions new nodes in line with the priorities that you set in your compute class definition.

The attributes that you set in your compute classes define how GKE configures new nodes to run workloads. When you modify an existing compute class, all future nodes that GKE creates for that compute class use the modified configuration. GKE doesn't retroactively change the configuration of existing nodes to match your modifications.

To ensure that your custom compute classes are optimized for your fleet, consider the following guidelines:

View the complete custom resource definition

To view the latest custom resource definition (CRD) for the ComputeClass custom resource, including all fields and their relationships, refer to the ComputeClass reference documentation.

You can also view the CRD in your cluster by running the following command:

kubectl describe crd computeclasses.cloud.google.com
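
Because compute classes are Kubernetes custom resources, you can also list the ComputeClass objects that are already deployed in your cluster with a standard kubectl command, for example:

kubectl get computeclasses.cloud.google.com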
Plan a custom compute class

To effectively plan, deploy, and use a custom compute class in your cluster, complete the following steps; a skeleton manifest that maps these steps to ComputeClass fields follows the list:

  1. Choose your fallback compute priorities: Define a series of rules that govern the properties of the nodes that GKE creates for the compute class.
  2. Configure GKE Standard node pools and compute classes: For Standard mode clusters, perform required configuration steps to use the compute class with your node pools.
  3. Define scaling behavior when no priority rules apply: Optionally, tell GKE what to do if nodes that meet your priority rules can't be provisioned.
  4. Set autoscaling parameters for node consolidation: Tell GKE when to consolidate workloads and remove underutilized nodes.
  5. Configure active migration to higher priority nodes: Optionally, tell GKE to move workloads to more preferred nodes as hardware becomes available.
  6. Consume Compute Engine reservations: Optionally, tell GKE to consume existing Compute Engine zonal reservations when creating new nodes.
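
The following skeleton maps these steps onto the ComputeClass specification. It is a sketch rather than a complete manifest: the class name and field values are placeholders based on examples elsewhere on this page, and step 2 is performed with gcloud on the node pools themselves, so it doesn't appear in the manifest:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: example-class                 # placeholder name
spec:
  priorities:                         # step 1: fallback compute priorities
  - machineFamily: n2
    spot: true
  - machineFamily: n2
    reservations:                     # step 6: consume Compute Engine reservations
      affinity: AnyBestEffort
  whenUnsatisfiable: DoNotScaleUp     # step 3: behavior when no priority rules apply
  autoscalingPolicy:                  # step 4: node consolidation parameters
    consolidationDelayMinutes: 10
    consolidationThreshold: 70
  activeMigration:                    # step 5: active migration to higher priority nodes
    optimizeRulePriority: true
  nodePoolAutoCreation:
    enabled: true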
Choose your fallback compute priorities

The primary advantage of using a custom compute class is control over the fallback strategy when your preferred nodes are unavailable because of factors like resource exhaustion or quota limits.

You create a fallback strategy by defining a list of priority rules in your custom compute class. When a cluster needs to scale up, GKE prioritizes creating nodes that match the first priority rule. If GKE can't create those nodes, it falls back to the next priority rule, repeating this process until GKE successfully scales up the cluster or exhausts all the rules. If all the rules are exhausted, GKE creates nodes based on the default or specified behavior described in Define scaling behavior when no priority rules apply.

Different Compute Engine machine series support different technologies and features. Earlier generations of a machine series might not support the same storage types as newer generations. If you run stateful workloads that rely on persistent data, avoid using a compute class that spans multiple generations of a machine series. The workloads might not be able to access the persistent data if GKE places them on a machine type that doesn't support that storage type. For details, filter the machine series comparison table for specific storage types.

Priority rules

You define priority rules in the spec.priorities field of the ComputeClass custom resource. Each rule in the priorities field describes the properties of the nodes to provision. GKE processes the priorities field in order, which means that the first item in the field is the highest priority for node provisioning.

There are two types of priority rules:

  * Declarative rules, which describe the properties of the nodes that GKE provisions, such as the machine family or type, Spot VMs, accelerator options, storage options, reservations, and minimum resource requirements.
  * Node pool rules, which name existing, manually created node pools that GKE should scale up for the compute class. These rules are supported only in GKE Standard mode.

Declarative priority rules

With declarative priority rules, you can specify machine properties—like machine family or type, Spot VMs, accelerator options, storage options, reservations, and minimum resource requirements—for GKE to use when provisioning nodes. For the complete set of supported fields, see the ComputeClass CRD reference.

machineFamily configurations

The machineFamily field accepts a Compute Engine machine series like n2 or c3. If unspecified, the default is e2.

You can use other spec.priorities fields, such as Spot VMs, minimum vCPU and memory requirements, and storage options, alongside the machineFamily field to declaratively define your compute requirements.

The following example shows a priority rule that uses machineFamily:

priorities:
- machineFamily: n2
  spot: true
  minCores: 16
  minMemoryGb: 64
  storage:
    bootDiskKMSKey: projects/example/locations/us-central1/keyRings/example/cryptoKeys/key-1
    secondaryBootDisks:
    - diskImageName: pytorch-mnist
      project: k8s-staging-jobset
machineType configurations

The machineType field accepts a Compute Engine predefined machine type, like n2-standard-32.

You can specify other spec.priorities fields, such as Spot VMs and storage options, alongside the machineType field to declaratively define your compute requirements.

The following example shows a priority rule that uses machineType to provision n2-standard-32 machine types:

priorities:
- machineType: n2-standard-32
  spot: true
  storage:
    bootDiskType: pd-balanced
    bootDiskSize: 250
    localSSDCount: 2
    bootDiskKMSKey: projects/example/locations/us-central1/keyRings/example/cryptoKeys/key-1
GPU configuration

To select GPUs in your priority rules, specify the GPU in the gpu field of a priority rule. The type and count fields are required, and the driverVersion field is optional.

You can also specify other spec.priorities fields such as Spot VMs, storage options, and reservations in combination with the gpu fields.

The following example shows a rule for GPUs:

priorities:
- gpu:
    type: nvidia-l4
    count: 1
  storage:
    secondaryBootDisks:
    - diskImageName: big-llm
      project: k8s-llm
  spot: true
TPU configuration

Requires GKE version 1.31.2-gke.1518000 or later

To select TPUs in your priority rules, specify the TPU in the tpu field of a priority rule. The type, count, and topology fields are required.

You can also specify other spec.priorities fields, such as Spot VMs and reservations, alongside the tpu field in your priority rule.

The following example shows a rule for TPUs:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: tpu-class
spec:
  priorities:
  - tpu:
      type: tpu-v5p-slice
      count: 4
      topology: 4x4x4
    reservations:
      specific:
      - name: tpu-reservation
        project: reservation-project
      affinity: Specific
  - spot: true
    tpu:
      type: tpu-v5p-slice
      count: 4
      topology: 4x4x4
  nodePoolAutoCreation:
    enabled: true

This example defines the following fallback behavior:

  1. GKE attempts to provision a 16-node multi-host TPU v5p slice by consuming a shared Compute Engine reservation named tpu-reservation from the reservation-project project.
  2. If the reservation has no available TPUs, GKE attempts to provision a 16-node multi-host TPU v5p slice running on Spot VMs.
  3. If none of the preceding rules can be satisfied, GKE follows the logic in the Define scaling behavior when no priority rules apply section.

After you deploy a TPU custom compute class to your cluster, select that compute class in your workload by using the cloud.google.com/compute-class node selector.
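
The following Job manifest is a minimal sketch: the Job name, container name, and image are hypothetical, and a real multi-host slice typically also needs parallel Job or JobSet configuration beyond what is shown here. It selects the tpu-class compute class from the preceding example and requests TPU chips:

apiVersion: batch/v1
kind: Job
metadata:
  name: tpu-job                                  # hypothetical name
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/compute-class: tpu-class
      restartPolicy: Never
      containers:
      - name: tpu-container                      # hypothetical container
        image: us-docker.pkg.dev/example/repo/tpu-image:latest   # hypothetical image
        resources:
          limits:
            google.com/tpu: 4                    # TPU chips per Pod, matching the per-node count in the compute class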

Additionally, for TPU workloads you can do the following:

Accelerators and machine shape specifications

Declarative accelerator configurations don't require the machineType or machineFamily field to be explicitly specified unless you use them in combination with reservations.

Node pools priority rules

The nodepools field takes a list of existing node pools on which GKE attempts to create pending Pods. GKE doesn't process the values in this field in order. You can't use other spec.priorities fields alongside the nodepools field in the same priority rule item because rules with the nodepools field are not declarative in nature. This field is supported only on GKE Standard mode. For usage details, see Target specific node pools in a compute class definition.

How GKE creates nodes using priority rules

When you deploy a workload that requests a compute class and a new node is needed, GKE processes the list of rules in the priorities field of the ComputeClass specification in order.

For example, consider the following specification:

spec:
  ...
  priorities:
  - machineFamily: n2
    spot: true
    minCores: 64
  - machineFamily: n2
    spot: true
  - machineFamily: n2
    spot: false

When you deploy a workload that requests a compute class with these priority rules, GKE matches nodes as follows:

  1. GKE places Pods on any existing nodes that are associated with this compute class.
  2. If existing nodes can't accommodate the Pods, GKE provisions new nodes that use the N2 machine series, are Spot VMs, and have at least 64 vCPU.
  3. If N2 Spot VMs with at least 64 vCPU aren't available in the region, GKE provisions new nodes that use N2 Spot VMs that can fit the Pods, regardless of the number of cores.
  4. If no N2 Spot VMs are available in the region, GKE provisions new on-demand N2 VMs.
  5. If none of the preceding rules can be satisfied, GKE follows the logic in the Define scaling behavior when no priority rules apply section.
Default values for priority rules

You can set default values for some of the fields in the priority rules of your ComputeClass specification. These default values apply if the corresponding fields in a specific rule are omitted. You can set these default values by using the priorityDefaults field in your ComputeClass specification.

The priorityDefaults field has the following limitations:

For details about the types of default values that you can set, see the priorityDefaults section in the ComputeClass CustomResourceDefinition.
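
As an illustrative sketch only, and assuming that priorityDefaults accepts the same declarative fields as individual priority rules (verify the exact supported fields and their limitations in the CRD reference), a class might set a shared storage default once instead of repeating it in every rule:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: defaults-example               # hypothetical name
spec:
  priorityDefaults:
    storage:                           # assumed to be a supported default; confirm in the CRD reference
      bootDiskSize: 200
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  nodePoolAutoCreation:
    enabled: true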

GKE Standard node pools and compute classes

If you use GKE Standard mode, you might have to perform manual configuration to ensure that your compute class Pods schedule as expected.

Configure manually-created node pools for compute class use

If your GKE Standard clusters have node pools that you manually created without node auto-provisioning, you must configure those node pools to associate them with specific compute classes. GKE only schedules Pods that request a specific compute class on nodes in node pools that you associate with that compute class. This requirement doesn't apply to a compute class that you configure as the cluster-level default.

GKE performs this configuration automatically in Autopilot mode and in GKE Standard node pools that node auto-provisioning creates.

To associate a manually created node pool with a compute class, you add node labels and node taints to the node pool during creation or during an update by specifying the --node-labels flag and the --node-taints flag, as follows:

  * Node label: cloud.google.com/compute-class=COMPUTE_CLASS
  * Node taint: cloud.google.com/compute-class=COMPUTE_CLASS:NoSchedule

In these attributes, COMPUTE_CLASS is the name of your custom compute class.

For example, the following commands together update an existing node pool and associate the node pool with the dev-class compute class:

gcloud container node-pools update dev-pool \
    --cluster=example-cluster \
    --node-labels="cloud.google.com/compute-class=dev-class"

gcloud container node-pools update dev-pool \
    --cluster=example-cluster \
    --node-taints="cloud.google.com/compute-class=dev-class:NoSchedule"

You can associate each node pool in your cluster with one custom compute class. Pods that GKE schedules on these manually-created node pools only trigger node creation inside those node pools during autoscaling events.

Node auto-provisioning and compute classes

You can use node auto-provisioning with a custom compute class to let GKE automatically create and delete node pools based on your priority rules.

To use node auto-provisioning with a compute class, you must do the following:

  1. Ensure that you have node auto-provisioning enabled in your cluster.
  2. Add the nodePoolAutoCreation field with the enabled: true value to your ComputeClass specification.

GKE can then place Pods that use compute classes that configure node auto-provisioning on new node pools. GKE decides whether to scale up an existing node pool or create a new node pool based on factors like the size of the cluster and the Pod requirements. Pods with compute classes that don't configure node auto-provisioning continue to scale up only existing node pools.

You can use compute classes that interact with node auto-provisioning alongside compute classes that interact with manually-created node pools in the same cluster.

Consider the following interactions with node auto-provisioning:

Consider the following example for a cluster that has both manually-created node pools and node auto-provisioning:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - nodepools: [manually-created-pool]
  - machineFamily: n2
  - machineFamily: n2d
  nodePoolAutoCreation:
    enabled: true

In this example, GKE attempts to do the following:

  1. Create new nodes in the manually-created-pool node pool.
  2. Provision N2 nodes, either in existing N2 node pools or by creating a new node pool.
  3. If GKE can't create N2 nodes, it attempts to scale up existing N2D node pools or create new N2D node pools.
Target specific node pools in a compute class definition

In GKE Standard clusters that use cluster autoscaling, the priorities.nodepools field lets you specify a list of manually created node pools on which GKE attempts to schedule Pods, in no specific order. This field only supports a list of node pools; you can't specify additional machine properties, like the machine series, in the same priority rule. When you deploy a workload that requests a compute class that names node pools, GKE attempts to schedule the pending Pods on those node pools. GKE might create new nodes in those node pools to place the Pods.

The node pools that you specify in the priorities.nodepools field must be associated with that compute class by using node labels and node taints, as described in the Configure manually-created node pools for compute class use section.

The list of node pools that you specify in the nodepools field has no priority. To configure a fallback order for named node pools, you must specify multiple separate priorities.nodepools items. For example, consider the following specification:

spec:
  ...
  priorities:
  - nodepools: [pool1, pool2]
  - nodepools: [pool3]

In this example, GKE first attempts to place pending Pods that request this compute class on existing nodes in node pools that are labeled with the compute class. If existing nodes aren't available, GKE tries to provision new nodes in pool1 or pool2. If GKE can't provision new nodes in those node pools, GKE attempts to provision new nodes in pool3.

Define scaling behavior when no priority rules apply

The ComputeClass custom resource lets you specify what GKE should do if no nodes can meet any of the priority rules. The whenUnsatisfiable field in the specification supports the following values:

  * ScaleUpAnyway: GKE scales the cluster up anyway and creates nodes that might not match any of the priority rules, so that the pending Pods can still schedule.
  * DoNotScaleUp: GKE leaves the pending Pods in the Pending state until nodes that meet one of the priority rules can be provisioned.

Best practice: If you omit this field, the default behavior depends on the GKE version of your cluster. To avoid unexpected scaling behavior after a version upgrade, explicitly specify this field in your compute class manifest.
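
For example, a minimal class that pins the behavior explicitly might look like the following sketch; the class name pinned-fallback is hypothetical, and the DoNotScaleUp value is the one used in the reservation examples later on this page:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: pinned-fallback
spec:
  priorities:
  - machineFamily: n2
  whenUnsatisfiable: DoNotScaleUp
  nodePoolAutoCreation:
    enabled: true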

Set autoscaling parameters for node consolidation

By default, GKE removes nodes that are underutilized by running workloads, consolidating those workloads on other nodes that have capacity. For all compute classes, this is the default behavior because all clusters that use compute classes must use the cluster autoscaler or are Autopilot clusters. During a node consolidation, GKE drains an underutilized node, recreates the workloads on another node, and then deletes the drained node.

The timing and criteria for node removal depend on the autoscaling profile. You can fine-tune the resource underutilization thresholds that trigger node removal and workload consolidation by using the autoscalingPolicy section in your custom compute class definition. You can fine-tune the following parameters:

  * consolidationDelayMinutes: The number of minutes that a node must be underutilized before GKE removes it.
  * consolidationThreshold: The CPU and memory utilization threshold, as a percentage, below which a node becomes a candidate for consolidation.

Consider the following example:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  autoscalingPolicy:
    consolidationDelayMinutes: 5
    consolidationThreshold: 70

In this configuration, GKE removes unused nodes after five minutes, and nodes become candidates for consolidation only if both their CPU utilization and their memory utilization are less than 70%.

Configure active migration to higher priority nodes

Active migration is an optional autoscaling feature in custom compute classes that automatically replaces existing nodes that are lower in a compute class fallback priority list with new nodes that are higher in that priority list. This ensures that all your running Pods eventually run on your most preferred nodes for that compute class, even if GKE originally had to run those Pods on less preferred nodes.

When an active migration occurs, GKE creates new nodes based on the compute class priority rules, and then drains and deletes the obsolete lower priority nodes. The migration happens gradually to minimize workload disruption. Active migration has the following considerations:

Consider the following example compute class specification, which prioritizes N2 nodes over N2D nodes:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-class
spec:
  priorities:
  - machineFamily: n2
  - machineFamily: n2d
  activeMigration:
    optimizeRulePriority: true

If N2 nodes were unavailable when you deployed a Pod with this compute class, GKE would have used N2D nodes as a fallback option. If N2 nodes later become available to provision, for example because your quota increases or because N2 VMs become available in your location, GKE creates a new N2 node and gradually migrates the Pod from the existing N2D node to the new N2 node. GKE then deletes the obsolete N2D node.

Consume Compute Engine reservations

Available in GKE version 1.31.1-gke.2105000 and later

If you use Compute Engine capacity reservations to get a higher level of assurance of hardware availability in specific Google Cloud zones, you can configure each fallback priority in your custom compute class so that GKE consumes reservations when creating new nodes.

Consuming reservations in custom compute classes has the following requirements:

Consider the following example compute class specification, which prioritizes a specific shared reservation for provisioning a3-highgpu-1g instances. If the prioritized reservation doesn't have available capacity, GKE falls back to any matching reservation in the specification:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: accelerator-reservations
spec:
  nodePoolAutoCreation:
    enabled: true
  priorities:
  - machineType: a3-highgpu-1g
    storage:
      localSSDCount: 2
    gpu:
      type: nvidia-h100-80gb
      count: 1
    reservations:
      specific:
      - name: a3-shared-reservation
        project: reservation-project
      affinity: Specific
  - machineType: a3-highgpu-1g
    storage:
      localSSDCount: 2
    gpu:
      type: nvidia-h100-80gb
      count: 1
    reservations:
      affinity: AnyBestEffort
  whenUnsatisfiable: DoNotScaleUp

If you deploy a Pod that uses the accelerator-reservations compute class, GKE first attempts to use the a3-shared-reservation reservation when creating new a3-highgpu-1g instances to run the Pod. If this specific reservation doesn't have available capacity, GKE tries to scale up a3-highgpu-1g instances by using any matching reservation. If GKE can't satisfy either priority rule, it doesn't scale up, because the whenUnsatisfiable field is set to DoNotScaleUp.

In this example, both priority rules with reservation references explicitly require the localSSDCount: field because the a3-highgpu-1g machine shape includes local SSDs.

The following example shows a shared specific reservation, which falls back to Spot VMs, and then finally to on-demand VMs:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: shared-specific-reservations
spec:
  nodePoolAutoCreation:
    enabled: true
  priorities:
  - machineFamily: n2
    reservations:
      specific:
      - name: n2-shared-reservation
        project: reservation-project
      affinity: Specific
  - machineFamily: n2
    spot: true
  - machineFamily: n2
  whenUnsatisfiable: DoNotScaleUp

You can consume the following types of reservations:

TPU reservations require Specific affinity. reservations.affinity: AnyBestEffort is not supported.

If GKE can't find available capacity in a reservation, the resulting behavior depends on the type of reservation being selected in the compute class priority rule, as follows:

If GKE can't meet the requirements of any of the priority rules for the compute class, GKE follows the behavior described in Define scaling behavior when no priority rules apply.

Consume specific reservation blocks

Starting with GKE version 1.31.4-gke.1072000, you can target a specific reservation block within a hardware-backed reservation. This feature is available for the A3 Ultra and A4 machine types.

To consume a specific reservation block, configure your ComputeClass resource as shown in this example:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: specific-reservations
spec:
  nodePoolAutoCreation:
    enabled: true
  priorities:
  - machineFamily: a3
    gpu:
      type: nvidia-h200-141gb
      count: 8
    reservations:
      specific:
      - name: a3ultra-specific-reservation
        reservationBlock:
          name: RESERVATION_BLOCK_NAME
      affinity: Specific

Replace RESERVATION_BLOCK_NAME with the target reservation block name.

Starting with GKE version 1.33.1-gke.1788000, you can target a specific reservation sub-block within a reservation block. This feature is available for the A4X machine type.

To consume a specific reservation sub-block, configure your ComputeClass resource as shown in the example in Consume specific reservation sub-blocks.

When you use this feature, be aware of these considerations:

Customize the node system configuration

You can customize certain parameters in the kubelet and the Linux kernel by using the nodeSystemConfig field in your ComputeClass specification. You can specify this field in any priority rule that defines a Compute Engine machine series or machine type. You can also set default global values for any node system configuration fields that are omitted in priority rules by adding the nodeSystemConfig field to the priorityDefaults field in your compute class.

This feature is available in GKE version 1.32.1-gke.1729000 and later.
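
The following manifest is an illustrative sketch only: the kubeletConfig and linuxConfig subfields are assumptions that mirror the node system configuration file format, so verify the exact schema against the ComputeClass CRD reference before using them:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: tuned-class                    # hypothetical name
spec:
  priorities:
  - machineFamily: n2
    nodeSystemConfig:                  # assumed structure; confirm field names in the CRD reference
      kubeletConfig:
        podPidsLimit: 4096
      linuxConfig:
        sysctl:
          net.core.somaxconn: "4096"
  nodePoolAutoCreation:
    enabled: true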

For more information, see the following pages:

Default compute classes for clusters and namespaces

You can configure GKE to apply a compute class by default to Pods that don't select a specific compute class. You can define a default compute class for specific namespaces or for an entire cluster. For more information about how to configure your clusters or namespaces with a default class, see Apply compute classes to Pods by default.

Group node pools

Starting with GKE version 1.32.2-gke.1359000, you can group multiple node pools into a single logical unit called a collection by using the nodePoolGroup field in your ComputeClass specification. This grouping lets you apply shared configurations across many node pools.

TPU multi-host collection

You can group your TPU multi-host deployment to set a Service Level Objective (SLO) across all node pools within the collection. To group node pools, specify the name of the group in the nodePoolGroup field. All node pools provisioned using this compute class belong to the same group.

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: tpu-multi-host-collection
spec:
  nodePoolGroup:
    name: my-tpu-collection
  ...

For more information, see the following:

Node pool configuration

The nodePoolConfig field in your ComputeClass specification lets you apply configuration to every node in the node pools that GKE creates by using that compute class.

Specify image type

You can specify the base operating system for the nodes in the node pool by using the imageType field. This field sets the node image that runs on the nodes in node pools created with the compute class. If you omit this field, the default value is cos_containerd. The following example shows how to specify imageType in your ComputeClass:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: my-node-pool-config
spec:
  nodePoolConfig:
    imageType: cos_containerd

For more information, see Node images.

Service account

The serviceAccount field specifies the Google Cloud service account used by the nodes within node pools that are managed by the compute class. The following example shows how to specify the serviceAccount in your ComputeClass:

spec:
  nodePoolConfig:
    serviceAccount: my-service-account@my-project.iam.gserviceaccount.com

For more information, see About service accounts in GKE.

Define workload type for TPU SLO

Starting with GKE version 1.32.2-gke.1359000, you can define the Service Level Objective (SLO) for your TPU workloads by using the workloadType field within nodePoolConfig. The value in this field tells GKE the intended use for the TPU resources. The workloadType field supports the following values:

  * HIGH_AVAILABILITY: For availability-focused workloads, such as inference serving, that prioritize limiting disruption.
  * HIGH_THROUGHPUT: For batch or training workloads that prioritize overall throughput across the collection.

The following example defines a compute class for a multi-host TPU collection optimized for high-availability inference workloads.

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: multi-host-inference
spec:
  nodePoolGroup:
    name: my-inference-collection
  nodePoolConfig:
    workloadType: HIGH_AVAILABILITY
  nodePoolAutoCreation:
    enabled: true
  priorities:
  - tpu:
      type: tpu-v6e-slice
      topology: 2x4

For more information, see the following pages:

Request compute classes in workloads

To use a custom compute class, your Pod must explicitly request that compute class by using a nodeSelector in the Pod specification. You can optionally set a compute class as the default for a specific Kubernetes namespace. Pods in that namespace use that compute class unless the Pods request a different compute class.

For example, the following manifest requests the cost-optimized compute class:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: custom-workload
  template:
    metadata:
      labels:
        app: custom-workload
    spec:
      nodeSelector:
        cloud.google.com/compute-class: cost-optimized
      containers:
      - name: test
        image: gcr.io/google_containers/pause
        resources:
          requests:
            cpu: 1.5
            memory: "4Gi"
Node selectors for system node labels

GKE adds system labels to nodes to identify nodes by criteria like the machine type, attached hardware accelerators, or the boot disk type. These system labels have one of the following prefixes in the label key:

In GKE version 1.32.3-gke.1499000 and later, you can deploy workloads that use a node selector to select system labels and a compute class at the same time. If you select system labels in Pods that select compute classes, verify that those Pods schedule as expected. A conflict between the configuration of a compute class and the node selectors in a Pod might result in issues like the following:

GKE also rejects any Pods that select system labels that have a corresponding field in the ComputeClass specification. When you use compute classes, update your workloads to remove the following labels from node selectors and configure the corresponding field in the compute classes that you create:

What's next
