About GKE cluster autoscaling | Google Kubernetes Engine (GKE)

This page explains how Google Kubernetes Engine (GKE) automatically resizes your Standard cluster's node pools based on the demands of your workloads. When demand is high, the cluster autoscaler adds nodes to the node pool. To learn how to configure the cluster autoscaler, see Autoscaling a cluster.

This page is for Admins, Architects and Operators who plan capacity and infrastructure needs, and optimize systems architecture and resources to achieve the lowest total cost of ownership for their company or business unit. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.

With Autopilot clusters, you don't need to worry about provisioning nodes or managing node pools because node pools are automatically provisioned through node auto-provisioning, and are automatically scaled to meet the requirements of your workloads.

Before reading this page, ensure that you're familiar with basic Kubernetes concepts, and how resource requests and limits work.

Best practice:

Plan and design your cluster configuration with your organization's Admins, Architects, Developers, or any other team that is responsible for implementing and maintaining your application.

Why use cluster autoscaler

GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads. When demand is low, the cluster autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs. You don't need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic.

If resources are deleted or moved when autoscaling your cluster, your workloads might experience transient disruption. For example, if your workload consists of a controller with a single replica, that replica's Pod might be rescheduled onto a different node if its current node is deleted. Before enabling cluster autoscaler, design your workloads to tolerate potential disruption or ensure that critical Pods are not interrupted.

Best practice:

To increase your workload's tolerance to interruption, deploy your workload using a controller with multiple replicas, such as a Deployment.
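
For example, the following Deployment manifest is a minimal sketch (the name, image, and replica count are illustrative) that runs three replicas, so the workload stays available if one Pod is rescheduled during a scale-down:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server          # illustrative name
spec:
  replicas: 3               # multiple replicas tolerate the loss of a single Pod
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: nginx:1.25   # illustrative image
        ports:
        - containerPort: 80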

You can improve cluster autoscaler performance with Image streaming, which remotely streams required image data from eligible container images while caching the image locally, so that workloads on new nodes start faster.

How cluster autoscaler works

Cluster autoscaler works per node pool. When you configure a node pool with cluster autoscaler, you specify a minimum and maximum size for the node pool.

Cluster autoscaler increases or decreases the size of the node pool automatically by adding or removing virtual machine (VM) instances in the underlying Compute Engine Managed Instance Group (MIG) for the node pool. Cluster autoscaler makes these scaling decisions based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:

- If Pods are unschedulable because there are not enough nodes in the node pool, cluster autoscaler adds nodes, up to the maximum size of the node pool.

- If nodes are underutilized, and all Pods could be scheduled even with fewer nodes in the node pool, cluster autoscaler removes nodes, down to the minimum size of the node pool.

The frequency at which cluster autoscaler inspects a cluster for unschedulable Pods largely depends on the cluster's size. In small clusters, the inspection might happen every few seconds. It is not possible to define an exact timeframe required for this inspection.

If your nodes are experiencing shortages because your Pods requested too few resources, or kept defaults that are insufficient, the cluster autoscaler does not correct the situation. You can help ensure that cluster autoscaler works as accurately as possible by making explicit resource requests for all of your workloads.
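
For example, the following container spec fragment is a minimal sketch (the values are illustrative) that sets explicit requests and limits, which the cluster autoscaler uses when deciding whether to add nodes:

containers:
- name: app                 # illustrative container name
  image: nginx:1.25         # illustrative image
  resources:
    requests:
      cpu: 250m             # the autoscaler sizes the node pool from these requests
      memory: 512Mi
    limits:
      cpu: 500m
      memory: 1Gi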

Don't enable Compute Engine autoscaling for the managed instance groups that back your cluster nodes. GKE's cluster autoscaler is separate from Compute Engine autoscaling, and running both can cause node pools to fail to scale up or scale down because the two autoscalers conflict with each other.

Operating criteria

When resizing a node pool, the cluster autoscaler assumes that all replicated Pods can be restarted on some other node, which might cause a brief disruption. It also assumes that users or administrators aren't manually managing nodes, because it can override any manual node management operations that you perform.

Best practice:

Don't enable the cluster autoscaler if your applications are not disruption-tolerant.
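
If your applications tolerate some disruption but you want to bound it, a PodDisruptionBudget limits how many replicas can be evicted at once during a scale-down. The following manifest is a minimal sketch (the name and label selector are illustrative):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-server-pdb      # illustrative name
spec:
  minAvailable: 2           # keep at least two replicas running during evictions
  selector:
    matchLabels:
      app: web-server       # matches the workload's Pod labels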

Balancing across zones

If your node pool contains multiple managed instance groups with the same instance type, the cluster autoscaler attempts to keep these managed instance group sizes balanced when scaling up. This helps prevent an uneven distribution of nodes among managed instance groups in multiple zones of a node pool. GKE does not consider the autoscaling policy when scaling down.

Cluster autoscaler only balances across zones during a scale-up event. Cluster autoscaler scales down underutilized nodes regardless of the relative sizes of underlying managed instance groups in a node pool, which can cause the nodes to be distributed unevenly across zones.

Location policy

Starting in GKE version 1.24.1-gke.800, you can change the location policy of the cluster autoscaler. You can control the cluster autoscaler distribution policy by specifying the location_policy flag with any of the following values:

- BALANCED: The cluster autoscaler considers Pod requirements and the availability of resources in each zone, and tries to keep node counts balanced across zones when scaling up.

- ANY: The cluster autoscaler prioritizes the use of unused reservations and can add nodes in whichever zones have available capacity.

Best practice:

Use the ANY policy if you are using Spot VMs or if you want to use VM reservations that are not equal between zones.
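
For example, a command like the following sketch (the node pool, cluster, and size limits are placeholders) creates an autoscaling Spot VM node pool that uses the ANY location policy:

gcloud container node-pools create spot-pool \
    --cluster=example-cluster \
    --location=us-central1-a \
    --spot \
    --enable-autoscaling --min-nodes=0 --max-nodes=4 \
    --location-policy=ANY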

Reservations

Starting in GKE version 1.27, the cluster autoscaler always considers reservations when making scale-up decisions. Node pools with matching unused reservations are prioritized when choosing the node pool to scale up, even when the node pool is not the most efficient one. Additionally, unused reservations are always prioritized when balancing multi-zonal scale-ups.

However, the cluster autoscaler checks for reservations only in its own project. As a result, if a less expensive node option is available within the cluster's own project, the autoscaler might select that option instead of the shared reservation. If you need to share reservations across projects, consider using custom compute classes, which let you configure the priority that the cluster autoscaler uses to scale nodes, including shared reservations.

Default values

For Spot VM node pools, the default cluster autoscaler distribution policy is ANY. With this policy, Spot VMs have a lower risk of being preempted.

For non-preemptible node pools, the default cluster autoscaler distribution policy is BALANCED.

Minimum and maximum node pool size

When creating a new node pool, you can specify the minimum and maximum size for each node pool in your cluster, and the cluster autoscaler makes rescaling decisions within these scaling constraints. To update the minimum size, manually resize the cluster to a size within the new constraints after specifying the new minimum value. The cluster autoscaler then makes rescaling decisions based on the new constraints.
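
For example, a command like the following sketch (the cluster, node pool, and limits are placeholders) updates the autoscaling limits on an existing node pool; the cluster autoscaler then resizes the node pool within the new constraints:

gcloud container clusters update example-cluster \
    --location=us-central1-a \
    --node-pool=default-pool \
    --enable-autoscaling --min-nodes=2 --max-nodes=6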

Cluster autoscaler behavior depends on the current node pool size relative to the limits you specified:

- Lower than the minimum you specified: Cluster autoscaler scales up to provision pending Pods, and scaling down is disabled. The node pool does not scale down below the value you specified.

- Within the minimum and maximum size you specified: Cluster autoscaler scales up or down according to demand. The node pool stays within the size limits you specified.

- Greater than the maximum you specified: Cluster autoscaler scales down only the nodes that can be safely removed, and scaling up is disabled. The node pool does not scale above the value you specified.

On Standard clusters, the cluster autoscaler never automatically scales down a cluster to zero nodes. One or more nodes must always be available in the cluster to run system Pods. Additionally, if the current number of nodes is zero due to manual removal of nodes, cluster autoscaler and node auto-provisioning can scale up from zero node clusters.

To learn more about autoscaler decisions, see cluster autoscaler limitations.

Autoscaling limits

You can set the minimum and maximum number of nodes for the cluster autoscaler to use when scaling a node pool. Use the --min-nodes and --max-nodes flags to set the minimum and maximum number of nodes per zone.

Starting in GKE version 1.24, you can use the --total-min-nodes and --total-max-nodes flags for new clusters. These flags set the minimum and maximum total number of nodes in the node pool across all zones.

Min and max nodes example

The following command creates an autoscaling multi-zonal cluster with six nodes across three zones initially, with a minimum of one node per zone and a maximum of four nodes per zone:

gcloud container clusters create example-cluster \
    --num-nodes=2 \
    --location=us-central1-a \
    --node-locations=us-central1-a,us-central1-b,us-central1-f \
    --enable-autoscaling --min-nodes=1 --max-nodes=4

In this example, the total size of the cluster can be between three and twelve nodes, spread across the three zones. If one of the zones fails, the total size of the cluster can be between two and eight nodes.

Total nodes example

The following command, available in GKE version 1.24 or later, creates an autoscaling multi-zonal cluster with six nodes across three zones initially, with a minimum of three nodes and a maximum of twelve nodes in the node pool across all zones:

gcloud container clusters create example-cluster \
    --num-nodes=2 \
    --location=us-central1-a \
    --node-locations=us-central1-a,us-central1-b,us-central1-f \
    --enable-autoscaling --total-min-nodes=3 --total-max-nodes=12

In this example, the total size of the cluster can be between three and twelve nodes, regardless of spreading between zones.

Autoscaling profiles

The decision of when to remove a node is a trade-off between optimizing for utilization or the availability of resources. Removing underutilized nodes improves cluster utilization, but new workloads might have to wait for resources to be provisioned again before they can run.

You can specify which autoscaling profile to use when making such decisions. The available profiles are:

- balanced: The default profile for Standard clusters.

- optimize-utilization: Prioritizes optimizing utilization over keeping spare resources in the cluster. When you select this profile, the cluster autoscaler scales down the cluster more aggressively, removing more nodes and removing them faster.

The optimize-utilization autoscaling profile helps the cluster autoscaler to identify and remove underutilized nodes. To achieve this optimization, GKE sets the scheduler name in the Pod spec to gke.io/optimize-utilization-scheduler. Pods that specify a custom scheduler are not affected.

The following command enables the optimize-utilization autoscaling profile in an existing cluster:

gcloud container clusters update CLUSTER_NAME \
    --autoscaling-profile optimize-utilization

Considering Pod scheduling and disruption

When scaling down, the cluster autoscaler respects scheduling and eviction rules set on Pods. These restrictions can prevent a node from being deleted by the autoscaler. A node's deletion could be prevented if it contains a Pod with any of these conditions:

- The Pod's affinity or anti-affinity rules prevent rescheduling onto another node.

- The Pod is not managed by a controller such as a Deployment, StatefulSet, Job, or ReplicaSet.

- The Pod has the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.

- Evicting the Pod would violate its PodDisruptionBudget.
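
For example, setting the following annotation on a Pod (the Pod name and image are illustrative) tells the cluster autoscaler that the Pod is not safe to evict, which prevents the autoscaler from removing its node:

apiVersion: v1
kind: Pod
metadata:
  name: critical-batch-job      # illustrative name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"  # blocks autoscaler eviction
spec:
  containers:
  - name: worker
    image: busybox:1.36         # illustrative image
    command: ["sleep", "3600"]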

For more information about cluster autoscaler and preventing disruptions, see the Cluster autoscaler FAQ.

Autoscaling TPUs in GKE

GKE supports Tensor Processing Units (TPUs) to accelerate machine learning workloads. Both single-host and multi-host TPU slice node pools support autoscaling and auto-provisioning.

With the --enable-autoprovisioning flag on a GKE cluster, GKE creates or deletes single-host or multi-host TPU slice node pools with a TPU version and topology that meets the requirements of pending workloads.
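
For example, a command like the following sketch (the resource limits are illustrative, and accelerator or TPU limits aren't shown) enables node auto-provisioning on an existing cluster so that GKE can create node pools for pending workloads:

gcloud container clusters update example-cluster \
    --location=us-central1-a \
    --enable-autoprovisioning \
    --min-cpu=1 --max-cpu=64 \
    --min-memory=1 --max-memory=256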

When you use --enable-autoscaling, GKE scales the node pool based on its type, as follows:

- Single-host TPU slice node pool: GKE adds or removes individual TPU slice nodes within the minimum and maximum size that you set for the node pool.

- Multi-host TPU slice node pool: GKE scales the node pool atomically, from zero up to the number of nodes required by the TPU topology, and back down to zero as a unit.

Spot VMs and cluster autoscaler

Cluster autoscaler prefers expanding the least expensive node pools, so when your workloads and resource availability allow it, cluster autoscaler adds Spot VMs when scaling up.

However, even though cluster autoscaler prefers adding Spot VMs, this preference doesn't guarantee that the majority of your Pods will run on these types of VMs. Spot VMs can be preempted. Because of this preemption, Pods on Spot VMs are more likely to be evicted. When they're evicted, they only have 15 seconds to terminate.

For example, imagine a scenario where you have 10 Pods and a mixture of on-demand and Spot VMs: when Compute Engine preempts the Spot VMs, GKE reschedules the evicted Pods onto the on-demand nodes. When Spot capacity becomes available again, the Pods don't automatically move back, so over time most of your Pods end up running on on-demand VMs even though Spot capacity exists.

To prioritize Spot VMs, and avoid the preceding scenario, we recommend that you use custom compute classes. Custom compute classes let you create priority rules that favor Spot VMs during scale-up by giving them higher priority than on-demand nodes. To further maximize the likelihood of your Pods running on nodes backed by Spot VMs, configure active migration.

The following example shows you one way to use custom compute classes to prioritize Spot VMs:

apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: prefer-l4-spot
spec:
  priorities:
  - machineType: g2-standard-24
    spot: true
    gpu:
      type: nvidia-l4
      count: 2
  - machineType: g2-standard-24
    spot: false
    gpu:
      type: nvidia-l4
      count: 2
  nodePoolAutoCreation:
    enabled: true
  activeMigration:
    optimizeRulePriority: true

In the preceding example, the priority rule declares a preference for creating nodes with the g2-standard-24 machine type and Spot VMs. If Spot VMs aren't available, then GKE uses on-demand VMs as a fallback option. This compute class also enables activeMigration, enabling cluster autoscaler to migrate workloads to Spot VMs when the capacity becomes available.
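
Workloads opt in to a custom compute class by selecting it with a nodeSelector on the cloud.google.com/compute-class label. The following manifest is a minimal sketch (the Deployment name and image are illustrative) that schedules Pods onto nodes created for the prefer-l4-spot class defined in the preceding example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: l4-inference          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: l4-inference
  template:
    metadata:
      labels:
        app: l4-inference
    spec:
      nodeSelector:
        cloud.google.com/compute-class: prefer-l4-spot   # opts in to the compute class above
      containers:
      - name: inference
        image: us-docker.pkg.dev/example/inference:latest  # illustrative image
        resources:
          limits:
            nvidia.com/gpu: 2   # matches the GPU count in the compute class priorities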

If you can't use custom compute classes, add a node affinity, taint, or toleration. For example, the following node affinity rule declares a preference for scheduling Pods on nodes that are backed by Spot VMs (GKE automatically adds the cloud.google.com/gke-spot=true label to these types of nodes):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: cloud.google.com/gke-spot
          operator: In
          values:
          - "true"

To learn more about using node affinities, taints, and tolerations to schedule Spot VMs, see the Running a GKE application on spot nodes with on-demand nodes as fallback blog.

ProvisioningRequest CRD

A ProvisioningRequest is a namespaced custom resource that lets users request capacity for a group of Pods from the cluster autoscaler. This is particularly useful for applications with interconnected Pods that must be scheduled together as a single unit.

Supported Provisioning Classes

GKE supports three ProvisioningClasses, including CheckCapacity (check-capacity.autoscaling.x-k8s.io) and BestEffortAtomicScaleUp (best-effort-atomic-scale-up.autoscaling.x-k8s.io).

To learn more about the CheckCapacity and BestEffortAtomicScaleUp classes, refer to the open-source documentation.
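
As a sketch of what a request looks like, the following manifest asks the cluster autoscaler to verify capacity for four identical Pods using the CheckCapacity class (the API version depends on the ProvisioningRequest CRD version installed in your cluster, and the PodTemplate name and count are illustrative):

apiVersion: autoscaling.x-k8s.io/v1
kind: ProvisioningRequest
metadata:
  name: batch-capacity-check        # illustrative name
  namespace: default
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  podSets:
  - count: 4                        # number of identical Pods to provision capacity for
    podTemplateRef:
      name: batch-worker-template   # illustrative PodTemplate in the same namespace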

Limitations when using ProvisioningRequest

Best practices when using ProvisioningRequest

Backoff periods

A scale-up operation can fail due to node creation errors such as insufficient quota or IP address exhaustion. When these errors occur, the underlying Managed Instance Group (MIG) retries the operation after an initial five-minute backoff. If errors continue, this backoff period increases exponentially to a maximum of 30 minutes. During this time, the cluster autoscaler can still scale up other node pools in the cluster that aren't experiencing errors.

Additional information

You can find more information about cluster autoscaler in the Autoscaling FAQ in the open-source Kubernetes project.

Limitations

Cluster autoscaler has a number of limitations. For details, see cluster autoscaler limitations.

Known issues

Troubleshooting

For troubleshooting advice, see the GKE troubleshooting pages for cluster autoscaler.

What's next
