This page explains how Google Kubernetes Engine (GKE) automatically resizes your Standard cluster's node pools based on the demands of your workloads. When demand is high, the cluster autoscaler adds nodes to the node pool. To learn how to configure the cluster autoscaler, see Autoscaling a cluster.
This page is for Admins, Architects and Operators who plan capacity and infrastructure needs, and optimize systems architecture and resources to achieve the lowest total cost of ownership for their company or business unit. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
With Autopilot clusters, you don't need to worry about provisioning nodes or managing node pools because node pools are automatically provisioned through node auto-provisioning, and are automatically scaled to meet the requirements of your workloads.
Before reading this page, ensure that you're familiar with basic Kubernetes concepts, and how resource requests and limits work.
Best practice: Plan and design your cluster configuration with your organization's Admins and Architects, Developers, or any other team that is responsible for implementing and maintaining your application.
Why use cluster autoscaler

GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads. When demand is low, the cluster autoscaler scales back down to a minimum size that you designate. This can increase the availability of your workloads when you need it, while controlling costs. You don't need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic.
If resources are deleted or moved when autoscaling your cluster, your workloads might experience transient disruption. For example, if your workload consists of a controller with a single replica, that replica's Pod might be rescheduled onto a different node if its current node is deleted. Before enabling cluster autoscaler, design your workloads to tolerate potential disruption or ensure that critical Pods are not interrupted.
Best practice: To increase your workload's tolerance to interruption, deploy your workload by using a controller with multiple replicas, such as a Deployment.
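For instance, a minimal Deployment sketch like the following (the name and image are illustrative) keeps multiple replicas running, so losing any single node during a scale-down doesn't take the workload offline:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server        # illustrative name
spec:
  replicas: 3             # multiple replicas tolerate the loss of any one node
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Pairing this with a PodDisruptionBudget further limits how many replicas can be evicted at once during node drains.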
You can increase cluster autoscaler performance with Image streaming, which remotely streams required image data from eligible container images while simultaneously caching the image locally, allowing workloads on new nodes to start faster.
How cluster autoscaler works

Cluster autoscaler works per node pool. When you configure a node pool with cluster autoscaler, you specify a minimum and maximum size for the node pool.
Cluster autoscaler increases or decreases the size of the node pool automatically by adding or removing virtual machine (VM) instances in the underlying Compute Engine Managed Instance Group (MIG) for the node pool. Cluster autoscaler makes these scaling decisions based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:

- If Pods are unschedulable because there are not enough nodes in the node pool, cluster autoscaler adds nodes, up to the maximum size of the node pool.
- If nodes are underutilized, and all Pods could be scheduled even with fewer nodes in the node pool, cluster autoscaler removes nodes, down to the minimum size of the node pool.

The frequency at which cluster autoscaler inspects a cluster for unschedulable Pods largely depends on the cluster's size. In small clusters, the inspection might happen every few seconds. It is not possible to define an exact timeframe required for this inspection.
If your nodes are experiencing shortages because your Pods requested too few resources, or kept defaults that are insufficient, the cluster autoscaler does not correct the situation. You can help ensure that the cluster autoscaler works as accurately as possible by making explicit resource requests for all of your workloads.
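For example, a container spec fragment with explicit requests (the name, image, and values are illustrative) gives the autoscaler concrete numbers to plan against:

```yaml
containers:
- name: worker            # illustrative name
  image: busybox:1.36
  resources:
    requests:             # cluster autoscaler sums requests, not observed usage
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi
```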
Don't enable Compute Engine autoscaling for the managed instance groups that back your cluster nodes. GKE's cluster autoscaler is separate from Compute Engine autoscaling, and the two autoscalers conflict with each other, which can cause node pools to fail to scale up or scale down.
Operating criteria

When resizing a node pool, the cluster autoscaler makes the following assumptions:

- All replicated Pods can be restarted on some other node, possibly causing a brief disruption.
- Nodes in the node pool have the labels that you specified with --node-labels at the time of node pool creation; labels that you add manually later are not tracked by the cluster autoscaler.

Don't enable the cluster autoscaler if your applications are not disruption-tolerant.
Balancing across zones

If your node pool contains multiple managed instance groups with the same instance type, the cluster autoscaler attempts to keep these managed instance group sizes balanced when scaling up. This helps prevent an uneven distribution of nodes among managed instance groups in multiple zones of a node pool. GKE does not consider the autoscaling policy when scaling down.
Cluster autoscaler only balances across zones during a scale-up event. Cluster autoscaler scales down underutilized nodes regardless of the relative sizes of underlying managed instance groups in a node pool, which can cause the nodes to be distributed unevenly across zones.
Location policy

Starting in GKE version 1.24.1-gke.800, you can change the location policy of the cluster autoscaler. You can control the cluster autoscaler distribution policy by specifying the location_policy flag with one of the following values:

- BALANCED: The cluster autoscaler considers Pod requirements and the availability of resources in each zone. This does not guarantee that similar node groups will have exactly the same sizes, because the cluster autoscaler considers many factors, including the available capacity in a given zone and the zone affinities of the Pods that triggered the scale-up.
- ANY: The cluster autoscaler prioritizes utilization of unused reservations and accounts for current constraints on available resources.

Use the ANY policy if you are using Spot VMs or if you want to use VM reservations that are not equal between zones.
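As a sketch, assuming you create the node pool with gcloud (the cluster, pool, and region names are illustrative), you could select the ANY policy like this:

```shell
# Sketch: autoscaled Spot VM node pool that uses the ANY location policy.
gcloud container node-pools create spot-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --spot \
    --enable-autoscaling --total-min-nodes=0 --total-max-nodes=10 \
    --location-policy=ANY
```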
Starting in GKE version 1.27, the cluster autoscaler always considers reservations when making scale-up decisions. Node pools with matching unused reservations are prioritized when choosing the node pool to scale up, even when the node pool is not the most efficient one. Additionally, unused reservations are always prioritized when balancing multi-zonal scale-ups.
However, the cluster autoscaler checks for reservations only in its own project. As a result, if a less expensive node option is available within the cluster's own project, the autoscaler might select that option instead of the shared reservation. If you need to share reservations across projects, consider using custom compute classes, which let you configure the priority that the cluster autoscaler uses to scale nodes, including shared reservations.
Default values

For Spot VM node pools, the default cluster autoscaler distribution policy is ANY. In this policy, Spot VMs have a lower risk of being preempted.

For non-preemptible node pools, the default cluster autoscaler distribution policy is BALANCED.
When creating a new node pool, you can specify the minimum and maximum size for each node pool in your cluster, and the cluster autoscaler makes rescaling decisions within these scaling constraints. To update the minimum size, manually resize the cluster to a size within the new constraints after specifying the new minimum value. The cluster autoscaler then makes rescaling decisions based on the new constraints.
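For example, after raising the minimum, you might manually bring the node pool back within the new constraints (the cluster, pool, and zone names are illustrative):

```shell
# Sketch: resize the node pool so its current size satisfies the new minimum.
gcloud container clusters resize example-cluster \
    --node-pool=default-pool \
    --num-nodes=3 \
    --location=us-central1-a
```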
Current node pool size | Cluster autoscaler action | Scaling constraints
Lower than the minimum you specified | Scales up to provision pending Pods. Scaling down is disabled. | The node pool does not scale down below the value you specified.
Within the minimum and maximum size you specified | Scales up or down according to demand. | The node pool stays within the size limits you specified.
Greater than the maximum you specified | Scales down only the nodes that can be safely removed. Scaling up is disabled. | The node pool does not scale above the value you specified.

On Standard clusters, the cluster autoscaler never automatically scales a cluster down to zero nodes. One or more nodes must always be available in the cluster to run system Pods. However, if the current number of nodes is zero because nodes were manually removed, the cluster autoscaler and node auto-provisioning can scale up from zero nodes.
To learn more about autoscaler decisions, see cluster autoscaler limitations.
Autoscaling limits

You can set the minimum and maximum number of nodes for the cluster autoscaler to use when scaling a node pool. Use the --min-nodes and --max-nodes flags to set the minimum and maximum number of nodes per zone.

Starting in GKE version 1.24, you can use the --total-min-nodes and --total-max-nodes flags for new clusters. These flags set the minimum and maximum total number of nodes in the node pool across all zones.
Min and max nodes example
The following command creates an autoscaling multi-zonal cluster with six nodes across three zones initially, with a minimum of one node per zone and a maximum of four nodes per zone:
gcloud container clusters create example-cluster \
    --num-nodes=2 \
    --location=us-central1-a \
    --node-locations=us-central1-a,us-central1-b,us-central1-f \
    --enable-autoscaling --min-nodes=1 --max-nodes=4
In this example, the total size of the cluster can be between three and twelve nodes, spread across the three zones. If one of the zones fails, the total size of the cluster can be between two and eight nodes.
Total nodes example
The following command, available in GKE version 1.24 or later, creates an autoscaling multi-zonal cluster with six nodes across three zones initially, with a minimum of three nodes and a maximum of twelve nodes in the node pool across all zones:
gcloud container clusters create example-cluster \
    --num-nodes=2 \
    --location=us-central1-a \
    --node-locations=us-central1-a,us-central1-b,us-central1-f \
    --enable-autoscaling --total-min-nodes=3 --total-max-nodes=12
In this example, the total size of the cluster can be between three and twelve nodes, regardless of spreading between zones.
Autoscaling profiles

The decision of when to remove a node is a trade-off between optimizing for utilization and optimizing for the availability of resources. Removing underutilized nodes improves cluster utilization, but new workloads might have to wait for resources to be provisioned again before they can run.

You can specify which autoscaling profile to use when making such decisions. The available profiles are:

- balanced: The default profile for Standard clusters. It prioritizes keeping spare resources readily available for incoming Pods, which reduces the time needed to make those Pods active. The balanced profile isn't available for Autopilot clusters.
- optimize-utilization: Prioritizes optimizing utilization over keeping spare resources in the cluster. When you enable this profile, the cluster autoscaler scales down the cluster more aggressively: GKE can remove more nodes, and remove nodes faster. GKE prefers to schedule Pods on nodes that already have high allocation of CPU, memory, or GPUs. However, other factors influence scheduling, such as the spread of Pods belonging to the same Deployment, StatefulSet, or Service across nodes.

The optimize-utilization autoscaling profile helps the cluster autoscaler identify and remove underutilized nodes. To achieve this optimization, GKE sets the scheduler name in the Pod spec to gke.io/optimize-utilization-scheduler. Pods that specify a custom scheduler are not affected.
The following command enables the optimize-utilization autoscaling profile in an existing cluster:

gcloud container clusters update CLUSTER_NAME \
    --autoscaling-profile optimize-utilization
Considering Pod scheduling and disruption
When scaling down, the cluster autoscaler respects scheduling and eviction rules set on Pods. These restrictions can prevent a node from being deleted by the autoscaler. For example, a node's deletion can be prevented if it contains a Pod with the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation.

For more information about cluster autoscaler and preventing disruptions, see the Cluster autoscaler FAQ.
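For example, a Pod annotated as follows (the name and image are illustrative) blocks the cluster autoscaler from deleting the node it runs on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-task     # illustrative name
  annotations:
    # Tells cluster autoscaler not to remove the node hosting this Pod.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: task
    image: busybox:1.36
    command: ["sleep", "infinity"]
```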
GKE supports Tensor Processing Units (TPUs) to accelerate machine learning workloads. Both single-host TPU slice node pools and multi-host TPU slice node pools support autoscaling and auto-provisioning.
With the --enable-autoprovisioning
flag on a GKE cluster, GKE creates or deletes single-host or multi-host TPU slice node pools with a TPU version and topology that meets the requirements of pending workloads.
When you use --enable-autoscaling
, GKE scales the node pool based on its type, as follows:
Single-host TPU slice node pool: GKE adds or removes TPU nodes in the existing node pool. The node pool can contain any number of TPU nodes between zero and the maximum size of the node pool, as determined by the --max-nodes and --total-max-nodes flags. When the node pool scales, all the TPU nodes in the node pool have the same machine type and topology. To learn more about how to create a single-host TPU slice node pool, see Create a node pool.
Multi-host TPU slice node pool: GKE atomically scales up the node pool from zero to the number of nodes required to satisfy the TPU topology. For example, for a TPU node pool with the ct5lp-hightpu-4t machine type and a 16x16 topology, the node pool contains 64 nodes. The GKE autoscaler ensures that this node pool has exactly 0 or 64 nodes. When scaling back down, GKE evicts all scheduled Pods and drains the entire node pool to zero. To learn more about how to create a multi-host TPU slice node pool, see Create a node pool.
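As a sketch, a multi-host TPU slice node pool for a 16x16 topology might be created like this (the cluster name, location, and flag availability are assumptions; check your GKE version):

```shell
# Sketch: autoscaled multi-host TPU slice node pool (scales atomically between 0 and 64).
gcloud container node-pools create tpu-pool \
    --cluster=example-cluster \
    --location=us-east5 \
    --node-locations=us-east5-a \
    --machine-type=ct5lp-hightpu-4t \
    --tpu-topology=16x16 \
    --enable-autoscaling --total-min-nodes=0 --total-max-nodes=64
```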
Because the cluster autoscaler prefers to expand the least expensive node pools, it adds Spot VMs when scaling up, provided that your workloads and resource availability allow it.
However, even though cluster autoscaler prefers adding Spot VMs, this preference doesn't guarantee that the majority of your Pods will run on these types of VMs. Spot VMs can be preempted. Because of this preemption, Pods on Spot VMs are more likely to be evicted. When they're evicted, they only have 15 seconds to terminate.
For example, imagine a scenario where you have 10 Pods running on a mixture of on-demand and Spot VMs: if the Spot VMs are preempted, the Pods running on them are evicted and can end up rescheduled onto on-demand nodes, so only a minority of your Pods might actually run on Spot VMs.
To prioritize Spot VMs, and avoid the preceding scenario, we recommend that you use custom compute classes. Custom compute classes let you create priority rules that favor Spot VMs during scale-up by giving them higher priority than on-demand nodes. To further maximize the likelihood of your Pods running on nodes backed by Spot VMs, configure active migration.
The following example shows you one way to use custom compute classes to prioritize Spot VMs:
apiVersion: cloud.google.com/v1
kind: ComputeClass
metadata:
  name: prefer-l4-spot
spec:
  priorities:
  - machineType: g2-standard-24
    spot: true
    gpu:
      type: nvidia-l4
      count: 2
  - machineType: g2-standard-24
    spot: false
    gpu:
      type: nvidia-l4
      count: 2
  nodePoolAutoCreation:
    enabled: true
  activeMigration:
    optimizeRulePriority: true
In the preceding example, the priority rule declares a preference for creating nodes with the g2-standard-24 machine type and Spot VMs. If Spot VMs aren't available, GKE uses on-demand VMs as a fallback. This compute class also enables activeMigration, which lets the cluster autoscaler migrate workloads to Spot VMs when capacity becomes available.
If you can't use custom compute classes, add a node affinity, taint, or toleration. For example, the following node affinity rule declares a preference for scheduling Pods on nodes that are backed by Spot VMs (GKE automatically adds the cloud.google.com/gke-spot=true
label to these types of nodes):
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: cloud.google.com/gke-spot
          operator: In
          values:
          - "true"
To learn more about using node affinities, taints, and tolerations to schedule Spot VMs, see the Running a GKE application on spot nodes with on-demand nodes as fallback blog.
ProvisioningRequest CRD

A ProvisioningRequest is a namespaced custom resource that lets users request capacity for a group of Pods from the cluster autoscaler. This is particularly useful for applications with interconnected Pods that must be scheduled together as a single unit.
Supported ProvisioningClasses

There are three supported ProvisioningClasses:

- queued-provisioning.gke.io: This GKE-specific class integrates with Dynamic Workload Scheduler and lets you queue requests and have them fulfilled when resources become available. This is ideal for batch jobs or delay-tolerant workloads. See Deploy GPUs for batch and AI workloads with Dynamic Workload Scheduler to learn how to use queued provisioning in GKE. Supported from GKE version 1.28.3-gke.1098000 in Standard clusters and from GKE version 1.30.3-gke.1451000 in Autopilot clusters.
- check-capacity.autoscaling.x-k8s.io: This open-source class verifies the availability of resources before it attempts to schedule Pods. Supported from GKE version 1.30.2-gke.1468000.
- best-effort-atomic.autoscaling.x-k8s.io: This open-source class attempts to provision resources for all Pods in the request together. If it can't provision enough resources for all Pods, no resources are provisioned and the entire request fails. Supported from GKE version 1.31.27.
To learn more about the CheckCapacity and BestEffortAtomicScaleUp classes, refer to the open-source documentation.
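As an illustrative sketch of the ProvisioningRequest shape (the API version, names, and namespace are assumptions; field names follow the open-source API), a queued request might look like:

```yaml
apiVersion: autoscaling.x-k8s.io/v1beta1   # assumption: version may differ by release
kind: ProvisioningRequest
metadata:
  name: batch-capacity            # illustrative name
  namespace: default
spec:
  provisioningClassName: queued-provisioning.gke.io
  podSets:
  - count: 4                      # capacity for four identical Pods, provisioned together
    podTemplateRef:
      name: batch-pod-template    # a PodTemplate in the same namespace
```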
Limitations when using ProvisioningRequest

When you use ProvisioningRequest, consider the following settings:

- total-max-nodes: Instead of limiting the maximum number of nodes (--max-nodes), use --total-max-nodes to constrain the total resources that are consumed by your application.
- location-policy=ANY: This setting allows your Pods to be scheduled in any available location, which can expedite provisioning and optimize resource utilization.

A scale-up operation can fail due to node creation errors such as insufficient quota or IP address exhaustion. When these errors occur, the underlying Managed Instance Group (MIG) retries the operation after an initial five-minute backoff. If errors continue, this backoff period increases exponentially to a maximum of 30 minutes. During this time, the cluster autoscaler can still scale up other node pools in the cluster that aren't experiencing errors.
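The two settings above can be combined when you create a node pool for ProvisioningRequest workloads; a sketch (names are illustrative, and flag availability depends on your gcloud version):

```shell
# Sketch: node pool constrained by total nodes and using the ANY location policy.
gcloud container node-pools create dws-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --enable-autoscaling \
    --total-min-nodes=0 \
    --total-max-nodes=32 \
    --location-policy=ANY
```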
Additional information

You can find more information about cluster autoscaler in the Autoscaling FAQ in the open-source Kubernetes project.
Limitations

Cluster autoscaler has the following limitations:

- Pods with a PriorityClass value below -10 don't cause the cluster autoscaler to scale up. Learn more in How does Cluster Autoscaler work with Pod Priority and Preemption?
- If a scale-up fails because the cluster is out of IP addresses, GKE generates eventResult events with the reason scale.up.error.ip.space.exhausted. You can add more IP addresses for nodes by expanding the primary subnet, or add new IP addresses for Pods by using discontiguous multi-Pod CIDR. For more information, see Not enough free IP space for Pods.
- When you delete a node pool, its nodes get a taint with the NoSchedule flag set, and any Pods on those nodes are immediately evicted. To mitigate the sudden decrease in available resources, the autoscaler of the node pool might provision new nodes within the same node pool. These newly created nodes become available for scheduling, and evicted Pods are scheduled back onto them. Eventually, the entire node pool, including the newly provisioned nodes and their Pods, is deleted, which can lead to potential service interruptions. As a workaround, to prevent the autoscaler from provisioning new nodes during deletion, disable autoscaling on the node pool before you initiate deletion.
- The cluster autoscaler might not scale down nodes that run Pods with topology spread constraints whose whenUnsatisfiable field is set to the DoNotSchedule value. You can soften the spread requirements by setting the whenUnsatisfiable field to the ScheduleAnyway value.

For troubleshooting advice, see the GKE cluster autoscaler troubleshooting pages.
What's next