Define compact placement for GKE nodes

You can control whether your Google Kubernetes Engine (GKE) nodes are physically located closer to each other within a zone by using a compact placement policy.

Overview

When you create node pools and workloads in a GKE cluster, you can define a compact placement policy, which specifies that these nodes or workloads should be placed in closer physical proximity to each other within a zone. Having nodes closer to each other can reduce network latency between nodes, which can be especially useful for tightly-coupled batch workloads.

Use compact placement with GKE Autopilot

In Autopilot clusters, you can request compact placement for specific workloads by adding node selectors to your Pod specification. You can use the default Autopilot compact placement policy or an existing Compute Engine compact placement policy that uses the N2 machine series or the N2D machine series.

Limitations

Enable a compact placement policy

To enable compact placement for GKE Autopilot, add a nodeSelector to the Pod specification with the keys shown in the following example.

The following example Pod specification enables compact placement with a custom compact placement policy:

apiVersion: v1
kind: Pod
metadata:
# lines omitted for clarity
spec:
  nodeSelector:
    cloud.google.com/gke-placement-group: "placement-group-1"
    cloud.google.com/compute-class: "Balanced"
    cloud.google.com/placement-policy-name: PLACEMENT_POLICY_NAME

Replace PLACEMENT_POLICY_NAME with the name of an existing Compute Engine compact placement policy. To use the default compact placement policy for Autopilot, omit the cloud.google.com/placement-policy-name line.
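For comparison, the following is a minimal sketch of the same Pod specification using the default Autopilot compact placement policy; the placement group identifier is an arbitrary value that you choose, and the Balanced compute class is carried over from the example above:

apiVersion: v1
kind: Pod
metadata:
# lines omitted for clarity
spec:
  nodeSelector:
    # An identifier that you choose; Pods that share it are grouped together.
    cloud.google.com/gke-placement-group: "placement-group-1"
    cloud.google.com/compute-class: "Balanced"
  # The cloud.google.com/placement-policy-name selector is omitted, so the
  # default Autopilot compact placement policy is used.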

Use a custom compact placement policy without placement groups

To use a custom compact placement policy without placement groups, you must add the cloud.google.com/placement-policy-name node selector to your Pod specification.

This approach might be useful if you want to use a JobSet to schedule each job separately, but you also want to use a custom compact placement policy to place the nodes that run the same job closer to each other.

Because JobSet doesn't support specifying different node selectors for each job, you can't use JobSet with placement groups in this scenario. However, you can use JobSet's built-in support for exclusive topologies to achieve the same effect.

The following example Pod specification enables compact placement with a custom compact placement policy for a JobSet workload:

apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: my-jobset
  annotations:
    alpha.jobset.sigs.k8s.io/exclusive-topology: cloud.google.com/gke-nodepool
spec:
  replicatedJobs:
    - name: my-job
      template:
        spec:
          # lines omitted for clarity
          template:
            spec:
              nodeSelector:
                cloud.google.com/placement-policy-name: PLACEMENT_POLICY_NAME
                cloud.google.com/machine-family: "n2"
              # lines omitted for clarity

Replace PLACEMENT_POLICY_NAME with the name of an existing Compute Engine compact placement policy.
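As a usage sketch, assuming the JobSet API (CRDs and controller) is already installed in the cluster and the manifest above is saved as jobset.yaml (both are assumptions not covered on this page), you can apply and inspect it with kubectl:

# Apply the JobSet manifest; requires the JobSet API to be installed.
kubectl apply -f jobset.yaml

# List the Jobs and Pods that the JobSet creates.
kubectl get jobs
kubectl get pods -o wide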

Use compact placement with GKE Standard

Limitations

Compact placement in GKE Standard node pools has the following limitations:

Create a compact placement policy

To create a compact placement policy, specify the --placement-type=COMPACT option in the Google Cloud CLI during node pool or cluster creation. With this setting, GKE attempts to place nodes within a node pool in closer physical proximity to each other.

To use an existing resource policy in your cluster, specify your custom resource policy with the --placement-policy flag during node pool or cluster creation. This gives you the flexibility of using reserved placements, multiple node pools with the same placement policy, and other advanced placement options. However, it also requires more manual operations than specifying the --placement-type=COMPACT flag. For example, you need to create, delete, and maintain your custom resource policies. Make sure that the maximum number of VM instances allowed by the resource policy is respected across all node pools that use it. If this limit is reached while some of your node pools haven't reached their maximum size, adding any more nodes fails.

If you don't specify the --placement-type or --placement-policy flag, then by default there are no requirements on node placement.

Create a compact placement policy in a new cluster

When you create a new cluster, you can specify a compact placement policy that is applied to the default node pool. For any node pools that you create in the cluster afterward, you need to specify whether to apply compact placement.

To create a new cluster where the default node pool has a compact placement policy applied, use the following command:

gcloud container clusters create CLUSTER_NAME \
    --machine-type MACHINE_TYPE \
    --placement-type COMPACT \
    --max-surge-upgrade 0 \
    --max-unavailable-upgrade MAX_UNAVAILABLE

Replace the following:

Create a compact placement policy on an existing cluster

On an existing cluster, you can create a node pool that has a compact placement policy applied.

To create a node pool that has a compact placement policy applied, use the following command:

gcloud container node-pools create NODEPOOL_NAME \
    --machine-type MACHINE_TYPE \
    --cluster CLUSTER_NAME \
    --placement-type COMPACT \
    --max-surge-upgrade 0 \
    --max-unavailable-upgrade MAX_UNAVAILABLE

Replace the following:
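After the node pool is created, you can confirm that the policy was applied by describing the node pool. The placementPolicy field used in the output filter below reflects the GKE API and is an assumption not shown elsewhere on this page:

gcloud container node-pools describe NODEPOOL_NAME \
    --cluster CLUSTER_NAME \
    --format="yaml(placementPolicy)"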

You can manually create a resource policy and use it in multiple node pools.

  1. Create the resource policy in the Google Cloud region of your cluster:

    gcloud compute resource-policies create group-placement POLICY_NAME \
        --region REGION \
        --collocation collocated
    

    Replace the following:

  2. Create a node pool using the custom resource policy:

    gcloud container node-pools create NODEPOOL_NAME \
        --machine-type MACHINE_TYPE \
        --cluster CLUSTER_NAME \
        --placement-policy POLICY_NAME \
        --max-surge-upgrade 0 \
        --max-unavailable-upgrade MAX_UNAVAILABLE
    

    Replace the following:
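Optionally, to confirm that the resource policy uses compact collocation before you attach more node pools to it, describe the policy (a sketch reusing the placeholders above):

# Inspect the resource policy; the groupPlacementPolicy section shows the
# collocation setting.
gcloud compute resource-policies describe POLICY_NAME \
    --region REGION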

Use a Compute Engine reservation with a compact placement policy

Reservations help you guarantee that hardware is available in a specified zone, reducing the risk of node pool creation failure caused by insufficient hardware.

  1. Create a reservation that specifies a compact placement policy:

    gcloud compute reservations create RESERVATION_NAME \
        --vm-count MACHINE_COUNT \
        --machine-type MACHINE_TYPE \
        --resource-policies policy=POLICY_NAME \
        --zone ZONE \
        --require-specific-reservation
    

    Replace the following:

  2. Create a node pool by specifying both the compact placement policy and the reservation you created in the previous step:

    gcloud container node-pools create NODEPOOL_NAME \
        --machine-type MACHINE_TYPE \
        --cluster CLUSTER_NAME \
        --placement-policy POLICY_NAME \
        --reservation-affinity specific \
        --reservation RESERVATION_NAME \
        --max-surge-upgrade 0 \
        --max-unavailable-upgrade MAX_UNAVAILABLE
    

Replace the following:
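Optionally, to check how much of the reservation the node pool consumes as it scales, describe the reservation (a sketch reusing the placeholders above):

# Show the reservation, including its total and in-use VM counts.
gcloud compute reservations describe RESERVATION_NAME \
    --zone ZONE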

Create a workload on nodes that use compact placement

To run workloads on dedicated nodes that use compact placement, you can use several Kubernetes mechanisms, such as assigning Pods to specific nodes and preventing unwanted Pods from being scheduled on a group of nodes.

In the following example, we add a taint to the dedicated nodes and add a corresponding toleration and node affinity to the Pods.

  1. Add a taint to nodes in the node pool that has a compact placement policy:

    kubectl taint nodes -l cloud.google.com/gke-nodepool=NODEPOOL_NAME dedicated-pool=NODEPOOL_NAME:NoSchedule
    
  2. In the workload definition, specify the necessary toleration and a node affinity. Here's an example with a single Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      ...
    spec:
      ...
      tolerations:
      - key: dedicated-pool
        operator: "Equal"
        value: "NODEPOOL_NAME"
        effect: "NoSchedule"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: dedicated-pool
                operator: In
                values:
                - NODEPOOL_NAME
    
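As an alternative to tainting existing nodes with kubectl, you can apply the taint, and a matching node label for the affinity rule, when you create the node pool. The following is a sketch that combines the flags from the earlier examples with the --node-taints and --node-labels flags; the dedicated-pool key is just the example key used above:

gcloud container node-pools create NODEPOOL_NAME \
    --machine-type MACHINE_TYPE \
    --cluster CLUSTER_NAME \
    --placement-type COMPACT \
    --node-taints dedicated-pool=NODEPOOL_NAME:NoSchedule \
    --node-labels dedicated-pool=NODEPOOL_NAME \
    --max-surge-upgrade 0 \
    --max-unavailable-upgrade MAX_UNAVAILABLE

With this approach, new nodes are created with the taint and label already in place, so the toleration and node affinity in the Pod example above match without any manual kubectl steps.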

In some locations, it might not be possible to create a large node pool that uses a compact placement policy. To limit the size of such node pools to what's necessary, consider creating a separate node pool for each workload that requires compact placement.

Use compact placement for node auto-provisioning

With node auto-provisioning, GKE automatically provisions node pools based on cluster resource demand. For more information, see Using node auto-provisioning.

To enable compact placement for node auto-provisioning, add a nodeSelector to the Pod specification, as in the following example:

apiVersion: v1
kind: Pod
metadata:
# lines omitted for clarity
spec:
  nodeSelector:
    cloud.google.com/gke-placement-group: PLACEMENT_GROUP_IDENTIFIER
    cloud.google.com/machine-family: MACHINE_FAMILY
    cloud.google.com/placement-policy-name: PLACEMENT_POLICY_NAME
# lines omitted for clarity

Replace the following:

You can omit the cloud.google.com/machine-family key if the Pod configuration already defines a machine type that is supported for compact placement. For example, if the Pod specification includes nvidia.com/gpu and the cluster is configured to use A100 GPUs, you don't need to include the cloud.google.com/machine-family key.

The following example is a Pod specification that defines an nvidia.com/gpu request for a cluster configured to use A100 GPUs. This Pod spec doesn't include the cloud.google.com/machine-family key:

apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  ...
  nodeSelector:
    cloud.google.com/gke-placement-group: PLACEMENT_GROUP_IDENTIFIER
    cloud.google.com/gke-accelerator: "nvidia-tesla-a100"
  containers:
  - name: gpu-container # illustrative container name
    # lines omitted for clarity
    resources:
      limits:
        nvidia.com/gpu: 2

To learn more, see how to configure Pods to consume GPUs.

Optimize placement group size

Because GKE finds the best placement for smaller deployments, we recommend that you instruct GKE to avoid running different types of Pods in the same placement group. Add a toleration with the cloud.google.com/gke-placement-group key and the compact placement identifier that you defined.

The following example is a Pod specification that defines a Pod toleration with compact placement:

apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  ...
  tolerations:
  - key: cloud.google.com/gke-placement-group
    operator: "Equal"
    value: PLACEMENT_GROUP_IDENTIFIER
    effect: "NoSchedule"

For more information about node auto-provisioning with Pod toleration, see Workload separation.

What's next
