
Run a large-scale workload with flex-start with queued provisioning

Note: Flex-start with queued provisioning supports new flags that are part of the flex-start preview launch.

This page shows you how to optimize GPU obtainability for large-scale batch and AI workloads by using flex-start with queued provisioning, which is powered by Dynamic Workload Scheduler.

Before reading this page, ensure that you're familiar with the following:

This guide is intended for Machine learning (ML) engineers, Platform admins and operators, and Data and AI specialists who are interested in using Kubernetes container orchestration capabilities to run batch workloads. For more information about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.

How flex-start with queued provisioning works

With flex-start with queued provisioning, GKE allocates all requested resources at the same time. Flex-start with queued provisioning uses the following tools:

To use flex-start with queued provisioning, you have to add the --flex-start and --enable-queued-provisioning flags when you create the node pool.

Best practice:

Use flex-start with queued provisioning for large-scale batch and AI workloads when your workloads meet the following criteria:

For smaller workloads that can run on a single node, use flex-start. For more information about GPU provisioning in GKE, see Obtain accelerators for AI workloads.

Before you begin

Before you start, make sure that you have performed the following tasks:

Use node pools with flex-start with queued provisioning

This section applies to Standard clusters only.

You can use any of the following methods to designate that flex-start with queued provisioning can work with specific node pools in your cluster:

Create a node pool

Create a node pool that has flex-start with queued provisioning enabled by using the gcloud CLI:

gcloud container node-pools create NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION \
    --enable-queued-provisioning \
    --accelerator type=GPU_TYPE,count=AMOUNT,gpu-driver-version=DRIVER_VERSION \
    --machine-type=MACHINE_TYPE \
    --flex-start \
    --enable-autoscaling \
    --num-nodes=0 \
    --total-max-nodes=TOTAL_MAX_NODES \
    --location-policy=ANY \
    --reservation-affinity=none \
    --no-enable-autorepair

Replace the following:

Optionally, you can use the following flags:

This command creates a node pool with the following configuration:
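To confirm the resulting configuration, you can describe the node pool after it's created. This is a minimal check that uses the same placeholders as the preceding command:

# Inspect the node pool, including its flex-start and queued provisioning settings
gcloud container node-pools describe NODEPOOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=LOCATION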

Enable node auto-provisioning to create node pools for flex-start with queued provisioning

You can use node auto-provisioning to manage node pools for flex-start with queued provisioning for clusters running version 1.29.2-gke.1553000 or later. When you enable node auto-provisioning, GKE creates node pools with the required resources for the associated workload.

To enable node auto-provisioning, consider the following settings and complete the steps in Configure GPU limits:
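As a minimal sketch of enabling node auto-provisioning with GPU limits on an existing cluster, you might run a command like the following. The limit values MAX_CPU, MAX_MEMORY, and MAX_GPUS are placeholder assumptions that you set based on your workloads; this is not a complete node auto-provisioning configuration:

# Enable node auto-provisioning and set cluster-wide resource limits,
# including limits for the GPU type that your workloads request.
gcloud container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --enable-autoprovisioning \
    --min-cpu=0 --max-cpu=MAX_CPU \
    --min-memory=0 --max-memory=MAX_MEMORY \
    --min-accelerator=type=GPU_TYPE,count=0 \
    --max-accelerator=type=GPU_TYPE,count=MAX_GPUS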

Run your batch and AI workloads with flex-start with queued provisioning

To run batch workloads with flex-start with queued provisioning, use any of the following configurations:

Best practice:

Use Kueue to run your batch and AI workloads with flex-start with queued provisioning.

Flex-start with queued provisioning for Jobs with Kueue

The following sections show you how to configure the flex-start with queued provisioning for Jobs with Kueue:

This section uses the samples in the dws-examples directory from the ai-on-gke repository. The samples are published under the Apache 2.0 license.

You need administrator permissions to install Kueue. To gain them, make sure that you're granted the roles/container.admin IAM role. To learn more about GKE IAM roles, see the Create IAM allow policies guide.
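For example, a project administrator can grant that role with the following command, where PROJECT_ID and USER_EMAIL are placeholders for your project and user account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/container.admin"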

Prepare your environment
  1. In Cloud Shell, run the following command:

    git clone https://github.com/GoogleCloudPlatform/ai-on-gke
    cd ai-on-gke/tutorials-and-examples/workflow-orchestration/dws-examples
    
  2. Install the latest Kueue version in your cluster:

    VERSION=KUEUE_VERSION
    kubectl apply --server-side -f https://github.com/kubernetes-sigs/kueue/releases/download/$VERSION/manifests.yaml
    

    Replace KUEUE_VERSION with the latest Kueue version.

If you use a Kueue version earlier than 0.7.0, change the Kueue feature gate configuration by setting the ProvisioningACC feature gate to true. See Kueue's feature gates for a more detailed explanation and the default gate values. For more information about installing Kueue, see Installation.
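The exact mechanism depends on how you installed Kueue. As an illustrative sketch only, assuming the default manifests (a kueue-system namespace with a kueue-controller-manager Deployment whose first container is the manager), you could add the feature gate flag like this:

# Assumption: the manager is the first container in the Deployment.
kubectl -n kueue-system patch deployment kueue-controller-manager \
    --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--feature-gates=ProvisioningACC=true"}]'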

Create the Kueue resources for the Dynamic Workload Scheduler node pool only setup

With the following manifest, you create a cluster-level queue named dws-cluster-queue and a LocalQueue named dws-local-queue in the default namespace. Jobs in this namespace that refer to the dws-cluster-queue queue use flex-start with queued provisioning to get the GPU resources.

This cluster queue has high quota limits, and only the flex-start with queued provisioning integration is enabled. For more information about Kueue APIs and how to set up limits, see Kueue concepts.
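The contents of dws-queues.yaml aren't reproduced on this page. As a rough, hedged sketch based on the resource names in the deployment output that follows, you could apply an equivalent set of resources inline instead of the repository file; the quota values here are illustrative placeholders, and the file in the repository is authoritative:

kubectl apply -f - <<'EOF'
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: AdmissionCheck
metadata:
  name: dws-prov
spec:
  controllerName: kueue.x-k8s.io/provisioning-request
  parameters:
    apiGroup: kueue.x-k8s.io
    kind: ProvisioningRequestConfig
    name: dws-config
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ProvisioningRequestConfig
metadata:
  name: dws-config
spec:
  provisioningClassName: queued-provisioning.gke.io
  managedResources:
  - nvidia.com/gpu
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: dws-cluster-queue
spec:
  namespaceSelector: {}      # admit Workloads from all namespaces
  admissionChecks:
  - dws-prov                 # gate admission on the ProvisioningRequest check
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 10000      # placeholder: intentionally high quota
      - name: "memory"
        nominalQuota: 10000Gi    # placeholder: intentionally high quota
      - name: "nvidia.com/gpu"
        nominalQuota: 10000      # placeholder: intentionally high quota
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: default
  name: dws-local-queue
spec:
  clusterQueue: dws-cluster-queue
EOF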

Deploy the LocalQueue and the other Kueue resources:

kubectl create -f ./dws-queues.yaml

The output is similar to the following:

resourceflavor.kueue.x-k8s.io/default-flavor created
admissioncheck.kueue.x-k8s.io/dws-prov created
provisioningrequestconfig.kueue.x-k8s.io/dws-config created
clusterqueue.kueue.x-k8s.io/dws-cluster-queue created
localqueue.kueue.x-k8s.io/dws-local-queue created

If you want to run Jobs that use flex-start with queued provisioning in other namespaces, you can create additional LocalQueues using the preceding template.

Run your Job

In the following manifest, the sample Job uses flex-start with queued provisioning:

This manifest includes the following fields that are relevant for the flex-start with queued provisioning configuration:
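The contents of job.yaml aren't reproduced on this page; the file in the repository is authoritative. As a rough, hedged sketch, an equivalent Job that you could create inline instead looks like the following. The fields that matter for this integration are the kueue.x-k8s.io/queue-name label that submits the Job to dws-local-queue, suspend: true so that Kueue controls when the Job starts, and a nodeSelector that targets the queued provisioning node pool; the maxRunDurationSeconds annotation and the container image are assumptions for illustration:

kubectl create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
  namespace: default
  labels:
    kueue.x-k8s.io/queue-name: dws-local-queue    # submit through the LocalQueue created earlier
  annotations:
    provreq.kueue.x-k8s.io/maxRunDurationSeconds: "600"   # assumed: forwarded to the ProvisioningRequest parameters
spec:
  parallelism: 1
  completions: 1
  suspend: true              # Kueue resumes the Job after the nodes are provisioned
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: NODEPOOL_NAME    # the flex-start with queued provisioning node pool
      tolerations:
      - key: "nvidia.com/gpu"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: sample-container
        image: busybox       # placeholder workload that sleeps for two minutes
        command: ["sleep", "120"]
        resources:
          requests:
            nvidia.com/gpu: 1
          limits:
            nvidia.com/gpu: 1
      restartPolicy: Never
EOF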

  1. Run your Job:

    kubectl create -f ./job.yaml
    

    The output is similar to the following:

    job.batch/sample-job created
    
  2. Check the status of your Job:

    kubectl describe job sample-job
    

    The output is similar to the following:

    Events:
      Type    Reason            Age    From                        Message
      ----    ------            ----   ----                        -------
      Normal  Suspended         5m17s  job-controller              Job suspended
      Normal  CreatedWorkload   5m17s  batch/job-kueue-controller  Created Workload: default/job-sample-job-7f173
      Normal  Started           3m27s  batch/job-kueue-controller  Admitted by clusterQueue dws-cluster-queue
      Normal  SuccessfulCreate  3m27s  job-controller              Created pod: sample-job-9qsfd
      Normal  Resumed           3m27s  job-controller              Job resumed
      Normal  Completed         12s    job-controller              Job completed
    

The flex-start with queued provisioning integration with Kueue also supports other workload types available in the open source ecosystem, such as the following:

For more information about this support, see Kueue's batch user.

Create the Kueue resources for Reservation and Dynamic Workload Scheduler node pool setup

With the following manifest, you create two ResourceFlavors tied to two different node pools: reservation-nodepool and dws-nodepool. The names of these node pools are only examples; modify them according to your node pool configuration. Additionally, with the ClusterQueue configuration, incoming Jobs try to use reservation-nodepool first, and if there is no capacity, these Jobs use Dynamic Workload Scheduler to get the GPU resources.

This cluster queue has high quota limits, and only the flex-start with queued provisioning integration is enabled. For more information about Kueue APIs and how to set up limits, see Kueue concepts.
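The contents of dws_and_reservation.yaml aren't reproduced on this page. As a rough, hedged sketch based on the resource names in the deployment output that follows, the file defines resources similar to the ones below, which you could apply inline instead of the repository file. The quota values are illustrative placeholders, and the admissionChecksStrategy block, which applies the Provisioning Request check only to the dws flavor, assumes a Kueue version that supports that field:

kubectl apply -f - <<'EOF'
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: reservation
spec:
  nodeLabels:
    cloud.google.com/gke-nodepool: reservation-nodepool   # example name; match your reserved node pool
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: dws
spec:
  nodeLabels:
    cloud.google.com/gke-nodepool: dws-nodepool            # example name; match your queued provisioning node pool
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: AdmissionCheck
metadata:
  name: dws-prov
spec:
  controllerName: kueue.x-k8s.io/provisioning-request
  parameters:
    apiGroup: kueue.x-k8s.io
    kind: ProvisioningRequestConfig
    name: dws-config
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ProvisioningRequestConfig
metadata:
  name: dws-config
spec:
  provisioningClassName: queued-provisioning.gke.io
  managedResources:
  - nvidia.com/gpu
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}
  admissionChecksStrategy:
    admissionChecks:
    - name: dws-prov
      onFlavors: [dws]       # only Workloads placed on the dws flavor wait for a ProvisioningRequest
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: reservation      # tried first
      resources:
      - name: "cpu"
        nominalQuota: RESERVED_CPU       # placeholder: size of your reservation
      - name: "memory"
        nominalQuota: RESERVED_MEMORY    # placeholder: size of your reservation
      - name: "nvidia.com/gpu"
        nominalQuota: RESERVED_GPUS      # placeholder: size of your reservation
    - name: dws              # used when the reservation flavor has no free quota
      resources:
      - name: "cpu"
        nominalQuota: 10000      # placeholder: intentionally high quota
      - name: "memory"
        nominalQuota: 10000Gi    # placeholder: intentionally high quota
      - name: "nvidia.com/gpu"
        nominalQuota: 10000      # placeholder: intentionally high quota
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: default
  name: user-queue
spec:
  clusterQueue: cluster-queue
EOF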

Deploy the manifest using the following command:

kubectl create -f ./dws_and_reservation.yaml

The output is similar to the following:

resourceflavor.kueue.x-k8s.io/reservation created
resourceflavor.kueue.x-k8s.io/dws created
clusterqueue.kueue.x-k8s.io/cluster-queue created
localqueue.kueue.x-k8s.io/user-queue created
admissioncheck.kueue.x-k8s.io/dws-prov created
provisioningrequestconfig.kueue.x-k8s.io/dws-config created

Run your Job

In contrast to the preceding setup, this manifest doesn't include the nodeSelector field, because Kueue fills it in based on the free capacity in the ClusterQueue.

  1. Run your Job:

    kubectl create -f ./job-without-node-selector.yaml
    

    The output is similar to the following:

    job.batch/sample-job-v8xwm created
    

To identify which node pool your Job uses, find out which ResourceFlavor your Job uses.
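A minimal way to check this, assuming your Job runs in the default namespace, is to inspect the Workload object that Kueue created for the Job. WORKLOAD_NAME is a placeholder for the Workload name shown in your Job's events:

# List the Workload objects that Kueue created for your Jobs
kubectl get workloads -n default

# Inspect one Workload; the assigned ResourceFlavor appears under its admission status
kubectl describe workload WORKLOAD_NAME -n default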

Troubleshooting

For more information about Kueue's troubleshooting, see Troubleshooting Provisioning Request in Kueue.

Flex-start with queued provisioning for Jobs without Kueue

Define a ProvisioningRequest object

Create a ProvisioningRequest for each Job. Flex-start with queued provisioning doesn't start the Pods; it only provisions the nodes.

  1. Create the following provisioning-request.yaml manifest:

    Standard
    apiVersion: v1
    kind: PodTemplate
    metadata:
      name: POD_TEMPLATE_NAME
      namespace: NAMESPACE_NAME
      labels:
        cloud.google.com/apply-warden-policies: "true"
    template:
      spec:
        nodeSelector:
          cloud.google.com/gke-nodepool: NODEPOOL_NAME
          cloud.google.com/gke-flex-start: "true"
        tolerations:
          - key: "nvidia.com/gpu"
            operator: "Exists"
            effect: "NoSchedule"
        containers:
          - name: pi
            image: perl
            command: ["/bin/sh"]
            resources:
              limits:
                cpu: "700m"
                nvidia.com/gpu: 1
              requests:
                cpu: "700m"
                nvidia.com/gpu: 1
        restartPolicy: Never
    ---
    apiVersion: autoscaling.x-k8s.io/API_VERSION
    kind: ProvisioningRequest
    metadata:
      name: PROVISIONING_REQUEST_NAME
      namespace: NAMESPACE_NAME
    spec:
      provisioningClassName: queued-provisioning.gke.io
      parameters:
        maxRunDurationSeconds: "MAX_RUN_DURATION_SECONDS"
      podSets:
      - count: COUNT
        podTemplateRef:
          name: POD_TEMPLATE_NAME
    

    Replace the following:

    GKE might apply validations and mutations to Pods during their creation. The cloud.google.com/apply-warden-policies label allows GKE to apply the same validations and mutations to PodTemplate objects. This label is necessary for GKE to calculate node resource requirements for your Pods. The flex-start with queued provisioning integration supports only one PodSet spec. If you want to mix different Pod templates, use the template that requests the most resources. Mixing different machine types, such as VMs with different GPU types, is not supported.

    Node auto-provisioning
    apiVersion: v1
    kind: PodTemplate
    metadata:
      name: POD_TEMPLATE_NAME
      namespace: NAMESPACE_NAME
      labels:
        cloud.google.com/apply-warden-policies: "true"
    template:
      spec:
        nodeSelector:
          cloud.google.com/gke-accelerator: GPU_TYPE
          cloud.google.com/gke-flex-start: "true"
        tolerations:
          - key: "nvidia.com/gpu"
            operator: "Exists"
            effect: "NoSchedule"
        containers:
          - name: pi
            image: perl
            command: ["/bin/sh"]
            resources:
              limits:
                cpu: "700m"
                nvidia.com/gpu: 1
              requests:
                cpu: "700m"
                nvidia.com/gpu: 1
        restartPolicy: Never
    ---
    apiVersion: autoscaling.x-k8s.io/API_VERSION
    kind: ProvisioningRequest
    metadata:
      name: PROVISIONING_REQUEST_NAME
      namespace: NAMESPACE_NAME
    spec:
      provisioningClassName: queued-provisioning.gke.io
      parameters:
        maxRunDurationSeconds: "MAX_RUN_DURATION_SECONDS"
      podSets:
      - count: COUNT
        podTemplateRef:
          name: POD_TEMPLATE_NAME
    

    Replace the following:

    GKE might apply validations and mutations to Pods during their creation. The cloud.google.com/apply-warden-policies label allows GKE to apply the same validations and mutations to PodTemplate objects. This label is necessary for GKE to calculate node resource requirements for your Pods.

  2. Apply the manifest:

    kubectl apply -f provisioning-request.yaml
    
Configure the Pods

This section uses Kubernetes Jobs to configure the Pods. However, you can also use a Kubernetes JobSet or any other framework like Kubeflow, Ray, or custom controllers. In the Job spec, link the Pods to the ProvisioningRequest using the following annotations:

apiVersion: batch/v1
kind: Job
spec:
  template:
    metadata:
      annotations:
        autoscaling.x-k8s.io/consume-provisioning-request: PROVISIONING_REQUEST_NAME
        autoscaling.x-k8s.io/provisioning-class-name: "queued-provisioning.gke.io"
    spec:
      ...

The Pod annotation key consume-provisioning-request defines which ProvisioningRequest to consume. GKE uses the consume-provisioning-request and provisioning-class-name annotations to do the following:

Observe the status of a Provisioning Request

The status of a Provisioning Request determines whether a Pod can be scheduled. You can use Kubernetes watches to observe changes efficiently, or any other tooling that you already use for tracking the status of Kubernetes objects. The following list describes the possible statuses of a Provisioning Request and the possible outcome of each:

Pending: The request has not been seen and processed yet. After processing, the request transitions to the Accepted or Failed state.

Accepted=true: The request is accepted and is waiting for resources to be available. The request should transition to the Provisioned state if resources were found and nodes were provisioned, or to the Failed state if that was not possible.

Provisioned=true: The nodes are ready. You have 10 minutes to start the Pods to consume the provisioned resources. After this time, the cluster autoscaler considers the nodes as not needed and removes them.

Failed=true: The nodes can't be provisioned due to errors. Failed=true is a terminal state. Troubleshoot the condition based on the information in the Reason and Message fields of the condition, then create and retry a new Provisioning Request.

Provisioned=false: The nodes haven't been provisioned yet.

If Reason=NotProvisioned, this is a temporary state before all resources are available.

If Reason=QuotaExceeded, troubleshoot the condition based on this reason and the information in the Message field of the condition. You might need to request more quota. For more details, see the Check if the Provisioning Request is limited by quota section. This Reason is only available with GKE version 1.29.2-gke.1181000 or later.

If Reason=ResourcePoolExhausted, and the Message contains Expected time is indefinite, either select a different zone or region, or adjust the requested resources.
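To inspect these conditions from the command line, you can watch the request and print its status conditions, including the Reason and Message fields. PROVISIONING_REQUEST_NAME and NAMESPACE are placeholders:

# Watch the request until its status changes
kubectl get provreq PROVISIONING_REQUEST_NAME -n NAMESPACE --watch

# Print all status conditions with their Reason and Message fields
kubectl get provreq PROVISIONING_REQUEST_NAME -n NAMESPACE \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status} reason={.reason}: {.message}{"\n"}{end}'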

Start the Pods

When the Provisioning Request reaches the Provisioned=true status, you can run your Job to start the Pods. This approach avoids a proliferation of unschedulable Pods for pending or failed requests, which can impact kube-scheduler and cluster autoscaler performance.
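A minimal sketch of this pattern uses kubectl wait to block until the Provisioned condition is true before creating the Job; the names, manifest file, and timeout are placeholders:

# Wait for the nodes to be provisioned, then start the Pods
kubectl wait --for=condition=Provisioned=True \
    provreq/PROVISIONING_REQUEST_NAME -n NAMESPACE --timeout=24h
kubectl create -f JOB_MANIFEST.yaml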

Alternatively, if you don't care about having unschedulable Pods, you can create Pods in parallel with the Provisioning Request.

Cancel a Provisioning Request

To cancel the request before it's provisioned, you can delete the ProvisioningRequest:

kubectl delete provreq PROVISIONING_REQUEST_NAME -n NAMESPACE

In most cases, deleting the ProvisioningRequest stops nodes from being created. However, depending on timing, for example if the nodes were already being provisioned, the nodes might still end up being created. In these cases, the cluster autoscaler removes the nodes after 10 minutes if no Pods are created.

Troubleshoot quota issues

All VMs provisioned by Provisioning Requests use preemptible quota.

The number of ProvisioningRequests that are in Accepted state is limited by a dedicated quota. You configure the quota for each project, one quota configuration per region.

Check quota in the Google Cloud console

To check the name of the quota limit and current usage in the Google Cloud console, follow these steps:

  1. Go to the Quotas page in the Google Cloud console:

    Go to Quotas

  2. In the Filter box, select the Metric property, enter active_resize_requests, and press Enter.

The default value is 100. To increase the quota, follow the steps listed in Request a quota adjustment.

Check if the Provisioning Request is limited by quota

If your Provisioning Request is taking longer than expected to be fulfilled, check that the request isn't limited by quota. You might need to request more quota.

For clusters running version 1.29.2-gke.1181000 or later, check whether specific quota limitations are preventing your request from being fulfilled:

kubectl describe provreq PROVISIONING_REQUEST_NAME \
    --namespace NAMESPACE

The output is similar to the following:

…
    Last Transition Time:  2024-01-03T13:56:08Z
    Message:               Quota 'NVIDIA_P4_GPUS' exceeded. Limit: 1.0 in region europe-west4.
    Observed Generation:   1
    Reason:                QuotaExceeded
    Status:                False
    Type:                  Provisioned
…

In this example, GKE can't deploy nodes because there isn't enough quota in the europe-west4 region.

Migrate node pools from queued provisioning to flex-start

To migrate existing node pools that were created by using the --enable-queued-provisioning flag to flex-start, do the following steps:

  1. Make sure that the node pool is empty:

    kubectl get nodes -l cloud.google.com/gke-nodepool=NODEPOOL_NAME
    
  2. Update the node pool to flex-start:

    gcloud container node-pools update NODEPOOL_NAME \
      --cluster=CLUSTER_NAME --flex-start
    

This operation does the following:

All nodes on clusters running version 1.32.2-gke.1652000 or later, which is the minimum version for flex-start nodes, use short-lived upgrades.

What's next
