Optimize storage performance and cost with Hyperdisk Storage Pools | Google Kubernetes Engine (GKE)

This page describes how your Google Kubernetes Engine (GKE) clusters can pool and share storage capacity, throughput, and IOPS across disks by using GKE Hyperdisk Storage Pools.

Note: Hyperdisk support is based on the machine type of your nodes. For the most up-to-date information, see Machine type support in the Compute Engine documentation.

Overview

Storage pools logically group physical storage devices, allowing you to segment your resources. You can provision Google Cloud Hyperdisks within these storage pools, essentially creating Hyperdisk Storage Pools. Hyperdisk Storage Pools offer pre-provisioned capacity, throughput, and IOPS that your GKE cluster disks can share.

You can use Hyperdisk Storage Pools to manage your storage resources more efficiently and cost-effectively. This lets you take advantage of efficiency technologies such as deduplication and thin provisioning.

In this guide, you use the us-east4-c zone to create the Hyperdisk Balanced Storage Pool and other resources.

Planning considerations

Consider the following requirements and limitations before provisioning and consuming your Hyperdisk Storage Pool.

Creating and managing storage pools

The following requirements and limitations apply:

Provisioning boot disks in storage pools

The following requirements and limitations apply:

Provisioning attached disks in storage pools

The following requirements and limitations apply:

Quota

When creating a Hyperdisk Storage Pool, you can configure it with either standard or advanced provisioning for capacity and performance. If you want to increase the quota for capacity, throughput, or IOPS, request higher quota for the relevant quota filter.

For more information, see View the quotas for your project and Request a quota adjustment.

Use the following quota filters for Hyperdisk Balanced Storage Pools:

Use the following quota filters for Hyperdisk Throughput Storage Pools:

For example, if you want to increase the total capacity for Hyperdisk Balanced Storage Pools with Advanced capacity provisioning, per project and per region, request a higher quota for the following filter:

hdb-storage-pool-total-advanced-capacity-per-project-region
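
If you want to check current usage and limits from the command line before filing a request, one option is to list the Compute Engine quotas for the region. This is a sketch only, and it's an assumption that the storage pool quota metrics you care about are surfaced in this list for your project; if they aren't, use the Quotas page as described in the linked documentation:

gcloud compute regions describe us-east4 --project=PROJECT_ID --format="yaml(quotas)"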

Pricing

See Hyperdisk Storage Pools pricing for pricing details.

Before you begin

Before you start, make sure that you have performed the following tasks:

Create a Hyperdisk Storage Pool

Create a Hyperdisk Storage Pool before you provision boot disks or attached disks in that storage pool. For more information, see Create Hyperdisk Storage Pools.

Make sure you create storage pools in one of the supported zones.

For example, use the following command to create a Hyperdisk Balanced Storage Pool with Advanced capacity and Advanced performance, and provision 10 TB of capacity, 10,000 IOPS, and 1,024 MBps of throughput in the us-east4-c zone:

export PROJECT_ID=PROJECT_ID
export ZONE=us-east4-c
gcloud compute storage-pools create pool-$ZONE \
    --provisioned-capacity=10tb --storage-pool-type=hyperdisk-balanced \
    --zone=$ZONE --project=$PROJECT_ID --capacity-provisioning-type=advanced \
    --performance-provisioning-type=advanced --provisioned-iops=10000 \
    --provisioned-throughput=1024

Replace PROJECT_ID with your Google Cloud project ID.
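
To verify that the storage pool was created with the capacity and performance you provisioned, you can describe it (this assumes the pool-us-east4-c name produced by the previous command):

gcloud compute storage-pools describe pool-$ZONE --zone=$ZONE --project=$PROJECT_ID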

Inspect storage pool zones

To inspect a cluster's default node zones, run the following command:

gcloud container clusters describe CLUSTER_NAME  | yq '.locations'

Replace CLUSTER_NAME with the name of the cluster that you create when provisioning a boot disk or an attached disk.
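
If you don't have yq installed, the built-in --format flag returns the same information; for example:

gcloud container clusters describe CLUSTER_NAME --location=CONTROL_PLANE_LOCATION --format="value(locations)"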

Provision a GKE boot disk in a Hyperdisk Storage Pool

You can provision a GKE boot disk in a Hyperdisk Storage Pool when you create a cluster, create a node pool, or update an existing node pool, as described in the following sections.

When creating a cluster

To create a GKE cluster with boot disks provisioned in a storage pool, use the following command:

gcloud container clusters create CLUSTER_NAME \
    --disk-type=DISK_TYPE --storage-pools=STORAGE_POOL,[...] \
    --node-locations=ZONE,[...] --machine-type=MACHINE_TYPE \
    --location=CONTROL_PLANE_LOCATION

Replace the following:

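For example, a filled-in command that uses the pool-us-east4-c storage pool created earlier might look like the following. The cluster name is an illustrative assumption, and c3-standard-4 is used because the C3 machine series supports Hyperdisk Balanced:

gcloud container clusters create cluster-us-east4 \
    --disk-type=hyperdisk-balanced \
    --storage-pools=projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c \
    --node-locations=us-east4-c --machine-type=c3-standard-4 \
    --location=us-east4
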
When creating a node pool

To create a GKE node pool with boot disks provisioned in a storage pool, use the following command:

gcloud container node-pools create NODE_POOL_NAME \
    --disk-type=DISK_TYPE --storage-pools=STORAGE_POOL,[...] \
    --node-locations=ZONE,[...] --machine-type=MACHINE_TYPE \
    --location=CONTROL_PLANE_LOCATION --cluster=CLUSTER_NAME

Replace the following:
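
As with cluster creation, a hypothetical filled-in example that reuses the same storage pool and assumes an existing cluster named cluster-us-east4 could look like this:

gcloud container node-pools create np-hyperdisk \
    --disk-type=hyperdisk-balanced \
    --storage-pools=projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c \
    --node-locations=us-east4-c --machine-type=c3-standard-4 \
    --location=us-east4 --cluster=cluster-us-east4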

When updating a node pool

You can use an update command to add or replace storage pools in a node pool. This command can't be used to remove storage pools from a node pool.

To update a GKE node pool so that its boot disks are provisioned in a storage pool, use the following command:

gcloud container node-pools update NODE_POOL_NAME \
  --storage-pools=STORAGE_POOL,[...] \
  --location=CONTROL_PLANE_LOCATION --cluster=CLUSTER_NAME
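
Continuing with the hypothetical names from the previous examples, an update that points the node pool's boot disks at pool-us-east4-c would look like the following:

gcloud container node-pools update np-hyperdisk \
  --storage-pools=projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c \
  --location=us-east4 --cluster=cluster-us-east4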

This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the table of manual changes that recreate the nodes using a node upgrade strategy without respecting maintenance policies. To learn more about node updates, see Planning for node update disruptions.

Caution: GKE immediately begins recreating the nodes for this change using the node upgrade strategy, regardless of active maintenance policies. GKE depends on resource availability for the change. Disabling node auto-upgrades doesn't prevent this change. Ensure that your workloads running on the nodes are prepared for disruption before you initiate this change.

Provision a GKE attached disk in a Hyperdisk Storage Pool

In this section, you create a GKE cluster, create a StorageClass, create a PersistentVolumeClaim (PVC), create a Deployment that uses the PVC, and confirm that the attached disk is provisioned.

Create a GKE cluster

Before you begin, review the considerations for provisioning an attached disk.

Autopilot

To create an Autopilot cluster using the gcloud CLI, see Create an Autopilot cluster.

Example:

gcloud container clusters create-auto CLUSTER_NAME --location=CONTROL_PLANE_LOCATION

Replace the following:

To select a supported machine type, you specify the cloud.google.com/compute-class: Performance nodeSelector while creating a Deployment. For a list of Compute Engine machine series available with the Performance compute class, see Supported machine series.
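
For reference, the Pod template of such a Deployment includes a node selector like the following excerpt; the complete manifest appears later in this guide:

      nodeSelector:
        cloud.google.com/compute-class: Performance
        cloud.google.com/machine-family: c3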

Standard

To create a Standard Zonal cluster using the gcloud CLI, see Creating a zonal cluster.

To create a Standard Regional cluster using the gcloud CLI, see Creating a regional cluster.

Example:

gcloud container clusters create CLUSTER_NAME --location=CONTROL_PLANE_LOCATION \
    --project=PROJECT_ID --machine-type=MACHINE_TYPE --disk-type="DISK_TYPE"

Replace the following:
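
For example, a hypothetical filled-in command for a zonal cluster in the storage pool's zone might look like the following; the cluster name is illustrative, and --storage-pools isn't needed here because the storage pool for attached disks is selected through the StorageClass that you create in the next section:

gcloud container clusters create cluster-us-east4-c --location=us-east4-c \
    --project=my-project --machine-type=c3-standard-4 --disk-type="hyperdisk-balanced"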

Create a StorageClass

In Kubernetes, to indicate that you want your PV to be created inside a storage pool, use a StorageClass. To learn more, see StorageClasses.

To create a new StorageClass with the throughput or IOPS level you want:

Each Hyperdisk type has default performance values that are determined by the initial disk size provisioned. When creating a StorageClass, you can optionally specify the following parameters depending on your Hyperdisk type. If you omit these parameters, GKE uses the capacity-based defaults for the disk type.

provisioned-throughput-on-create (Hyperdisk Balanced, Hyperdisk Throughput): Express the throughput value in MiB/s using the "Mi" qualifier. For example, if your required throughput is 250 MiB/s, specify "250Mi" when creating the StorageClass.

provisioned-iops-on-create (Hyperdisk Balanced, Hyperdisk IOPS): Express the IOPS value without any qualifier. For example, if you require 7,000 IOPS, specify "7000" when creating the StorageClass.

For guidance on allowable values for throughput or IOPS, see Plan the performance level for your Hyperdisk volume.

Use the following manifest to create and apply a StorageClass named storage-pools-sc for dynamically provisioning a PV in the storage pool projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-pools-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "140Mi"
  provisioned-iops-on-create: "3000"
  storage-pools: projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c
EOF

Because this StorageClass uses volumeBindingMode: WaitForFirstConsumer, binding and provisioning of a PVC are delayed until a Pod that uses the PVC is created. This approach ensures that the PV is not provisioned prematurely and that the zone of the PV matches the zone of the Pod consuming it. If their zones don't match, the Pod remains in a Pending state.

Create a PersistentVolumeClaim (PVC)

Create a PVC that references the storage-pools-sc StorageClass that you created.

Use the following manifest to create a PVC named my-pvc, with 2048 GiB as the target storage capacity for the Hyperdisk Balanced volume:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: storage-pools-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2048Gi
EOF

Create a Deployment that uses the PVC

Best practice: When using Pods with PersistentVolumes, use a workload controller such as a Deployment or a StatefulSet.

To ensure that Pods can be scheduled on a node pool with a machine series that supports Hyperdisk Balanced, configure the Deployment with the cloud.google.com/machine-family node selector. For more information, see machine type support for Hyperdisks. The following sample Deployment uses the c3 machine series.

Create and apply the following manifest to configure a Pod for deploying a Postgres server using the PVC created in the previous section:

Autopilot

On Autopilot clusters, specify the cloud.google.com/compute-class: Performance nodeSelector to provision a Hyperdisk Balanced volume. For more information, see Request a dedicated node for a Pod.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        cloud.google.com/machine-family: c3
        cloud.google.com/compute-class: Performance
      containers:
      - name: postgres
        image: postgres:14-alpine
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/data/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc
EOF

Standard

On Standard clusters without node auto-provisioning enabled, make sure that a node pool with the specified machine series is up and running before you create the Deployment. Otherwise, the Pod fails to schedule.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        cloud.google.com/machine-family: c3
      containers:
      - name: postgres
        image: postgres:14-alpine
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/data/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc
EOF

Confirm that the Deployment was successfully created:

  kubectl get deployment

It might take a few minutes for Hyperdisk instances to complete provisioning and display a READY status.
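
Optionally, instead of polling, you can wait for the Deployment to report that it's available:

kubectl wait --for=condition=Available deployment/postgres --timeout=10m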

Confirm that the attached disk is provisioned
  1. Check if your PVC named my-pvc has been successfully bound to a PV:

    kubectl get pvc my-pvc
    

    The output is similar to the following:

    
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
    my-pvc        Bound    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6   2Ti        RWO            storage-pools-sc   2m24s
    
  2. Check if the volume has been provisioned as specified in your StorageClass and PVC:

    gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
    

    The output is similar to the following:

    NAME                                      STATUS  PROVISIONED_IOPS  PROVISIONED_THROUGHPUT  SIZE_GB
    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6  READY   3000              140                     2048
    
Snapshot and restore attached disks in storage pools

You can't move a disk directly into or out of a storage pool. To move a disk in or out of a storage pool, recreate the disk from a snapshot. For more information, see Change the disk type.

In this section, you create a test file, create a volume snapshot and delete the test file, and then restore the volume snapshot.

Create a test file

To create and verify a test file:

  1. Get the Pod name of the Postgres Deployment:

    kubectl get pods -l app=postgres
    

    The output is similar to the following:

    NAME                         READY   STATUS    RESTARTS   AGE
    postgres-78fc84c9ff-77vx6   1/1     Running   0          44s
    
  2. Create a test file hello.txt in the Pod:

    kubectl exec postgres-78fc84c9ff-77vx6 \
      -- sh -c 'echo "Hello World!" > /usr/share/data/hello.txt'
    
  3. Verify that the test file is created:

    kubectl exec postgres-78fc84c9ff-77vx6 \
      -- sh -c 'cat /usr/share/data/hello.txt'
    Hello World!
    
Create a volume snapshot and delete the test file

To create and verify a snapshot:

  1. Create a VolumeSnapshotClass that specifies how the snapshot of your volumes should be taken and managed:

    kubectl apply -f - <<EOF
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: my-snapshotclass
    driver: pd.csi.storage.gke.io
    deletionPolicy: Delete
    EOF
    
  2. Create a VolumeSnapshot and take the snapshot from the volume that's bound to the my-pvc PersistentVolumeClaim:

    kubectl apply -f - <<EOF
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: my-snapshot
    spec:
      volumeSnapshotClassName: my-snapshotclass
      source:
        persistentVolumeClaimName: my-pvc
    EOF
    
  3. Verify that the volume snapshot content is created:

    kubectl get volumesnapshotcontents
    

    The output is similar to the following:

    NAME                                               READYTOUSE   RESTORESIZE     DELETIONPOLICY   DRIVER                  VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT   VOLUMESNAPSHOTNAMESPACE   AGE
    snapcontent-e778fde2-5f1c-4a42-a43d-7f9d41d093da   false        2199023255552   Delete           pd.csi.storage.gke.io   my-snapshotclass      my-snapshot      default                   33s
    
  4. Confirm that the snapshot is ready to use:

    kubectl get volumesnapshot \
      -o custom-columns='NAME:.metadata.name,READY:.status.readyToUse'
    

    The output is similar to the following:

    NAME          READY
    my-snapshot   true
    
  5. Delete the original test file hello.txt that was created in the Pod postgres-78fc84c9ff-77vx6:

    kubectl exec postgres-78fc84c9ff-77vx6 \
        -- sh -c 'rm /usr/share/data/hello.txt'
    
Restore the volume snapshot

To restore the volume snapshot and data, follow these steps:

  1. Create a new PVC that restores data from the snapshot and uses the same StorageClass (storage-pools-sc) as the original volume. This ensures that the new volume is provisioned within the same storage pool as the original volume. Apply the following manifest:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-restore
    spec:
      dataSource:
        name: my-snapshot
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      storageClassName: storage-pools-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2048Gi
    EOF
    
  2. Update the existing Deployment named postgres so that it uses the newly restored PVC you just created. Apply the following manifest:

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postgres
    spec:
      selector:
        matchLabels:
          app: postgres
      template:
        metadata:
          labels:
            app: postgres
        spec:
          nodeSelector:
            cloud.google.com/machine-family: c3
          containers:
          - name: postgres
            image: google/cloud-sdk:slim
            args: [ "sleep", "3600" ]
            volumeMounts:
            - name: sdk-volume
              mountPath: /usr/share/data/
          volumes:
          - name: sdk-volume
            persistentVolumeClaim:
              claimName: pvc-restore
    EOF
    
  3. Get the name of the newly created Pod that is part of the postgres Deployment:

    kubectl get pods -l app=postgres
    

    The output is similar to the following:

    NAME                         READY   STATUS        RESTARTS   AGE
    postgres-59f89cfd8c-42qtj   1/1     Running       0          40s
    
  4. Verify that the hello.txt file, which was previously deleted, now exists in the new Pod (postgres-59f89cfd8c-42qtj) after restoring the volume from the snapshot:

    kubectl exec postgres-59f89cfd8c-42qtj \
     -- sh -c 'cat /usr/share/data/hello.txt'
    Hello World!
    

    This validates that the snapshot and restore process was successfully completed and that the data from the snapshot has been restored to the new PV that's accessible to the Pod.

  5. Confirm that the volume created from the snapshot is located within your storage pool:

    kubectl get pvc pvc-restore
    

    The output is similar to the following:

    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
    pvc-restore   Bound    pvc-b287c387-bc51-4100-a00e-b5241d411c82   2Ti        RWO            storage-pools-sc   2m24s
    
  6. Check if the new volume is provisioned as specified in your StorageClass and PVC:

    gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
    

     The output is similar to the following, where you can see the new volume pvc-b287c387-bc51-4100-a00e-b5241d411c82 provisioned in the same storage pool:

    
    NAME                                      STATUS  PROVISIONED_IOPS  PROVISIONED_THROUGHPUT  SIZE_GB
    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6  READY   3000              140                     2048
    pvc-b287c387-bc51-4100-a00e-b5241d411c82  READY   3000              140                     2048
    

    This ensures that the restored volume benefits from the shared resources and capabilities of the pool.

Migrate existing volumes into a storage pool

Use snapshot and restore to migrate volumes that exist outside of a storage pool, into a storage pool.

Ensure that the following conditions are met:

After you restore from a snapshot into a new volume, you can delete the source PVC and PV.
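
As a minimal sketch of that flow, assuming the source volume is bound to a hypothetical PVC named legacy-pvc and reusing the my-snapshotclass and storage-pools-sc objects from this guide, you can snapshot the source volume and then restore it through the pool-backed StorageClass. Adjust the requested storage to at least the size of the source volume:

kubectl apply -f - <<EOF
# Snapshot of the hypothetical source PVC that lives outside the storage pool
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: legacy-snapshot
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: legacy-pvc
---
# New PVC restored from the snapshot into the storage pool through storage-pools-sc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-pvc-in-pool
spec:
  dataSource:
    name: legacy-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: storage-pools-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2048Gi
EOF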

Clean up

To avoid incurring charges to your Google Cloud account, delete the storage resources you created in this guide. First delete all the disks within the storage pool and then delete the storage pool.

Delete the boot disk

When you delete a node (by scaling down the node pool) or an entire node pool, the associated boot disks are automatically deleted. You can also delete the cluster to automatically delete the boot disks of all node pools within it.

For more information, see:

Delete the attached disk

To delete the attached disk provisioned in a Hyperdisk Storage Pool:

  1. Delete the Deployment that uses the PVC:

    kubectl delete deployments postgres
    
  2. Delete the PVC that uses the Hyperdisk Storage Pool StorageClass:

    kubectl delete pvc my-pvc
    

     Confirm that the disk pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6, which backed the PVC, has been deleted from the storage pool:

    gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
    
Delete the Hyperdisk Storage Pool

Delete the Hyperdisk Storage Pool with the following command:

gcloud compute storage-pools delete pool-us-east4-c --zone=us-east4-c --project=my-project

What's next
