The Compute Engine Persistent Disk CSI driver is the primary way for you to access Hyperdisk storage with Google Kubernetes Engine (GKE) clusters.
Note: Hyperdisk support depends on the machine type of your nodes. For the most up-to-date information, see Machine type support in the Compute Engine documentation.
Before you begin
Before you start, make sure you have performed the standard setup tasks for your cluster and that your gcloud CLI components are up to date:
gcloud components update
Note: For existing gcloud CLI installations, make sure to set the compute/region and compute/zone properties. By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
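For example, you can set these defaults with the following commands; the region and zone values are placeholders, so use the location of your cluster:
gcloud config set compute/region COMPUTE_REGION
gcloud config set compute/zone COMPUTE_ZONE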
To use Hyperdisk volumes in GKE, your clusters must also meet Hyperdisk's GKE version and node machine type requirements. For the machine series that support each Hyperdisk type, see Machine type support in the Compute Engine documentation.
This section provides an overview of creating a Hyperdisk volume backed by the Compute Engine CSI driver in GKE.
Note: When you create the PersistentVolumeClaim associated with the StorageClass, GKE automatically creates the underlying Google Cloud Hyperdisk backing storage and attaches the storage to a node. You don't need to separately create and attach Google Cloud Hyperdisk storage to your nodes.
Create a StorageClass
The following Persistent Disk storage type fields are provided by the Compute Engine Persistent Disk CSI driver to support Hyperdisk:
hyperdisk-balanced
hyperdisk-throughput
hyperdisk-extreme
hyperdisk-ml
hyperdisk-balanced-high-availability
To create a new StorageClass with the throughput or IOPS level you want, use pd.csi.storage.gke.io
in the provisioner field, and specify one of the Hyperdisk storage types.
Each Hyperdisk type has default performance values determined by the initial disk size provisioned. When creating the StorageClass, you can optionally specify the following parameters, depending on your Hyperdisk type. If you omit these parameters, GKE uses the capacity-based defaults for the disk type. For guidance on allowable values for throughput or IOPS, see Plan the performance level for your Hyperdisk volume.
Parameter: provisioned-throughput-on-create
Supported Hyperdisk types: Hyperdisk Balanced, Hyperdisk Balanced High Availability, Hyperdisk Throughput
Usage: Express the throughput value in MiB/s using the "Mi" qualifier; for example, if your required throughput is 250 MiB/s, specify "250Mi" when creating the StorageClass.

Parameter: provisioned-iops-on-create
Supported Hyperdisk types: Hyperdisk Balanced, Hyperdisk Balanced High Availability, Hyperdisk Extreme
Usage: Express the IOPS value without any qualifier; for example, if you require 7,000 IOPS, specify "7000" when creating the StorageClass.
The following examples show how you can create a StorageClass for each Hyperdisk type:
Hyperdisk Balanced
Save the following manifest in a file named hdb-example-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "250Mi"
  provisioned-iops-on-create: "7000"
Create the StorageClass:
kubectl create -f hdb-example-class.yaml
Hyperdisk Throughput
Save the following manifest in a file named hdt-example-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: throughput-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-throughput
  provisioned-throughput-on-create: "50Mi"
Create the StorageClass:
kubectl create -f hdt-example-class.yaml
Hyperdisk Extreme
Save the following manifest in a file named hdx-example-class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: extreme-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-extreme
  provisioned-iops-on-create: "50000"
Create the StorageClass:
kubectl create -f hdx-example-class.yaml
Hyperdisk Balanced High Availability
Save the following manifest in a file named hdb-ha-example-class.yaml:
For zonal clusters, set the availability zones where you want to create the PersistentVolumes in the allowedTopologies field.
For regional clusters, you can omit the allowedTopologies field; the PersistentVolumes are then created in two availability zones selected at the time of Pod scheduling.
For more information on supported zones, see Hyperdisk regional availability.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-ha-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced-high-availability
  provisioned-throughput-on-create: "250Mi"
  provisioned-iops-on-create: "7000"
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - ZONE1
    - ZONE2
Create the StorageClass:
kubectl create -f hdb-ha-example-class.yaml
To find the name of the StorageClasses available in your cluster, run the following command:
kubectl get sc
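To inspect the parameters of a specific StorageClass, such as the balanced-storage class from the Hyperdisk Balanced example, you can also run the following command:
kubectl get sc balanced-storage -o yaml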
Create a PersistentVolumeClaim
You can create a PersistentVolumeClaim that references the Compute Engine Persistent Disk CSI driver's StorageClass.
Hyperdisk Balanced
In this example, you specify the targeted storage capacity of the Hyperdisk Balanced volume as 20 GiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: balanced-storage
  resources:
    requests:
      storage: 20Gi
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Hyperdisk Throughput
In this example, you specify the targeted storage capacity of the Hyperdisk Throughput volume as 2 TiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: throughput-storage
  resources:
    requests:
      storage: 2Ti
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Hyperdisk Extreme
In this example, you specify the minimum storage capacity of the Hyperdisk Extreme volume as 64 GiB.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: extreme-storage
  resources:
    requests:
      storage: 64Gi
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
Hyperdisk Balanced High Availability
In this example, you specify the minimum storage capacity of the Hyperdisk Balanced High Availability volume as 20 GiB and the access mode as ReadWriteOnce. Hyperdisk Balanced High Availability also supports the ReadWriteMany and ReadWriteOncePod access modes. For differences and use cases of each access mode, see Persistent Volume Access Modes.
Save the following PersistentVolumeClaim manifest in a file named pvc-example.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: balanced-ha-storage
  resources:
    requests:
      storage: 20Gi
Apply the PersistentVolumeClaim that references the StorageClass you created from the earlier example:
kubectl apply -f pvc-example.yaml
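As noted earlier, Hyperdisk Balanced High Availability also supports the ReadWriteMany and ReadWriteOncePod access modes. The following is an illustrative sketch, not part of the original example, showing the same claim requested with ReadWriteOncePod so that only a single Pod in the cluster can use the volume; the claim name is hypothetical:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc-single-pod       # hypothetical name for this variant
spec:
  accessModes:
  - ReadWriteOncePod            # restricts the volume to one Pod across the cluster
  storageClassName: balanced-ha-storage
  resources:
    requests:
      storage: 20Gi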
When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet).
The following example manifest configures a Deployment that runs an NGINX web server using the PersistentVolumeClaim created in the previous section. Save the following example manifest as hyperdisk-example-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc
          readOnly: false
To create a Deployment based on the hyperdisk-example-deployment.yaml
manifest file, run the following command:
kubectl apply -f hyperdisk-example-deployment.yaml
Confirm the Deployment was successfully created:
kubectl get deployment
It might take a few minutes for the Hyperdisk volume to finish provisioning. When provisioning completes, the Deployment reports a READY status.
You can check the progress by monitoring your PersistentVolumeClaim status by running the following command:
kubectl get pvc
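If the PersistentVolumeClaim remains in the Pending state longer than expected, you can inspect its events for provisioning errors; podpvc is the claim name used in the earlier examples:
kubectl describe pvc podpvc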
Create a Hyperdisk volume from a snapshot
To create a new Hyperdisk volume from an existing Persistent Disk snapshot, use the Google Cloud console, the Google Cloud CLI, or the Compute Engine API. To learn how to create a Persistent Disk snapshot, see Creating and using volume snapshots.
Console
Go to the Disks page in the Google Cloud console.
Click Create Disk.
Under Disk Type, choose the Hyperdisk type that you want, for example Hyperdisk Balanced, Hyperdisk Throughput, Hyperdisk Extreme, or Hyperdisk Balanced High Availability.
Under Disk source type, click Snapshot.
Select the name of the snapshot to restore.
Select the size of the new disk, in GiB. This number must be equal to or larger than the original source disk for the snapshot.
Set the Provisioned throughput or Provisioned IOPS you want for the disk, if different from the default values.
Click Create to create the Hyperdisk volume.
gcloud
Run the gcloud compute disks create command to create the Hyperdisk volume from a snapshot.

Hyperdisk Balanced
gcloud compute disks create DISK_NAME \
    --size=SIZE \
    --source-snapshot=SNAPSHOT_NAME \
    --provisioned-throughput=THROUGHPUT_LIMIT \
    --provisioned-iops=IOPS_LIMIT \
    --type=hyperdisk-balanced
Replace the following:
DISK_NAME: the name of the new disk.
SIZE: the size, in gibibytes (GiB) or tebibytes (TiB), of the new disk. For more information about capacity limitations, see Size and performance limits.
SNAPSHOT_NAME: the name of the snapshot being restored.
THROUGHPUT_LIMIT: Optional. For Hyperdisk Balanced disks, an integer that represents the throughput, measured in MiB/s, that the disk can reach. For more information about capacity limitations, see Size and performance limits.
IOPS_LIMIT: Optional. For Hyperdisk Balanced disks, the maximum number of IOPS that the disk can reach. For more information about capacity limitations, see Size and performance limits.

Hyperdisk Throughput
gcloud compute disks create DISK_NAME \
    --size=SIZE \
    --source-snapshot=SNAPSHOT_NAME \
    --provisioned-throughput=THROUGHPUT_LIMIT \
    --type=hyperdisk-throughput
Replace the following:
DISK_NAME: the name of the new disk.
SIZE: the size, in gibibytes (GiB or GB) or tebibytes (TiB or TB), of the new disk. For more information about capacity limitations, see Size and performance limits.
SNAPSHOT_NAME: the name of the snapshot being restored.
THROUGHPUT_LIMIT: Optional. For Hyperdisk Throughput disks, an integer that represents the throughput, measured in MiB/s, that the disk can reach. For more information about capacity limitations, see Size and performance limits.

Hyperdisk Extreme
gcloud compute disks create DISK_NAME \
    --size=SIZE \
    --source-snapshot=SNAPSHOT_NAME \
    --provisioned-iops=IOPS_LIMIT \
    --type=hyperdisk-extreme
Replace the following:
DISK_NAME: the name of the new disk.
SIZE: the size, in gibibytes (GiB or GB) or tebibytes (TiB or TB), of the new disk. For more information about capacity limitations, see Size and performance limits.
SNAPSHOT_NAME: the name of the snapshot being restored.
IOPS_LIMIT: Optional. For Hyperdisk Extreme disks, the maximum number of I/O operations per second that the disk can reach. For more information about capacity limitations, see Size and performance limits.

Hyperdisk Balanced High Availability
gcloud compute disks create DISK_NAME \
    --size=SIZE \
    --region=REGION \
    --replica-zones=ZONE1,ZONE2 \
    --source-snapshot=SNAPSHOT_NAME \
    --provisioned-throughput=THROUGHPUT_LIMIT \
    --provisioned-iops=IOPS_LIMIT \
    --type=hyperdisk-balanced-high-availability
Replace the following:
DISK_NAME: the name of the new disk.
SIZE: the size, in gibibytes (GiB) or tebibytes (TiB), of the new disk. Refer to the Compute Engine documentation for the latest capacity limitations.
REGION: the region of the new disk. Refer to the Compute Engine documentation for the latest regional availability.
ZONE1, ZONE2: the zones within the region where the replicas are located.
SNAPSHOT_NAME: the name of the snapshot being restored.
THROUGHPUT_LIMIT: Optional. For Hyperdisk Balanced High Availability disks, an integer that represents the throughput, measured in MiB/s, that the disk can reach. For more information about capacity limitations, see Size and performance limits.
IOPS_LIMIT: Optional. For Hyperdisk Balanced High Availability disks, the maximum number of IOPS that the disk can reach. For more information about capacity limitations, see Size and performance limits.

Create a snapshot from a Hyperdisk volume
To create a snapshot from a Hyperdisk volume, follow the same steps as you would to create a snapshot of a Persistent Disk volume.
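For example, the following is a minimal sketch that uses the Kubernetes VolumeSnapshot API; the class and snapshot names are hypothetical, the claim name podpvc comes from the earlier examples, and the driver is the Compute Engine Persistent Disk CSI driver used throughout this page:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: pd-snapshot-class            # hypothetical name
driver: pd.csi.storage.gke.io        # Compute Engine Persistent Disk CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: podpvc-snapshot              # hypothetical name
spec:
  volumeSnapshotClassName: pd-snapshot-class
  source:
    persistentVolumeClaimName: podpvc    # claim created in the earlier examples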
Update the provisioned throughput or IOPS of an existing Hyperdisk volume
This section covers how to modify the provisioned performance of Hyperdisk volumes.
Throughput
Updating the provisioned throughput is supported for Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Throughput volumes only.
To update the provisioned throughput level of your Hyperdisk volume, follow the Google Cloud console, gcloud CLI, or Compute Engine API instructions in Changing the provisioned performance for a Hyperdisk volume.
You can change the provisioned throughput level of a Hyperdisk volume after volume creation, up to once every 4 hours. New throughput levels might take up to 15 minutes to take effect, and any performance SLA and SLO don't apply while the change is in progress. You can change the throughput level of an existing volume regardless of whether the disk is attached to a running instance.
The new throughput level you specify must be within the supported values for Hyperdisk Balanced, Hyperdisk Throughput, or Hyperdisk Balanced High Availability volumes, as applicable.
To update the provisioned throughput level for a Hyperdisk volume, you must identify the name of the Persistent Disk backing your PersistentVolumeClaim and PersistentVolume resources:
Go to the Object browser in the Google Cloud console.
Find the entry for your PersistentVolumeClaim object.
Click the Volume link.
Open the YAML tab of the associated PersistentVolume and locate the CSI volumeHandle value.
Note the last element of this handle (it has a value like pvc-XXXXX). This is the name of the Persistent Disk that backs your PersistentVolumeClaim. Also note the project and zone from the handle.
IOPS
Updating the provisioned IOPS is supported for Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme volumes only.
To update the provisioned IOPS level of your Hyperdisk volume, follow the Google Cloud console, gcloud CLI, or Compute Engine API instructions in Changing the provisioned performance for a Hyperdisk volume.
You can change the provisioned IOPS level of a Hyperdisk volume after volume creation, up to once every 4 hours. New IOPS levels might take up to 15 minutes to take effect, and any performance SLA and SLO don't apply while the change is in progress. You can change the IOPS level of an existing volume regardless of whether the disk is attached to a running instance.
The new IOPS level you specify must be within the supported values for Hyperdisk Balanced, Hyperdisk Balanced High Availability, or Hyperdisk Extreme volumes, as applicable.
To update the provisioned IOPS level for a Hyperdisk volume, you must identify the name of the Persistent Disk backing your PersistentVolumeClaim and PersistentVolume resources:
Go to the Object browser in the Google Cloud console.
Find the entry for your PersistentVolumeClaim object.
Click the Volume link.
Open the YAML tab of the associated PersistentVolume and locate the CSI volumeHandle value.
Note the last element of this handle (it has a value like pvc-XXXXX). This is the name of the Persistent Disk that backs your PersistentVolumeClaim. Also note the project and zone from the handle.
To monitor the provisioned performance of your Hyperdisk volume, see Analyze provisioned IOPS and throughput in the Compute Engine documentation.
Troubleshooting
This section provides troubleshooting guidance to resolve issues with Hyperdisk volumes on GKE.
Cannot change performance or capacity: ratio out of range
The following error occurs when you attempt to change the provisioned performance level or capacity, but the performance level or capacity that you picked is outside of the range that is acceptable for the volume:
Requested provisioned throughput cannot be higher than <value>.
Requested provisioned throughput cannot be lower than <value>.
Requested provisioned throughput is too high for the requested disk size.
Requested provisioned throughput is too low for the requested disk size.
Requested disk size is too high for current provisioned throughput.
The throughput or IOPS provisioned for a Hyperdisk volume must stay within the supported range for the volume's type and size; for the allowable values, see Plan the performance level for your Hyperdisk volume.
To resolve this issue, correct the requested throughput, IOPS, or capacity so that it falls within the allowable range, and then reissue the command.
Cannot change performance: rate limited
The following error occurs when you attempt to change the provisioned performance level, but the performance level has already been changed within the last 4 hours:
Cannot update provisioned throughput due to being rate limited.
Cannot update provisioned iops due to being rate limited.
The provisioned throughput or IOPS of a Hyperdisk volume can be updated at most once every 4 hours. To resolve this issue, wait for the volume's cool-down period to elapse, and then reissue the command.