Google Kubernetes Engine (GKE) provides a simple way for you to automatically deploy and manage the Compute Engine persistent disk Container Storage Interface (CSI) Driver in your clusters. The Compute Engine persistent disk CSI Driver is always enabled in Autopilot clusters and can't be disabled or edited. In Standard clusters, you must enable the Compute Engine persistent disk CSI Driver.
The Compute Engine persistent disk CSI Driver version is tied to the GKE version number, and is typically the latest driver available at the time that the GKE version is released. The driver updates automatically when the cluster is upgraded to the latest GKE patch.
Note: Because the Compute Engine persistent disk CSI Driver and some of its associated CSI components are deployed as separate containers, they incur resource usage (VM CPU, memory, and boot disk) on Kubernetes nodes. VM CPU usage is typically tens of millicores and memory usage is typically tens of MB. Boot disk usage is mostly incurred by the logs of the CSI driver and other system containers in the Deployment.

Benefits

Using the Compute Engine persistent disk CSI Driver provides benefits such as automatic deployment, management, and updating of the driver, and support for features such as CMEK and volume snapshots that the in-tree gce-pd provider does not offer.
Before you start, make sure that you have performed the following tasks, and ensure that the gcloud CLI components are up to date:

gcloud components update

Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid gcloud CLI errors like the following:

One of [--zone, --region] must be supplied: Please specify location
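As a sketch, you can set a default location like the following (the region and zone values here are placeholders; substitute your own):

```shell
# Set a default region for regional clusters (placeholder value):
gcloud config set compute/region us-central1

# Or, if you primarily use zonal clusters, set a default zone instead:
gcloud config set compute/zone us-central1-a
```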
You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.

To use the Compute Engine persistent disk CSI Driver, your clusters must be using the following versions:
In version 1.22 and later, CSI Migration is enabled. Existing volumes that use the gce-pd provider are migrated to communicate through CSI drivers instead. No changes are required to any StorageClass. However, the gce-pd provider still does not support features such as CMEK or volume snapshots. You must use the pd.csi.storage.gke.io provisioner in the StorageClass to enable these features.
To use the Compute Engine persistent disk CSI Driver with Workload Identity Federation for GKE, your Standard clusters must use the following versions:
Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.
Windows clusters: GKE version 1.22.6-gke.300 or later, or 1.23.2-gke.300 or later.
For Autopilot clusters, the Compute Engine persistent disk CSI Driver is enabled by default and cannot be disabled or edited.
To create a Standard cluster running a version in which the Compute Engine persistent disk CSI Driver is not automatically enabled, you can use the Google Cloud CLI or the Google Cloud console.
To enable the driver on cluster creation, complete the following steps:
gcloud

gcloud container clusters create CLUSTER-NAME \
--addons=GcePersistentDiskCsiDriver \
--cluster-version=VERSION
Replace the following:

CLUSTER-NAME: the name of your cluster.
VERSION: the GKE version number. You must select version 1.14 or higher to use this feature.

For the full list of flags, see the gcloud container clusters create documentation.
In the Google Cloud console, go to the Create a Kubernetes cluster page.
Configure the cluster as desired.
From the navigation pane, under Cluster, click Features.
Select the Enable Compute Engine persistent disk CSI Driver checkbox.
Click Create.
After you have enabled the Compute Engine persistent disk CSI Driver, you can use the driver in Kubernetes volumes using the driver and provisioner name: pd.csi.storage.gke.io
.
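One way to confirm that the driver is registered in the cluster (a quick check, assuming kubectl is configured against the cluster) is to list the CSIDriver objects:

```shell
# List CSI drivers registered in the cluster; the persistent disk
# driver appears as pd.csi.storage.gke.io when it is enabled.
kubectl get csidriver pd.csi.storage.gke.io
```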
To enable the Compute Engine persistent disk CSI Driver in existing Standard clusters, use the Google Cloud CLI or the Google Cloud console.
To enable the driver on an existing cluster, complete the following steps:
gcloud

gcloud container clusters update CLUSTER-NAME \
--update-addons=GcePersistentDiskCsiDriver=ENABLED
Replace CLUSTER-NAME
with the name of the existing cluster.
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.
Select the Enable Compute Engine Persistent Disk CSI Driver checkbox.
Click Save Changes.
You can disable the Compute Engine persistent disk CSI Driver for Standard clusters by using Google Cloud CLI or the Google Cloud console.
If you disable the driver, then any Pods currently using PersistentVolumes owned by the driver do not terminate. Any new Pods that try to use those PersistentVolumes also fail to start.
Note: Because the gcePersistentDisk volume type is migrated to the Compute Engine persistent disk CSI Driver in version 1.22 and later, if you disable the persistent disk CSI driver, the gcePersistentDisk volume type also stops working.

Warning: There is a known issue in GKE 1.21 and earlier if you are using the in-tree persistent disk driver and want to delete regional disks. A Compute Engine regional disk can leak when its related PersistentVolume resource is deleted. You can detect this problem when your API call to delete the regional disk fails and returns an error code other than NotFound. For more information, see this GitHub issue.
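Before disabling the driver, it can help to check whether any PersistentVolumes are still backed by it. One possible check (a sketch, assuming kubectl access to the cluster):

```shell
# List PersistentVolumes whose CSI driver is the persistent disk driver.
# If this prints any names, Pods using those volumes will be affected.
kubectl get pv -o jsonpath='{range .items[?(@.spec.csi.driver=="pd.csi.storage.gke.io")]}{.metadata.name}{"\n"}{end}'
```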
To disable the driver on an existing Standard cluster, complete the following steps:
gcloud

gcloud container clusters update CLUSTER-NAME \
--update-addons=GcePersistentDiskCsiDriver=DISABLED
Replace CLUSTER-NAME
with the name of the existing cluster.
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.
Clear the Enable Compute Engine Persistent Disk CSI Driver checkbox.
Click Save Changes.
The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Linux.
Create a StorageClass

After you enable the Compute Engine persistent disk CSI Driver, GKE automatically installs the following StorageClasses:
standard-rwo, using balanced persistent disk
premium-rwo, using SSD persistent disk

For Autopilot clusters, the default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver. For Standard clusters, the default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.
You can find the name of your installed StorageClasses by running the following command:
kubectl get sc
You can also install a different StorageClass that uses the Compute Engine persistent disk CSI Driver by specifying pd.csi.storage.gke.io in the provisioner field.
For example, you could create a StorageClass using the following file named pd-example-class.yaml
:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
You can specify the following persistent disk types in the type parameter:

pd-balanced
pd-ssd
pd-standard
pd-extreme (supported on GKE version 1.26 and later)

If you use pd-standard or pd-extreme, see Unsupported machine types for additional usage restrictions.

When you use the pd-extreme option, you must also add the provisioned-iops-on-create field to your manifest. This field must be set to the same value as the provisioned IOPS value that you specified when you created your persistent disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-extreme-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-extreme
  provisioned-iops-on-create: '10000'
After creating the pd-example-class.yaml
file, run the following command:
kubectl create -f pd-example-class.yaml
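You can then confirm that the new StorageClass exists and uses the expected provisioner (a quick check, assuming kubectl access):

```shell
# Show the StorageClass; the PROVISIONER column should read
# pd.csi.storage.gke.io.
kubectl get storageclass pd-example
```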
Create a PersistentVolumeClaim
You can create a PersistentVolumeClaim that references the Compute Engine persistent disk CSI Driver's StorageClass.
The following file, named pvc-example.yaml
, uses the pre-installed storage class standard-rwo
:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 6Gi
After creating the PersistentVolumeClaim manifest, run the following command:
kubectl create -f pvc-example.yaml
In the pre-installed StorageClass (standard-rwo
), volumeBindingMode
is set to WaitForFirstConsumer
. When volumeBindingMode
is set to WaitForFirstConsumer
, the PersistentVolume is not provisioned until a Pod referencing the PersistentVolumeClaim is scheduled. If volumeBindingMode
in the StorageClass is set to Immediate
(or it's omitted), a persistent-disk-backed PersistentVolume is provisioned after the PersistentVolumeClaim is created.
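Because standard-rwo uses WaitForFirstConsumer, you can observe this binding behavior directly (a sketch, assuming kubectl access):

```shell
# Before any Pod references the claim, its STATUS is Pending;
# after a consuming Pod is scheduled, it changes to Bound.
kubectl get pvc podpvc
```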
When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet). While you would not typically use a standalone Pod, the following example uses one for simplicity.
The following example consumes the volume that you created in the previous section:
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc
      readOnly: false
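Assuming the Pod manifest above is saved as pod-example.yaml (a hypothetical filename; the source does not name the file), you could apply it and verify that scheduling the Pod triggers provisioning:

```shell
# Create the Pod; scheduling it triggers volume provisioning.
kubectl apply -f pod-example.yaml

# Wait for the Pod to become Ready, then confirm the claim is Bound.
kubectl wait --for=condition=Ready pod/web-server --timeout=300s
kubectl get pvc podpvc
```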
Using the Compute Engine persistent disk CSI Driver for Windows clusters
The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Windows.
Ensure that your cluster meets the Windows version requirements listed earlier.
Creating a StorageClass for Windows is very similar to creating one for Linux. However, the StorageClass installed by default will not work for Windows because its file system type is different. The Compute Engine persistent disk CSI Driver for Windows requires NTFS as the file system type.
For example, you could create a StorageClass using the following file named pd-windows-class.yaml. Make sure to add csi.storage.k8s.io/fstype: NTFS to the parameters list:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-sc-windows
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
  csi.storage.k8s.io/fstype: NTFS
Create a PersistentVolumeClaim
After creating a StorageClass for Windows, you can now create a PersistentVolumeClaim that references that StorageClass:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc-windows
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: pd-sc-windows
  resources:
    requests:
      storage: 6Gi
Create a Pod that consumes the volume
The following example consumes the volume that you created in the previous task:
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: iis-server
    image: mcr.microsoft.com/windows/servercore/iis
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: podpvc-windows
      readOnly: false
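As a quick sanity check, assuming the manifest above is saved as pod-windows.yaml (a hypothetical filename), you can confirm that the Pod schedules onto a Windows node:

```shell
kubectl apply -f pod-windows.yaml

# The NODE column should show a node from a Windows node pool.
kubectl get pod web-server -o wide
```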
Using the Compute Engine persistent disk CSI Driver with non-default filesystem types
The default filesystem type for Compute Engine persistent disks in GKE is ext4. You can also use the xfs filesystem type, as long as your node image supports it. See Storage driver support for a list of supported drivers by node image.
The following example shows you how to use xfs
as the default filesystem type instead of ext4
with the Compute Engine persistent disk CSI Driver.
Save the following manifest as a YAML file named pd-xfs-class.yaml
:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xfs-class
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  csi.storage.k8s.io/fstype: xfs
volumeBindingMode: WaitForFirstConsumer
Apply the manifest:
kubectl apply -f pd-xfs-class.yaml
Save the following manifest as pd-xfs-pvc.yaml
:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-pvc
spec:
  storageClassName: xfs-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Apply the manifest:
kubectl apply -f pd-xfs-pvc.yaml
Save the following manifest as pd-xfs-pod.yaml
:
apiVersion: v1
kind: Pod
metadata:
  name: pd-xfs-pod
spec:
  containers:
  - name: cloud-sdk
    image: google/cloud-sdk:slim
    args: ["sleep","3600"]
    volumeMounts:
    - mountPath: /xfs
      name: xfs-volume
  volumes:
  - name: xfs-volume
    persistentVolumeClaim:
      claimName: xfs-pvc
Apply the manifest:
kubectl apply -f pd-xfs-pod.yaml
Open a shell session in the Pod:
kubectl exec -it pd-xfs-pod -- /bin/bash
Look for xfs
partitions:
df -aTh --type=xfs
The output should be similar to the following:
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb xfs 30G 63M 30G 1% /xfs
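To additionally confirm that the xfs mount is writable, you could run a quick write test from inside the Pod (a sketch; the test file name is arbitrary):

```shell
# Write and read back a test file on the xfs-backed volume.
kubectl exec pd-xfs-pod -- sh -c 'echo ok > /xfs/test.txt && cat /xfs/test.txt'
```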
You can use Cloud Logging to view events that relate to the Compute Engine persistent disk CSI Driver. Logs can help you troubleshoot issues.
For more information about Cloud Logging, see Viewing your GKE logs.
To view logs for the Compute Engine persistent disk CSI Driver, complete the following steps:
Go to the Cloud Logging page in the Google Cloud console.
Run the following query:
resource.type="k8s_container"
resource.labels.project_id="PROJECT_ID"
resource.labels.location="LOCATION"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="gce-pd-driver"
Replace the following:
PROJECT_ID: the name of your project.
LOCATION: the Compute Engine region or zone of the cluster.
CLUSTER_NAME: the name of your cluster.

If you are using the C3 series machine family, the pd-standard persistent disk type is not supported.
If you attempt to run a Pod that uses an unsupported persistent disk type on one of these machines, you will see a warning message like the following emitted on the Pod:
AttachVolume.Attach failed for volume "pvc-d7397693-5097-4a70-9df0-b10204611053" : rpc error: code = Internal desc = unknown Attach error: failed when waiting for zonal op: operation operation-1681408439910-5f93b68c8803d-6606e4ed-b96be2e7 failed (UNSUPPORTED_OPERATION): [pd-standard] features are not compatible for creating instance.
If your cluster has multiple node pools with different machine families, you can use node taints and node affinity to limit where workloads can be scheduled. For example, you can use this approach to restrict a workload using pd-standard
from running on an unsupported machine family.
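As a hedged sketch of this approach, the following Pod spec fragment uses node affinity on the cloud.google.com/machine-family node label (treat the exact label key and values as assumptions to verify against the labels on your own nodes) to keep a pd-standard workload off C3 nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pd-standard-workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Assumed label: GKE sets cloud.google.com/machine-family
          # on nodes (for example "e2", "n2", "c3").
          - key: cloud.google.com/machine-family
            operator: NotIn
            values:
            - c3
  containers:
  - name: app
    image: nginx
```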
If you are using the pd-extreme
persistent disk type, you need to ensure that your disk is attached to a VM instance with a suitable machine shape. To learn more, refer to Machine shape support.