This page explains how to create a PersistentVolume using existing persistent disks populated with data, and how to use the PersistentVolume in a Pod.
Overview

There are two common scenarios that use a pre-existing persistent disk.
The examples in this page use existing Compute Engine persistent disks.
While ext4 is the default filesystem type, you can use a pre-existing persistent disk with the xfs filesystem instead, as long as your node image supports it. To use an xfs disk, change spec.csi.fsType to xfs in the PersistentVolume manifest.
Windows does not support the ext4 filesystem type. You must use the NTFS filesystem for Windows Server node pools. To use an NTFS disk, change spec.csi.fsType to NTFS in the PersistentVolume manifest.
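For example, the csi stanza of such a PersistentVolume might look like the following sketch, where the project, zone, and disk name in volumeHandle are placeholders:

```yaml
# Fragment of a PersistentVolume spec; my-project, us-central1-a, and my-disk
# are placeholder values -- substitute your own.
csi:
  driver: pd.csi.storage.gke.io
  volumeHandle: projects/my-project/zones/us-central1-a/disks/my-disk
  fsType: xfs   # default is ext4; use NTFS for Windows Server node pools
```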
If the disk's existing filesystem does not match the fsType you specify in the PersistentVolume manifest, Pods you create that consume the volume will not start, and you will see errors in the Pod description. Similarly, Persistent Disk CSI volumes support only a single partition. Volumes created outside of GKE with multiple partitions cannot be used with this technique.

Before you begin
Before you start, make sure you have performed the following tasks:
If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Note: For existing gcloud CLI installations, make sure to set the compute/region and compute/zone properties. By setting default locations, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
For a container to access your pre-existing persistent disk, you'll need to do the following:
There are several ways to bind a PersistentVolumeClaim to a specific PersistentVolume. For example, the following YAML manifest creates a new PersistentVolume and PersistentVolumeClaim, and then binds the volume to the claim using the claimRef defined on the PersistentVolume.
To bind a PersistentVolume to a PersistentVolumeClaim, the storageClassName of the two resources must match, as well as capacity, accessModes, and volumeMode. The storageClassName can be left empty, but you must explicitly specify "" to prevent Kubernetes from using the default StorageClass.
The storageClassName does not need to refer to an existing StorageClass object. If all you need is to bind the claim to a volume, you can use any name you want. However, if you need extra functionality configured by a StorageClass, like volume resizing, then storageClassName must refer to an existing StorageClass object.
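As a sketch of that rule, both halves of a manually bound pair can set the field to the empty string (a fragment, not a full manifest):

```yaml
# In both the PersistentVolume and the PersistentVolumeClaim spec:
spec:
  storageClassName: ""  # explicit empty string, so the default StorageClass is not applied
```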
For more details, see the Kubernetes documentation on PersistentVolumes.
Note: The persistent disk must be in the same zone as the cluster nodes.

Save the following YAML manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: PV_NAME
spec:
  storageClassName: "STORAGE_CLASS_NAME"
  capacity:
    storage: DISK_SIZE
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: PV_CLAIM_NAME
    namespace: default
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: DISK_ID
    fsType: FS_TYPE
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: PV_CLAIM_NAME
spec:
  storageClassName: "STORAGE_CLASS_NAME"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: DISK_SIZE
Replace the following:
PV_NAME: the name of your new PersistentVolume.
STORAGE_CLASS_NAME: the name of your new StorageClass.
DISK_SIZE: the size of your pre-existing persistent disk. For example, 500G.
PV_CLAIM_NAME: the name of your new PersistentVolumeClaim.
DISK_ID: the identifier of your pre-existing persistent disk. The format is projects/{project_id}/zones/{zone_name}/disks/{disk_name} for zonal persistent disks, or projects/{project_id}/regions/{region_name}/disks/{disk_name} for regional persistent disks.
FS_TYPE: the filesystem type. You can leave this as the default (ext4), or use xfs. If your clusters use a Windows Server node pool, you must change this to NTFS.

To apply the configuration and create the PersistentVolume and PersistentVolumeClaim resources, run the following command:
kubectl apply -f FILE_PATH
Replace FILE_PATH with the path to the YAML file.
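To confirm that the bind succeeded, you can inspect the claim (using the same PV_CLAIM_NAME placeholder as above); its status should show Bound:

```shell
kubectl get persistentvolumeclaim PV_CLAIM_NAME --namespace default
```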
After you create and bind the PersistentVolume and PersistentVolumeClaim, you can give a Pod's containers access to the volume by specifying values in the volumeMounts field.

The following YAML configuration creates a new Pod and a container running an nginx image, and then mounts the PersistentVolume on the Pod:
kind: Pod
apiVersion: v1
metadata:
  name: POD_NAME
spec:
  volumes:
    - name: VOLUME_NAME
      persistentVolumeClaim:
        claimName: PV_CLAIM_NAME
  containers:
    - name: CONTAINER_NAME
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: VOLUME_NAME
Replace the following:
POD_NAME: the name of your new Pod.
VOLUME_NAME: the name of the volume.
PV_CLAIM_NAME: the name of the PersistentVolumeClaim you created in the previous step.
CONTAINER_NAME: the name of your new container.

Apply the configuration:
kubectl apply -f FILE_PATH
Replace FILE_PATH with the path to the YAML file.
To verify that the volume was mounted, run the following command:
kubectl describe pods POD_NAME
In the output, check that the PersistentVolumeClaim was mounted:
...
Volumes:
  VOLUME_NAME:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  PV_CLAIM_NAME
    ReadOnly:   false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned default/POD_NAME to gke-cluster-1-default-pool-d5cde866-o4g4
Normal SuccessfulAttachVolume 21s attachdetach-controller AttachVolume.Attach succeeded for volume "PV_NAME"
Normal Pulling 19s kubelet Pulling image "nginx"
Normal Pulled 19s kubelet Successfully pulled image "nginx"
Normal Created 18s kubelet Created container CONTAINER_NAME
Normal Started 18s kubelet Started container CONTAINER_NAME
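As an optional check that the pre-existing data on the disk is visible, you can list the mount path inside the container (same placeholders as above):

```shell
kubectl exec POD_NAME -- ls /usr/share/nginx/html
```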
Using a pre-existing disk in a StatefulSet
You can use pre-existing Compute Engine persistent disks in a StatefulSet using PersistentVolumes. The StatefulSet automatically generates a PersistentVolumeClaim for each replica. You can predict the names of the generated PersistentVolumeClaims and bind them to the PersistentVolumes using claimRef.
In the following example, you take two pre-existing persistent disks, create PersistentVolumes to use the disks, and then mount the volumes on a StatefulSet with two replicas in the default namespace.
Work out the names of the automatically generated PersistentVolumeClaims. The StatefulSet uses the following format for PersistentVolumeClaim names:
PVC_TEMPLATE_NAME-STATEFULSET_NAME-REPLICA_INDEX
Replace the following:
PVC_TEMPLATE_NAME: the name of your new PersistentVolumeClaim template.
STATEFULSET_NAME: the name of your new StatefulSet.
REPLICA_INDEX: the index of the StatefulSet's replica. For this example, use 0 and 1.
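For instance, with a hypothetical template name www and StatefulSet name web, the two generated claim names can be computed as follows:

```shell
# Hypothetical values for illustration only; substitute your own names.
PVC_TEMPLATE_NAME=www
STATEFULSET_NAME=web
for REPLICA_INDEX in 0 1; do
  echo "${PVC_TEMPLATE_NAME}-${STATEFULSET_NAME}-${REPLICA_INDEX}"
done
```

This prints www-web-0 and www-web-1, the claim names to reference in each PersistentVolume's claimRef.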
Create the PersistentVolumes. You must create a PersistentVolume for each replica in the StatefulSet.
Save the following YAML manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ss-demo-0
spec:
  storageClassName: "STORAGE_CLASS_NAME"
  capacity:
    storage: DISK1_SIZE
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: PVC_TEMPLATE_NAME-STATEFULSET_NAME-0
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: DISK1_ID
    fsType: FS_TYPE
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ss-demo-1
spec:
  storageClassName: "STORAGE_CLASS_NAME"
  capacity:
    storage: DISK2_SIZE
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: PVC_TEMPLATE_NAME-STATEFULSET_NAME-1
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: DISK2_ID
    fsType: FS_TYPE
Replace the following:
DISK1_SIZE and DISK2_SIZE: the sizes of your pre-existing persistent disks.
DISK1_ID and DISK2_ID: the identifiers of your pre-existing persistent disks.
PVC_TEMPLATE_NAME-STATEFULSET_NAME-0 and PVC_TEMPLATE_NAME-STATEFULSET_NAME-1: the names of the automatically generated PersistentVolumeClaims in the format defined in the previous step.
STORAGE_CLASS_NAME: the name of your StorageClass.

Apply the configuration:
kubectl apply -f FILE_PATH
Replace FILE_PATH with the path to the YAML file.

Create a StatefulSet using the values you chose in step 1. Ensure that the storage you specify in the volumeClaimTemplates is less than or equal to the total capacity of your PersistentVolumes.
Save the following YAML manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: STATEFULSET_NAME
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: PVC_TEMPLATE_NAME
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: PVC_TEMPLATE_NAME
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "STORAGE_CLASS_NAME"
        resources:
          requests:
            storage: 100Gi
Replace the following:
STATEFULSET_NAME: the name of your new StatefulSet.
PVC_TEMPLATE_NAME: the name of your new PersistentVolumeClaim template.
STORAGE_CLASS_NAME: the name of your StorageClass.

Apply the configuration:
kubectl apply -f FILE_PATH
Replace FILE_PATH with the path to the YAML file.
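To verify that each replica's generated claim bound to the intended pre-existing volume, you can list the claims; the VOLUME column should show pv-ss-demo-0 and pv-ss-demo-1:

```shell
kubectl get persistentvolumeclaim --namespace default
```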
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-02 UTC.