The Filestore CSI driver is the primary way for you to use Filestore instances with Google Kubernetes Engine (GKE). The Filestore CSI driver provides a fully-managed experience powered by the open source Google Cloud Filestore CSI driver.
The Filestore CSI driver version is tied to Kubernetes minor version numbers. The Filestore CSI driver version is typically the latest driver available at the time that the Kubernetes minor version is released. The drivers update automatically when the cluster is upgraded to the latest GKE patch.
Note: Because the Filestore CSI driver and some of the other associated CSI components are deployed as separate containers, they incur resource usage (VM CPU, memory, and boot disk) on Kubernetes nodes. VM CPU usage is typically tens of millicores and memory usage is typically tens of MiB. Boot disk usage is mostly incurred by the logs of the CSI driver and other system containers in the Deployment.

Benefits

The Filestore CSI driver provides the following benefits:
You have access to fully-managed NFS storage through the Kubernetes APIs (kubectl).
You can use the GKE Filestore CSI driver to dynamically provision your PersistentVolumes.
You can use volume snapshots with the GKE Filestore CSI driver. CSI volume snapshots can be used to create Filestore backups.
A Filestore backup creates a differential copy of the file share, including all file data and metadata, and stores it separately from the instance. You can restore this copy to a new Filestore instance only; restoring to an existing Filestore instance is not supported. You can use the CSI volume snapshot API to trigger Filestore backups by adding a type: backup field in the volume snapshot class.
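For example, a VolumeSnapshotClass that triggers Filestore backups might look like the following sketch (the class name here is illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-backup-snap-class   # illustrative name
driver: filestore.csi.storage.gke.io
deletionPolicy: Delete
parameters:
  type: backup   # creates a Filestore backup instead of an instance snapshot
```

VolumeSnapshots created with this class then produce Filestore backups of the underlying file share.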
You can use volume expansion with the GKE Filestore CSI driver. Volume expansion lets you resize your volume's capacity.
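Expansion is requested by increasing the storage request on an existing PersistentVolumeClaim, provided its StorageClass sets allowVolumeExpansion: true. A minimal sketch, reusing the claim and class names from the examples later in this document:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 2Ti   # increased from 1Ti to request expansion
```

Apply the updated claim with kubectl apply and the driver resizes the backing Filestore capacity.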
You can access existing Filestore instances by using pre-provisioned Filestore instances in Kubernetes workloads. You can also dynamically create or delete Filestore instances and use them in Kubernetes workloads with a StorageClass or a Deployment.
Supports Filestore multishares for GKE. This feature lets you create a Filestore instance and allocate multiple smaller NFS-mounted PersistentVolumes from it simultaneously across any number of GKE clusters.
Supports Basic HDD tier with a minimum capacity of 100 GiB.
To use the Filestore CSI driver, your clusters must use the GKE version required for your service tier. Only the following service tiers are supported:
The Filestore CSI driver is supported for clusters using Linux only; Windows Server nodes are not supported.
The minimum instance size depends on the Filestore service tier you selected:
To learn more, see Service tiers.
Filestore uses the NFSv3 file system protocol on the Filestore instance by default and supports any NFSv3-compatible client.
The NFSv4.1 file system protocol on Filestore instances is supported for GKE version 1.33 or later.
Before you start, make sure that you have performed the following tasks:
gcloud components update

Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid gcloud CLI errors like the following: One of [--zone, --region] must be supplied: Please specify location. You might still need to specify the location in certain commands if the location of your cluster differs from the default that you set.

To enable the Filestore CSI driver when creating a new Standard cluster, follow these steps with the Google Cloud CLI or the Google Cloud console.
gcloud

gcloud container clusters create CLUSTER_NAME \
--addons=GcpFilestoreCsiDriver \
--cluster-version=VERSION
Replace the following:
CLUSTER_NAME: the name of your cluster.
VERSION: the GKE version number. You must select a supported version number to use this feature; see Requirements for details. Alternatively, you can use the --release-channel flag and specify a release channel.

In the Google Cloud console, go to the Create a Kubernetes cluster page.
Configure the cluster to fit your needs.
From the navigation pane, under Cluster, click Features.
Select the Enable Filestore CSI driver checkbox.
Click Create.
If you want to use Filestore on a Shared VPC network, see Enable the Filestore CSI driver on a new cluster with Shared VPC.
After you enable the Filestore CSI driver, you can use the driver in Kubernetes volumes using the driver and provisioner name: filestore.csi.storage.gke.io.
To enable the Filestore CSI driver in existing clusters, use the Google Cloud CLI or the Google Cloud console.
To enable the driver on an existing cluster, complete the following steps:
gcloud

gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=ENABLED
Replace CLUSTER_NAME with the name of the existing cluster.
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Filestore CSI driver field, click edit Edit Filestore CSI driver.
Select the Enable Filestore CSI driver checkbox.
Click Save Changes.
You can disable the Filestore CSI driver on an existing Autopilot or Standard cluster by using the Google Cloud CLI or the Google Cloud console.
Note: We strongly recommend not to manually disable the Filestore CSI driver on Autopilot clusters. Doing so causes any Pods using PersistentVolumes owned by the driver to fail to terminate. New Pods attempting to use those PersistentVolumes will also fail to start.

gcloud

gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=DISABLED \
--region REGION
Replace the following values:

CLUSTER_NAME: the name of the existing cluster.
REGION: the region for your cluster (such as us-central1).

In the Google Cloud console, go to the Google Kubernetes Engine menu.
In the cluster list, click the name of the cluster you want to modify.
Under Features, next to the Filestore CSI driver field, click edit Edit Filestore CSI driver.
Clear the Enable Filestore CSI driver checkbox.
Click Save Changes.
This section describes the typical process for using a Kubernetes volume to access pre-existing Filestore instances with the Filestore CSI driver in GKE.

Create a PersistentVolume and a PersistentVolumeClaim to access the instance

Create a manifest file like the one shown in the following example, and name it preprov-filestore.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
name: PV_NAME
spec:
storageClassName: ""
capacity:
storage: 1Ti
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
csi:
driver: filestore.csi.storage.gke.io
volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
volumeAttributes:
ip: FILESTORE_INSTANCE_IP
volume: FILESTORE_SHARE_NAME
protocol: FILESYSTEM_PROTOCOL
claimRef:
name: PVC_NAME
namespace: NAMESPACE
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: PVC_NAME
namespace: NAMESPACE
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Ti
To create the PersistentVolumeClaim and PersistentVolume resources based on the preprov-filestore.yaml manifest file, run the following command:
kubectl apply -f preprov-filestore.yaml
To specify the NFSv4.1 file system protocol, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object. To use the NFSv3 file system protocol, set the protocol field to NFS_V3 or omit the protocol field.
Then, proceed to create a Deployment that consumes the volume.
Create a volume using the Filestore CSI driver

The following sections describe the typical process for using a Kubernetes volume backed by the Filestore CSI driver in GKE:
After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:
zonal-rwx, using the Filestore zonal tier.
enterprise-rwx, using the Filestore enterprise tier, where each Kubernetes PersistentVolume maps to a Filestore instance.
enterprise-multishare-rwx, using the Filestore enterprise tier, where each Kubernetes PersistentVolume maps to a share of a given Filestore instance. To learn more, see Filestore multishares for Google Kubernetes Engine.
standard-rwx, using the Filestore basic HDD service tier.
premium-rwx, using the Filestore basic SSD service tier.

Each StorageClass is only available in GKE clusters running its supported GKE version numbers. For a list of supported versions required for each service tier, see Requirements.
You can find the name of your installed StorageClass by running the following command:
kubectl get sc
You can also install a different StorageClass that uses the Filestore CSI driver by adding filestore.csi.storage.gke.io in the provisioner field.
Filestore needs to know on which network to create the new instance. The automatically installed StorageClasses use the default network created for GKE clusters. If you have deleted this network or want to use a different network, you must create a new StorageClass as described in the following steps. Otherwise, the automatically installed StorageClasses won't work.
Save the following manifest as filestore-example-class.yaml
:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
tier: standard
network: default
From the manifest, consider the following parameter configuration:

Setting volumeBindingMode to Immediate allows the provisioning of the volume to begin immediately. This is possible because Filestore instances are accessible from any zone, so GKE does not need to know the zone where the Pod is scheduled, in contrast with Compute Engine persistent disks. When set to WaitForFirstConsumer, GKE begins provisioning only after the Pod is scheduled. For more information, see VolumeBindingMode.
The tier parameter specifies the Filestore service tier (for example, BASIC_HDD, BASIC_SSD, ZONAL, or ENTERPRISE).
The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.
The protocol parameter can be used to set the file system protocol of the Filestore instance. It can take the following values: NFS_V3 (default) and NFS_V4_1.

To create a StorageClass resource based on the filestore-example-class.yaml manifest file, run the following command:
kubectl create -f filestore-example-class.yaml
If you want to use Filestore on a Shared VPC network, see Create a StorageClass when using the Filestore CSI driver with Shared VPC.
Use a PersistentVolumeClaim to access the volume

You can create a PersistentVolumeClaim resource that references the Filestore CSI driver's StorageClass. You can use either a pre-installed or custom StorageClass.

The following example manifest file creates a PersistentVolumeClaim that references the StorageClass named filestore-example.
Save the following manifest file as pvc-example.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
spec:
accessModes:
- ReadWriteMany
storageClassName: filestore-example
resources:
requests:
storage: 1Ti
To create a PersistentVolumeClaim resource based on the pvc-example.yaml manifest file, run the following command:
kubectl create -f pvc-example.yaml
The following example Deployment manifest consumes the PersistentVolumeClaim resource named podpvc, which is defined in the same manifest. Multiple Pods can share the same PersistentVolumeClaim resource.
Save the following manifest as filestore-example-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-server-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: podpvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
spec:
accessModes:
- ReadWriteMany
storageClassName: filestore-example
resources:
requests:
storage: 1Ti
To create a Deployment based on the filestore-example-deployment.yaml manifest file, run the following command:
kubectl apply -f filestore-example-deployment.yaml
Confirm the Deployment was successfully created:
kubectl get deployment
It might take a while for Filestore instances to complete provisioning. Until then, Deployments won't report a READY status. You can check the progress by monitoring your PVC status with the following command:
kubectl get pvc
You should see the PVC reach the Bound status when volume provisioning completes.
You can use labels to group related instances and store metadata about an instance. A label is a key-value pair that helps you organize your Filestore instances. You can attach a label to each resource, then filter the resources based on their labels.
You can provide labels by using the labels key in StorageClass.parameters. A Filestore instance can be labeled with information about what PersistentVolumeClaim/PersistentVolume the instance was created for. Custom label keys and values must comply with the label naming convention. See the Kubernetes storage class example to apply custom labels to the Filestore instance.
The Filestore CSI driver supports the NFSv4.1 file system protocol with GKE version 1.33 or later. For static provisioning, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object. For dynamic provisioning, set the protocol field to NFS_V4_1 in the parameters of a StorageClass object.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: enterprise-multishare-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
tier: enterprise
multishare: "true"
instance-storageclass-label: "enterprise-multishare-rwx"
protocol: NFS_V4_1
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
You cannot mount the Filestore instance with the NFSv4.1 protocol while mountOptions is set to nfsvers=3 in the StorageClass object.
Kubernetes uses fsGroup to change permissions and ownership of the volume to match a user-requested fsGroup in the Pod's SecurityContext. An fsGroup is a supplemental group that applies to all containers in a Pod. You can apply an fsGroup to volumes provisioned by the Filestore CSI driver.
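For example, a Pod that mounts a Filestore-backed claim with an fsGroup might look like the following sketch (the claim name podpvc matches the earlier examples; the group ID is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 1000          # group ownership of the mounted volume is set to GID 1000
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: myvol
  volumes:
  - name: myvol
    persistentVolumeClaim:
      claimName: podpvc    # claim backed by the Filestore CSI driver
```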
Filestore supports IP-based access control rules for volumes. This feature is available on GKE clusters running version 1.29.5 or later.
This feature allows administrators to specify which IP address ranges are allowed to access a Filestore instance provisioned dynamically through GKE. This enhances security by restricting access to only authorized clients, especially in scenarios where the GKE cluster's IP range is too broad, potentially exposing the Filestore instance to unauthorized users or applications.
These rules can be configured directly through the Filestore API, or through the Filestore CSI driver when a volume is created. You can provide the selected configuration in JSON format in the StorageClass using the nfs-export-options-on-create parameter.
The following example manifest shows how to specify the configuration:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
tier: "enterprise"
nfs-export-options-on-create: '[
{
"accessMode": "READ_WRITE",
"ipRanges": [
"10.0.0.0/24"
],
"squashMode": "ROOT_SQUASH",
"anonUid": "1003",
"anonGid": "1003"
},
{
"accessMode": "READ_WRITE",
"ipRanges": [
"10.0.0.0/28"
],
"squashMode": "NO_ROOT_SQUASH"
}
]'
Note: Use the node primary IP range instead of the Pod IP in the nfs-export-options-on-create parameter.

Security options

Filestore IP access rules simplify the configuration of shared file storage permissions for your GKE workloads. However, understanding how they manage file ownership and access requires grasping a few key concepts:
NFS and user mappings

NFS (Network File System) is the protocol used by Filestore. It works by mapping users on client systems (your GKE Pods) to users on the Filestore server. If a file on the server is owned by user ID 1003, and a client connects with user ID 1003, they'll have access to the file.
Root squashing and anonUid

Root squashing (ROOT_SQUASH) is a security feature that prevents clients from accessing the Filestore instance with full root privileges. When root squashing is enabled, root users on client systems are mapped to a non-privileged user specified by the anonUid setting.

No root squashing (NO_ROOT_SQUASH) allows clients to access the Filestore instance with full root privileges, which is convenient for initial setup but less secure for regular operations.
Initial setup and permissions

By default, a new Filestore instance is owned entirely by the root user. If you enable root squashing without first setting up permissions for other users, you'll lose access. This is why you need at least one NFS export rule with NO_ROOT_SQUASH to initially configure access for other users and groups:

Create your first export rule with READ_WRITE permissions and NO_ROOT_SQUASH access. Use this access to create directories, set permissions, and assign ownership as needed.
Then update the export rules to enforce root squashing (ROOT_SQUASH) to enhance security. Note that after a volume is created, you can only modify the access rules through the Filestore API.
Use fsGroup in your Pod security contexts to manage group ownership of shared volumes. Make sure not to overlap your setting with the ROOT_SQUASH mode. Doing so returns an Access denied error message.

This section covers how to use a Filestore instance on a Shared VPC network from a service project.
To set up your clusters with a Shared VPC network, follow these steps:
Set up a HostServiceAgent user role binding for the service project's GKE service account.

To enable the Filestore CSI driver on a new cluster with Shared VPC, follow these steps:
Verify the usable subnets and secondary ranges. When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services.
gcloud container subnets list-usable \
--project=SERVICE_PROJECT_ID \
--network-project=HOST_PROJECT_ID
The output is similar to the following:
PROJECT REGION NETWORK SUBNET RANGE
HOST_PROJECT_ID us-central1 shared-net tier-1 10.0.4.0/22
┌──────────────────────┬───────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │ STATUS │
├──────────────────────┼───────────────┼─────────────────────────────┤
│ tier-1-pods │ 10.4.0.0/14 │ usable for pods or services │
│ tier-1-services │ 10.0.32.0/20 │ usable for pods or services │
└──────────────────────┴───────────────┴─────────────────────────────┘
Create a GKE cluster. The following examples show how you can use the gcloud CLI to create an Autopilot or Standard cluster configured for Shared VPC. The examples use the network, subnet, and range names from Creating a network and two subnets.
Autopilot

gcloud container clusters create-auto tier-1-cluster \
--project=SERVICE_PROJECT_ID \
--region=COMPUTE_REGION \
--network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
--subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
--cluster-secondary-range-name=tier-1-pods \
--services-secondary-range-name=tier-1-services
Standard
gcloud container clusters create tier-1-cluster \
--project=SERVICE_PROJECT_ID \
--zone=COMPUTE_ZONE \
--enable-ip-alias \
--network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
--subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
--cluster-secondary-range-name=tier-1-pods \
--services-secondary-range-name=tier-1-services \
--addons=GcpFilestoreCsiDriver
Create firewall rules to allow communication between nodes, Pods, and Services in your cluster. The following example shows how you can create a firewall rule named my-shared-net-rule-2.
gcloud compute firewall-rules create my-shared-net-rule-2 \
--project HOST_PROJECT_ID \
--network=NETWORK_NAME \
--allow=tcp,udp \
--direction=INGRESS \
--source-ranges=10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
In the example, the source range IP values come from the previous step, where you verified the usable subnets and secondary ranges.
The following example shows how you can create a StorageClass when using the Filestore CSI driver with Shared VPC:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: filestore-sharedvpc-example
provisioner: filestore.csi.storage.gke.io
parameters:
network: "projects/HOST_PROJECT_ID/global/networks/SHARED_VPC_NAME"
connect-mode: PRIVATE_SERVICE_ACCESS
reserved-ip-range: RESERVED_IP_RANGE_NAME
allowVolumeExpansion: true
EOF
Replace the following:
HOST_PROJECT_ID: the ID or name of the host project of the Shared VPC network.
SHARED_VPC_NAME: the name of the Shared VPC network you created earlier.
RESERVED_IP_RANGE_NAME: the name of the specific reserved IP address range to provision the Filestore instance in. This field is optional. If a reserved IP address range is specified, it must be a named address range instead of a direct CIDR value.

If you want to provision a volume backed by Filestore multishares on GKE clusters running version 1.23 or later, see Optimize storage with Filestore multishares for GKE.
Reconnect Filestore single share volumes

If you are using Filestore with the basic HDD, basic SSD, or enterprise (single share) tier, you can follow these instructions to reconnect your existing Filestore instance to your GKE workloads.
Find the details of your pre-provisioned Filestore instance by following the instructions in Getting information about a specific instance.
Redeploy your PersistentVolume specification. In the volumeAttributes field, modify the following fields to use the same values as your Filestore instance from step 1:

ip: modify this value to the pre-provisioned Filestore instance IP address.
volume: modify this value to the pre-provisioned Filestore instance's share name.

In the claimRef, make sure you reference the same PersistentVolumeClaim in step 2.

Redeploy your PersistentVolumeClaim specification.
Check the binding status of your PersistentVolumeClaim and PersistentVolume by running kubectl get pvc.
Redeploy your Pod specification and ensure that your Pod is able to access the Filestore share again.
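As a sketch, the relevant portion of the redeployed PersistentVolume might look like the following (the location, instance name, share name, and IP address here are hypothetical; use the values from your own instance):

```yaml
csi:
  driver: filestore.csi.storage.gke.io
  volumeHandle: "modeInstance/us-central1-a/my-filestore/my-share"  # hypothetical location/instance/share
  volumeAttributes:
    ip: 10.0.0.2        # pre-provisioned Filestore instance IP address (hypothetical)
    volume: my-share    # pre-provisioned instance's share name (hypothetical)
```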