Provision and use Local SSD-backed raw block storage | Google Kubernetes Engine (GKE)

This page explains how to provision Local SSD storage on Google Kubernetes Engine (GKE) clusters, and how to configure workloads to consume data from Local SSD-backed raw block storage attached to nodes in your cluster.

Using this Local SSD option gives you more control over the underlying storage and lets you build your own node-level cache for Pods to deliver better performance for your applications. You can also customize this option by installing a file system on the Local SSD disks: run a DaemonSet to configure RAID and format the disks as needed.

Note: Local SSD volumes require machine type n1-standard-1 or larger; the default machine type, e2-medium, is not supported. To learn more, see machine types in the Compute Engine documentation.

Warning: We don't recommend using Local SSD as raw block storage for your storage-optimized (Z3) VMs. Data on the Local SSDs of Z3 nodes might not be available during maintenance events, and data recovery isn't assured after maintenance. Instead, use Local SSD as ephemeral storage for workloads running on Z3 VMs.

To learn more about Local SSD support for raw block access on GKE, see About local SSDs.

Before you begin

Before you start, make sure that you have performed the following tasks:

Create a cluster or node pool with Local SSD-backed raw block storage

Use the gcloud CLI with the --local-nvme-ssd-block option to create a cluster with Local SSD-backed raw block storage.

Note: To use Local SSD-backed raw block storage, you must manually configure the devices. Local SSD disks are not formatted for RAID. You are also responsible for ensuring capacity on nodes and handling potential disruptions from co-located workloads. If you use local PersistentVolume, scheduling is integrated. If you don't need data to persist across the Pod lifecycle, or if you don't have special performance requirements, use the --ephemeral-storage-local-ssd option which should fit most use cases.
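For example, if ephemeral storage fits your use case, a cluster for a first- or second-generation machine series might be created with a command along these lines. This is only a sketch: the count semantics are assumed to mirror --local-nvme-ssd-block, and the placeholder values are yours to choose.

gcloud container clusters create CLUSTER_NAME \
    --ephemeral-storage-local-ssd count=NUMBER_OF_DISKS \
    --machine-type=MACHINE_TYPE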

The gcloud CLI command you run to create the cluster or node pool depends on which machine series generation the machine type you are using belongs to. For example, N1 and N2 machine types belong to a first and second generation machine series respectively, while C3 machine types belong to a third generation machine series.

Create a cluster with Local SSD

1st or 2nd Generation

If you use a machine type from a first or second generation machine series, you create your cluster by specifying the --local-nvme-ssd-block count=NUMBER_OF_DISKS option. The option specifies the number of Local SSD disks to attach to each node. The maximum number varies by machine type and region.

To create a cluster:

gcloud container clusters create CLUSTER_NAME \
    --local-nvme-ssd-block count=NUMBER_OF_DISKS \
    --machine-type=MACHINE_TYPE \
    --release-channel CHANNEL_NAME

Replace the following:

  * CLUSTER_NAME: the name of the new cluster.
  * NUMBER_OF_DISKS: the number of Local SSD disks to attach to each node. The maximum number varies by machine type and region.
  * MACHINE_TYPE: the machine type to use. The default e2-medium type does not support Local SSD.
  * CHANNEL_NAME: a release channel, for example rapid, regular, or stable.
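
For example, the following hypothetical command (the cluster name, machine type, and disk count are illustrative, not prescribed by this guide) creates a cluster whose nodes each expose two Local SSD disks as raw block devices:

gcloud container clusters create example-raw-block-cluster \
    --local-nvme-ssd-block count=2 \
    --machine-type=n2-standard-8 \
    --release-channel regular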

3rd or 4th Generation

If you use a machine type from a third or fourth generation machine series, use the --local-nvme-ssd-block option, without a count field, to create a cluster. GKE automatically provisions Local SSD capacity for your cluster based on the VM shape. The maximum number varies by machine type and region.

gcloud container clusters create CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --cluster-version CLUSTER_VERSION \
    --local-nvme-ssd-block

Replace the following:

  * CLUSTER_NAME: the name of the new cluster.
  * MACHINE_TYPE: the machine type to use, for example a C3 machine type.
  * CLUSTER_VERSION: the GKE version to use. The version must support Local SSD on the chosen machine type.
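
As a hypothetical example (the cluster name, machine type, and version are illustrative; choose values available in your project), the count field is omitted because GKE derives the Local SSD capacity from the VM shape:

gcloud container clusters create example-c3-cluster \
    --machine-type=c3-standard-8-lssd \
    --cluster-version 1.30 \
    --local-nvme-ssd-block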

Create a node pool with Local SSD

1st or 2nd Generation

To create a node pool that uses Local SSD disks for raw block access, run the following command:

gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --local-nvme-ssd-block count=NUMBER_OF_DISKS

Replace the following:

  * POOL_NAME: the name of the new node pool.
  * CLUSTER_NAME: the name of the existing cluster.
  * MACHINE_TYPE: the machine type to use.
  * NUMBER_OF_DISKS: the number of Local SSD disks to attach to each node. The maximum number varies by machine type and region.

3rd or 4th Generation

If you use a machine type from a third or fourth generation machine series, use the --local-nvme-ssd-block option, without a count field, to create a node pool:

gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --node-version NODE_VERSION \
    --local-nvme-ssd-block

Replace the following:

  * POOL_NAME: the name of the new node pool.
  * CLUSTER_NAME: the name of the existing cluster.
  * MACHINE_TYPE: the machine type to use.
  * NODE_VERSION: the GKE node version to use. The version must support Local SSD on the chosen machine type.

Nodes in the node pool are created with a cloud.google.com/gke-local-nvme-ssd=true label. You can verify the labels by running the following command:

kubectl describe node NODE_NAME
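
To list only the nodes that carry this label, you can also use a label selector, for example:

kubectl get nodes -l cloud.google.com/gke-local-nvme-ssd=true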

For each Local SSD attached to the node pool, the host OS creates a symbolic link (symlink) to access the disk under an ordinal folder, and a symlink with a universally unique identifier (UUID). For example, if you create a node pool with three local SSDs using the --local-nvme-ssd-block option, the host OS creates the following symlinks for the disks:

Correspondingly, the host OS also creates the following symlinks with UUIDs for the disks:

This ensures that the disks can be accessed using a unique identifier.
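
Both sets of symlinks live under the standard Linux udev directories, so one way to inspect them on a node (for example, after connecting over SSH) is to list those directories:

ls -l /dev/disk/by-id/ /dev/disk/by-uuid/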

Access Local SSD volumes

The following example shows how you can access Local SSD-backed raw block storage.

Local PersistentVolumes

Local SSD volumes can be mounted into Pods by using PersistentVolumes.

You can create PersistentVolumes from Local SSD by manually creating a PersistentVolume, or by running the local volume static provisioner.

Limitations of local PersistentVolumes

Note: When using release channels, auto-upgrade and auto-repair cannot be disabled. We do not recommend using local PersistentVolumes with clusters in a release channel.

Caution: Clusters and node pools created with the Google Cloud console have node auto-upgrade enabled by default. To disable, refer to node auto-upgrades.

Manually create the PersistentVolume

You can manually create a PersistentVolume for each Local SSD on each node in your cluster.

Use the nodeAffinity field in a PersistentVolume object to reference a Local SSD on a specific node. The following example shows the PersistentVolume specification for Local SSD on nodes running Linux:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/ssd0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/hostname"
          operator: "In"
          values:
          - "gke-test-cluster-default-pool-926ddf80-f166"

In this example, the Local SSD disks are manually configured for RAID and formatted, then mounted at /mnt/disks/ssd0 on node gke-test-cluster-default-pool-926ddf80-f166. The nodeAffinity field is used to help assign workloads to nodes with Local SSDs that are manually configured for RAID. If you only have one node in your cluster or if you've configured RAID for all nodes, the nodeAffinity field is not needed.
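
The RAID configuration and formatting happen outside of Kubernetes. As a rough sketch of what that preparation might look like on the node (assuming two NVMe Local SSD devices named /dev/nvme0n1 and /dev/nvme0n2; your device names can differ):

# Combine the Local SSD devices into a single RAID 0 array.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme0n2

# Format the array and mount it at the path the PersistentVolume references.
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/ssd0
sudo mount /dev/md0 /mnt/disks/ssd0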

The corresponding PersistentVolumeClaim specification looks like the following:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: ssd-local-claim
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: local-storage
    resources:
      requests:
        storage: 37Gi
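
To consume the claim, a Pod references it like any other PersistentVolumeClaim. The following is a minimal sketch (the Pod name, image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: ssd-local-pod
spec:
  containers:
  - name: app
    image: ubuntu:22.04
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: ssd-local-storage
  volumes:
  - name: ssd-local-storage
    persistentVolumeClaim:
      claimName: ssd-local-claim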

If you delete the PersistentVolume, you must manually erase the data from the disk.

Run the local volume static provisioner

Note: The local volume static provisioner is a community project and not maintained or supported by Google. It also cannot be used with Windows.

You can create PersistentVolumes for Local SSD automatically using the local volume static provisioner. The provisioner is a DaemonSet that manages the Local SSD disks on each node, creates and deletes the PersistentVolumes for them, and cleans up the data on the Local SSD disks when the PersistentVolume is released.

To run the local volume static provisioner:

  1. Use a DaemonSet to configure RAID and format the disks:

    1. Download the gke-daemonset-raid-disks.yaml specification.
    2. Deploy the RAID disks DaemonSet. The DaemonSet sets up a RAID 0 array across all Local SSD disks and formats the device with an ext4 filesystem.

      kubectl create -f gke-daemonset-raid-disks.yaml
      
  2. Download the gke-nvme-ssd-block-raid.yaml specification, and modify the specification's namespace fields as needed.

    The specification includes these resources:

  3. Deploy the provisioner:

    kubectl create -f gke-nvme-ssd-block-raid.yaml
    

    After the provisioner is running successfully, it creates a PersistentVolume object for the RAID Local SSD device in the cluster.
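
    You can confirm this by listing the PersistentVolumes in the cluster, for example:

    kubectl get pv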

  4. Save the following PersistentVolumeClaim manifest as provisioner-pvc-example.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: PVC_NAME
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      storageClassName: nvme-ssd-block
    

    Replace PVC_NAME with the name of your PersistentVolumeClaim.

  5. Create the PersistentVolumeClaim:

    kubectl create -f provisioner-pvc-example.yaml
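
    Depending on the volumeBindingMode of the StorageClass, the claim might remain in the Pending state until a Pod that uses it is scheduled. You can check its status with:

    kubectl get pvc PVC_NAME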
    
  6. Save the following Pod manifest as provisioner-pod-example.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: POD_NAME
    spec:
      containers:
      - name: "shell"
        image: "ubuntu:14.04"
        command: ["/bin/sh", "-c"]
        args: ["echo 'hello world' > /cache/test.txt && sleep 1 && cat /cache/test.txt && sleep 3600"]
        volumeMounts:
        - mountPath: /cache
          name: local-ssd-storage
      volumes:
      - name: local-ssd-storage
        persistentVolumeClaim:
          claimName: PVC_NAME
    

    Replace POD_NAME with the name of your Pod.

  7. Create the Pod:

    kubectl create -f provisioner-pod-example.yaml
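
    After the Pod is running, you can verify that the write to the Local SSD-backed volume succeeded, for example:

    kubectl exec POD_NAME -- cat /cache/test.txt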
    
Enable delayed volume binding

Note: If you followed the instructions to run the local volume static provisioner, its manifest provides an appropriate StorageClass definition, and you do not need to perform the steps in this section.

For improved scheduling, we recommend that you also create a StorageClass with volumeBindingMode: WaitForFirstConsumer. This delays PersistentVolumeClaim binding until Pod scheduling, so that a Local SSD is chosen from an appropriate node that can run the Pod. This enhanced scheduling behavior considers Pod CPU and memory requests, node affinity, Pod affinity and anti-affinity, and multiple PersistentVolumeClaim requests, along with which nodes have available local SSDs, when selecting a node for a runnable Pod.

This example uses delayed volume binding mode:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-nvme"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"

To create a StorageClass with delayed binding, save the YAML manifest to a local file and apply it to the cluster using the following command:

kubectl apply -f filename
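
A PersistentVolumeClaim that uses this StorageClass might look like the following sketch (the claim name and requested size are illustrative); the claim stays unbound until a Pod that references it is scheduled:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-nvme-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-nvme
  resources:
    requests:
      storage: 375Gi
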
Troubleshooting

For troubleshooting instructions, refer to Troubleshooting storage in GKE.

What's next
