Share disks between instances | Compute Engine Documentation

You can access the same disk from multiple virtual machine (VM) instances by attaching the disk to each instance. You can attach a disk in read-only mode or multi-writer mode to an instance.

With read-only mode, multiple instances can only read data from the disk. None of the instances can write to the disk. Sharing a disk in read-only mode between instances is less expensive than having copies of the same data on multiple disks.

With multi-writer mode, multiple instances can read and write to the same disk. This is useful for highly available (HA) shared file systems and databases such as SQL Server Failover Cluster Instances (FCI).

You can share a zonal disk only between instances in the same zone. Regional disks can be shared only with instances in the same zones as the disk's replicas.

There are no additional costs associated with sharing a disk between instances. Compute Engine instances don't have to use the same machine type to share a disk, but each instance must use a machine type that supports disk sharing.

This document discusses multi-writer and read-only disk sharing in Compute Engine, including the supported disk types and performance considerations.

Before you begin

Enable disk sharing

You can attach an existing Hyperdisk or Persistent Disk volume to multiple instances. However, for Hyperdisk volumes, you must first put the disk in multi-writer or read-only mode by setting its access mode.

A Hyperdisk volume's access mode is a property that determines how instances can access the disk.

The available access modes are as follows:

  1. Read-write single instance (READ_WRITE_SINGLE): the default. The disk can be attached in read-write mode to at most one instance.
  2. Read-only (READ_ONLY_MANY): multiple instances can attach the disk in read-only mode.
  3. Multi-writer (READ_WRITE_MANY): multiple instances can attach the disk in read-write mode.

Support for each access mode varies by Hyperdisk type, as stated in the following table. You can't set the access mode for Hyperdisk Throughput volumes.

Hyperdisk type | Supported access modes
Hyperdisk Balanced | Read-write single instance, read-only, multi-writer
Hyperdisk Balanced High Availability | Read-write single instance, multi-writer
Hyperdisk Extreme | Read-write single instance, multi-writer (Preview)
Hyperdisk ML | Read-write single instance, read-only
Hyperdisk Throughput | Read-write single instance only

For disks that can be shared between instances, you can set the access mode at or after disk creation. For instructions on setting the access mode, see set the disk's access mode.
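
For illustration, the following gcloud CLI sketch shows one way to set the access mode at creation time and change it later. The disk name, size, and zone are placeholder values, and it assumes the --access-mode flag on the gcloud compute disks create and update commands in your installed gcloud release:

    # Create a Hyperdisk ML volume with its access mode set to read-only,
    # so that multiple instances can later attach it for reading.
    gcloud compute disks create DISK_NAME \
        --type=hyperdisk-ml \
        --size=500GB \
        --zone=us-central1-a \
        --access-mode=READ_ONLY_MANY

    # Change the access mode of an existing, detached Hyperdisk Balanced
    # volume to multi-writer.
    gcloud compute disks update DISK_NAME \
        --zone=us-central1-a \
        --access-mode=READ_WRITE_MANY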

Read-only mode for Hyperdisk and Persistent Disk

This section discusses sharing a single disk in read-only mode between multiple instances.

Note: A disk's read-only setting applies to all instances that the disk is attached to. You can't attach a disk in read-write mode to one instance and attach the same disk in read-only mode to another instance.

Supported disk types for read-only mode

You can attach these disk types to multiple instances in read-only mode:

Performance in read-only mode

Attaching a disk in read-only mode to multiple instances doesn't affect the disk's performance. Each instance can still reach the maximum disk performance possible for the instance's machine type.

Limitations for sharing disks in read-only mode

How to share a disk in read-only mode between instances

If you're not using Hyperdisk ML, attach the disk to multiple instances by following the instructions in Attach a non-boot disk to an instance.

To attach a Hyperdisk ML volume in read-only mode to multiple instances, you must first set the disk's access mode to read-only mode. After you set the access mode, attach the Hyperdisk ML volume to your instances.
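
As a hedged example, that flow with the gcloud CLI could look like the following. The names and zone are placeholders, and the --access-mode flag is assumed to be available in your gcloud release:

    # Set the detached Hyperdisk ML volume to read-only mode.
    gcloud compute disks update DISK_NAME \
        --zone=us-central1-a \
        --access-mode=READ_ONLY_MANY

    # Attach the volume in read-only mode to each instance.
    gcloud compute instances attach-disk INSTANCE_NAME_1 \
        --disk=DISK_NAME --mode=ro --zone=us-central1-a
    gcloud compute instances attach-disk INSTANCE_NAME_2 \
        --disk=DISK_NAME --mode=ro --zone=us-central1-a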

Multi-writer mode for Hyperdisk

Disks in multi-writer mode are suitable for use cases like the following:

If your primary goal is shared file storage among compute instances, consider one of the following options:

Supported Hyperdisk and machine types for multi-writer mode

You can use Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme volumes (Preview) in multi-writer mode. You can attach a single Hyperdisk Balanced or Hyperdisk Balanced High Availability volume in multi-writer mode to at most 8 instances. You can attach a single Hyperdisk Extreme volume in multi-writer mode (Preview) to at most 16 instances. You can't attach volumes in multi-writer mode to bare metal instances.

Hyperdisk Balanced supports multi-writer mode for the following machine types:

Hyperdisk Balanced High Availability supports multi-writer mode for the following machine types:

Hyperdisk Extreme supports multi-writer mode (Preview) for the following machine types:

Supported file systems for multi-writer mode

Warning: If you use single-instance file systems, such as EXT4, XFS, or NTFS, on a disk in multi-writer mode, you might experience data loss if multiple VMs access the disk at the same time. To mitigate this issue, you can use clustering software that ensures exclusive access for a single VM at a time, such as SQL Server FCI using NTFS. Otherwise, avoid using single-instance file systems for shared storage.

To access a disk from multiple instances, use one of the following options:

Hyperdisk performance in multi-writer mode

When you attach a Hyperdisk Balanced or Hyperdisk Balanced High Availability disk in multi-writer mode to multiple instances, the disk's provisioned performance is divided evenly across all instances—even among instances that aren't running or that aren't actively using the disk. However, the maximum performance for each instance is ultimately limited by the throughput and IOPS limits of each instance's machine type.

For example, suppose you attach a Hyperdisk Balanced volume provisioned with 100,000 IOPS to 2 instances. Each instance gets 50,000 IOPS concurrently.

The following table shows how much performance each instance in this example would get depending on how many instances you attach the disk to. Each time you attach a disk to another instance, Compute Engine asynchronously adjusts the performance allotted to each previously attached instance.

# of instances attached | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Max IOPS per instance | 100,000 | 50,000 | ~33,333 | 25,000 | 20,000 | ~16,667 | ~14,285 | 12,500
Max throughput per instance (MiBps) | 1,200 | 600 | 400 | 300 | 240 | 200 | ~172 | 150

When you attach a Hyperdisk Extreme disk in multi-writer mode (Preview) to multiple instances, the disk's provisioned performance is allocated to each instance based on how much performance each instance requires. For example, a single instance could consume the entire provisioned performance of the disk if the other attached instances are idle.

Limitations for sharing Hyperdisk volumes in multi-writer mode

Available regions

You can enable multi-writer mode in all the regions where Hyperdisk Balanced, Hyperdisk Balanced High Availability, and Hyperdisk Extreme are available. For a list of supported regions, view the regional availability for your Hyperdisk volume:

I/O fencing with persistent reservations

Google recommends using persistent reservations (PR) with disks in multi-writer mode to provide I/O fencing. Persistent reservations manage access to the disk between instances, which prevents data corruption caused by instances simultaneously writing to the same portion of the disk.

Hyperdisk volumes in multi-writer mode support NVMe (spec 1.2.1) reservations.

Supported reservation modes

The following reservation modes are supported:

  1. Write Exclusive: there is a single reservation holder, which is the only writer. All other registrants and non-registrants have read-only access.
  2. Exclusive Access: there is a single reservation holder, which is the only reader and writer. All other registrants and non-registrants have no read or write access.
  3. Write Exclusive - Registrants Only: there is a single reservation holder. All registrants have read and write access to the disk; non-registrants have read-only access.
  4. Exclusive Access - Registrants Only: there is a single reservation holder. All registrants have read and write access to the disk; non-registrants have no read or write access.
  5. Write Exclusive - All Registrants: all registrants are reservation holders and have read and write access to the disk; non-registrants have read-only access.
  6. Exclusive Access - All Registrants: all registrants are reservation holders and have read and write access to the disk; non-registrants have no read or write access.

NVMe Get Features - Host Identifier is supported. The instance number is used as the default Host ID.
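
As an illustration only, on a Linux instance with the nvme-cli package installed, a registrant could register a key and acquire a Write Exclusive reservation roughly as follows. The device path and key are placeholders, and the exact flags depend on your nvme-cli version:

    # Register a reservation key for this host on the shared namespace.
    sudo nvme resv-register /dev/nvme0n2 --namespace-id=1 \
        --nrkey=0xABCD --rrega=0

    # Acquire a Write Exclusive reservation (rtype 1) with the registered key.
    sudo nvme resv-acquire /dev/nvme0n2 --namespace-id=1 \
        --crkey=0xABCD --rtype=1 --racqa=0

    # Inspect the current registration and reservation status.
    sudo nvme resv-report /dev/nvme0n2 --namespace-id=1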

The following NVMe reservation features are not supported:

Supported commands

NVMe reservations support the following commands:

NVMe reservations don't support the following commands:

Before you attach a disk in multi-writer mode to multiple instances, you must set the disk's access mode to multi-writer. You can set the access mode for a disk when you create it.

You can also set the access mode for an existing disk, but you must first detach the disk from all instances.

To create and use a new disk in multi-writer mode, follow these steps:

  1. Create the disk, setting its access mode to multi-writer. For instructions, see Add a Hyperdisk to your instance.
  2. Attach the disk to each instance.
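
For illustration, a minimal gcloud CLI sketch of these two steps, with placeholder names, size, and zone, and assuming the --access-mode flag in your gcloud release:

    # Create a Hyperdisk Balanced volume with its access mode set to
    # multi-writer.
    gcloud compute disks create DISK_NAME \
        --type=hyperdisk-balanced \
        --size=100GB \
        --zone=us-central1-a \
        --access-mode=READ_WRITE_MANY

    # Attach the disk to each instance that needs read-write access.
    gcloud compute instances attach-disk INSTANCE_NAME_1 \
        --disk=DISK_NAME --zone=us-central1-a
    gcloud compute instances attach-disk INSTANCE_NAME_2 \
        --disk=DISK_NAME --zone=us-central1-a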

To use an existing disk in multi-writer mode, follow these steps:

  1. Detach the disk from all instances.
  2. Set the disk's access mode to multi-writer.
  3. Attach the disk to each instance.
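
The corresponding sketch for an existing disk, under the same assumptions, detaches the disk and updates its access mode before re-attaching it as shown in the previous example:

    # Detach the disk from every instance it's currently attached to.
    gcloud compute instances detach-disk INSTANCE_NAME \
        --disk=DISK_NAME --zone=us-central1-a

    # Switch the disk's access mode to multi-writer, then re-attach it.
    gcloud compute disks update DISK_NAME \
        --zone=us-central1-a \
        --access-mode=READ_WRITE_MANY
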
Multi-writer mode for Persistent Disk volumes

Caution: Google recommends that you use Hyperdisk Balanced volumes in multi-writer mode instead of SSD Persistent Disk volumes.

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

You can attach an SSD Persistent Disk volume in multi-writer mode to up to two N2 virtual machine (VM) instances simultaneously so that both VMs can read and write to the disk.

If you have more than 2 N2 VMs or you're using any other machine series, you can use one of the following options:

To enable multi-writer mode for new Persistent Disk volumes, create a new Persistent Disk volume and specify the --multi-writer flag in the gcloud CLI or the multiWriter property in the Compute Engine API.

Persistent Disk volumes in multi-writer mode provide shared block storage and a foundation for building distributed storage systems and similar highly available services. When using Persistent Disk volumes in multi-writer mode, use a scale-out storage software system that can coordinate access to Persistent Disk devices across multiple VMs. Examples of these storage systems include Lustre and IBM Spectrum Scale. Most single-VM file systems, such as EXT4, XFS, and NTFS, are not designed to be used with shared block storage.

For more information, see Best practices in this document. If you require fully managed file storage, you can mount a Filestore file share on your Compute Engine instances.

Persistent Disk volumes in multi-writer mode support a subset of SCSI-3 Persistent Reservations (SCSI PR) commands. High-availability applications can use these commands for I/O fencing and failover configurations.

The following SCSI PR commands are supported:

For instructions, see Share an SSD Persistent Disk volume in multi-writer mode between VMs.
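
As an illustration only, on Linux the sg3_utils package provides the sg_persist tool for issuing SCSI PR commands. The device path and keys below are placeholders, and the commands your cluster actually needs depend on your clustering software:

    # Register a reservation key for this host.
    sudo sg_persist --out --register --param-sark=0x1 /dev/sdb

    # Acquire a Write Exclusive (type 1) reservation with the registered key.
    sudo sg_persist --out --reserve --param-rk=0x1 --prout-type=1 /dev/sdb

    # Read back the registered keys and the current reservation.
    sudo sg_persist --in --read-keys /dev/sdb
    sudo sg_persist --in --read-reservation /dev/sdb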

Supported Persistent Disk types for multi-writer mode

You can simultaneously attach SSD Persistent Disk in multi-writer mode to up to 2 N2 VMs.

Best practices for multi-writer mode

Persistent Disk performance in multi-writer mode

Persistent Disk volumes created in multi-writer mode have specific IOPS and throughput limits.

Zonal SSD persistent disk in multi-writer mode

Maximum sustained IOPS
Read IOPS per GB | 30
Write IOPS per GB | 30
Read IOPS per instance | 15,000–100,000*
Write IOPS per instance | 15,000–100,000*

Maximum sustained throughput (MB/s)
Read throughput per GB | 0.48
Write throughput per GB | 0.48
Read throughput per instance | 240–1,200*
Write throughput per instance | 240–1,200*

Attaching a multi-writer disk to multiple virtual machine instances does not affect aggregate performance or cost. Each machine gets a share of the per-disk performance limit.

To learn how to share persistent disks between multiple VMs, see Share persistent disks between VMs.

Restrictions for sharing a disk in multi-writer mode

Share an SSD Persistent Disk volume in multi-writer mode between VMs

Preview

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

Caution: Google recommends you use Hyperdisk Balanced volumes in multi-writer mode.

You can share an SSD Persistent Disk volume in multi-writer mode between N2 VMs in the same zone. See Persistent Disk multi-writer mode for details about how this mode works. You can create and attach multi-writer Persistent Disk volumes using the following process:

gcloud

Create and attach a zonal Persistent Disk volume by using the gcloud CLI:

  1. Use the gcloud beta compute disks create command to create a zonal Persistent Disk volume. Include the --multi-writer flag to indicate that the disk must be shareable between the VMs in multi-writer mode.

    gcloud beta compute disks create DISK_NAME \
       --size DISK_SIZE \
       --type pd-ssd \
       --multi-writer
    

    Replace the following:

    DISK_NAME: the name of the new disk.
    DISK_SIZE: the size, in gigabytes, of the new disk.

  2. After you create the disk, attach it to any running or stopped VM with an N2 machine type. Use the gcloud compute instances attach-disk command:

    gcloud compute instances attach-disk INSTANCE_NAME \
       --disk DISK_NAME
    

    Replace the following:

    INSTANCE_NAME: the name of the VM that you're attaching the disk to.
    DISK_NAME: the name of the disk that you created.

  3. Repeat the gcloud compute instances attach-disk command but replace INSTANCE_NAME with the name of your second VM.

After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems don't support shared block storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk. You can't mount the disk to multiple VMs using the same process you would normally use to mount a disk to a single VM.

REST

Use the Compute Engine API to create and attach an SSD Persistent Disk volume to N2 VMs in multi-writer mode.

  1. In the API, construct a POST request to create a zonal Persistent Disk volume using the disks.insert method. Include the name, sizeGb, and type properties. To create this new disk as an empty and unformatted non-boot disk, don't specify a source image or a source snapshot for this disk. Include the multiWriter property with a value of true to indicate that the disk must be shareable between the VMs in multi-writer mode.

    POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks
    
    {
      "name": "DISK_NAME",
      "sizeGb": "DISK_SIZE",
      "type": "zones/ZONE/diskTypes/pd-ssd",
      "multiWriter": true
    }
    

    Replace the following:

    PROJECT_ID: your project ID.
    ZONE: the zone where your disk and VM are located.
    DISK_NAME: the name of the new disk.
    DISK_SIZE: the size, in gigabytes, of the new disk.

  2. To attach the disk to an instance, construct a POST request to the compute.instances.attachDisk method. Include the URL to the zonal Persistent Disk volume that you just created:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk
    
    {
      "source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"
    }
    

    Replace the following:

    PROJECT_ID: your project ID.
    ZONE: the zone where your disk and VM are located.
    INSTANCE_NAME: the name of the VM that you're attaching the disk to.
    DISK_NAME: the name of the disk that you created.

  3. To attach the disk to a second VM, repeat the instances.attachDisk command from the previous step. Set the INSTANCE_NAME to the name of the second VM.

After you create and attach a new disk to an instance, format and mount the disk using a shared-disk file system. Most file systems don't support shared block storage. Confirm that your file system supports these capabilities before you use it with multi-writer Persistent Disk.
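
To send these requests from a shell, one option is curl with an access token from the gcloud CLI. This is a sketch with the same placeholder values, assuming gcloud is installed and authenticated:

    # Create the multi-writer disk.
    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"name": "DISK_NAME", "sizeGb": "DISK_SIZE", "type": "zones/ZONE/diskTypes/pd-ssd", "multiWriter": true}' \
        "https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/zones/ZONE/disks"

    # Attach the disk to an instance.
    curl -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"source": "/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME"}' \
        "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/attachDisk"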

What's next

