Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
Azure shared disks is a feature for Azure managed disks that allows you to attach a managed disk to multiple virtual machines (VMs) simultaneously. Attaching a managed disk to multiple VMs allows you to either deploy new clustered applications or migrate existing ones to Azure.
Shared disks require a cluster manager, such as Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communication and write locking. Shared managed disks don't natively offer a fully managed file system that can be accessed using SMB/NFS.
How it works
VMs in the cluster can read or write to their attached disk based on the reservation chosen by the clustered application using SCSI Persistent Reservations (SCSI PR). SCSI PR is an industry standard used by applications running on Storage Area Networks (SANs) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as-is.
Shared managed disks offer shared block storage that can be accessed from multiple VMs. The disks are exposed as logical unit numbers (LUNs), which are then presented to an initiator (VM) from a target (disk). These LUNs look like direct-attached storage (DAS) or a local drive to the VM.
Limitations
General limitations
Shared disks have general limitations that apply to all shared disks, regardless of disk type, as well as further limitations that only apply to specific types of shared disks. The general limitations are as follows:
Each managed disk that has shared disks enabled is also subject to the following limitations, organized by disk type:
Ultra disks
Ultra disks have their own separate list of limitations, unrelated to shared disks. For ultra disk limitations, see Using Azure ultra disks.
When sharing ultra disks, they have the following additional limitations:
Premium SSD v2
Premium SSD v2 managed disks have their own separate list of limitations, unrelated to shared disks. For these limitations, see Premium SSD v2 limitations.
When sharing Premium SSD v2 disks, they have the following additional limitation:
Operating system support
Shared disks support several operating systems. See the Windows or Linux sections for the supported operating systems.
Billing implications
When you share a disk, your billing could be impacted in two different ways, depending on the type of disk.
For shared premium SSD disks, in addition to the cost of the disk's tier, there's an extra charge that increases with each VM the SSD is mounted to. See managed disks pricing for details.
Both shared ultra disks and shared premium SSD v2 disks don't have an extra charge for each VM that they're mounted to. They're billed on the total IOPS and MB/s that the disk is configured for. Normally, ultra disks and premium SSD v2 disks have two performance throttles that determine their total IOPS/MB/s. However, when configured as a shared disk, two more performance throttles are exposed, for a total of four. These two additional throttles allow for increased performance at an extra expense, and each meter has a default value, which raises the performance and cost of the disk.
The four performance throttles a shared ultra disk or shared premium SSD v2 disk has are diskIOPSReadWrite, diskMB/sReadWrite, diskIOPSReadOnly, and diskMB/sReadOnly. Each performance throttle can be configured to change the performance of your disk. The performance for shared ultra disks and shared premium SSD v2 disks is calculated as follows: total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly) and total provisioned throughput MB/s (diskMB/sReadWrite + diskMB/sReadOnly).
Once you've determined your total provisioned IOPS and total provisioned throughput, you can use them in the pricing calculator to determine the cost of a shared ultra disk or a shared premium SSD v2 disk.
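For illustration, the small Python sketch below computes those two totals for a hypothetical throttle configuration; the values are examples taken from the pricing scenario later in this article, not defaults.

```python
# Hypothetical throttle values for a shared ultra disk or Premium SSD v2 disk.
disk_iops_read_write = 10000   # IOPS shared by all VMs with write access
disk_iops_read_only = 100      # IOPS shared by all ReadOnly mounts
disk_mbps_read_write = 600     # MB/s shared by all VMs with write access
disk_mbps_read_only = 1        # MB/s shared by all ReadOnly mounts

# Billing is based on these totals, regardless of how many VMs mount the disk.
total_provisioned_iops = disk_iops_read_write + disk_iops_read_only              # 10100
total_provisioned_throughput = disk_mbps_read_write + disk_mbps_read_only        # 601

print(f"Total provisioned IOPS: {total_provisioned_iops}")
print(f"Total provisioned throughput: {total_provisioned_throughput} MB/s")
```

These totals are the numbers you would enter into the pricing calculator.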
Disk sizes
For now, only ultra disks, premium SSD v2, premium SSD, and standard SSDs can enable shared disks. Different disk sizes may have a different maxShares limit, which you can't exceed when setting the maxShares value.
For each disk, you can define a maxShares value that represents the maximum number of nodes that can simultaneously share the disk. For example, if you plan to set up a 2-node failover cluster, you would set maxShares=2. The maximum value is an upper bound. Nodes can join or leave the cluster (mount or unmount the disk) as long as the number of nodes is lower than the specified maxShares value.
Note
The maxShares value can only be set or edited when the disk is detached from all nodes.
The following table illustrates the allowed maximum values for maxShares by premium SSD sizes:
The IOPS and bandwidth limits for a disk aren't affected by the maxShares value. For example, the max IOPS of a P15 disk is 1100 whether maxShares = 1 or maxShares > 1.
The following table illustrates the allowed maximum values for maxShares by standard SSD sizes:
The IOPS and bandwidth limits for a disk aren't affected by the maxShares value. For example, the max IOPS of an E15 disk is 500 whether maxShares = 1 or maxShares > 1.
The minimum maxShares value is 1, and the maximum maxShares value is 15. There are no size restrictions on ultra disks; any size ultra disk can use any value for maxShares, up to and including the maximum value.
The minimum maxShares value is 1, and the maximum maxShares value is 15. There are no size restrictions on Premium SSD v2; any size Premium SSD v2 disk can use any value for maxShares, up to and including the maximum value.
Windows
Azure shared disks are supported on Windows Server 2008 and newer. Most Windows-based clustering builds on WSFC, which handles all core infrastructure for cluster node communication, allowing your applications to take advantage of parallel access patterns. WSFC enables both CSV and non-CSV-based options depending on your version of Windows Server. For details, refer to Create a failover cluster.
Some popular applications running on WSFC include:
Linux
Azure shared disks are supported on:
Linux clusters can use cluster managers such as Pacemaker. Pacemaker builds on Corosync, enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include ocfs2 and gfs2. You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as fence_scsi and sg_persist.
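To make the SCSI PR mechanics concrete, the sketch below drives sg_persist (from sg3_utils) with Python's subprocess module to register a key and take a "Write Exclusive - Registrants Only" reservation. The device path and reservation key are hypothetical, and in practice a cluster manager such as Pacemaker (for example, via the fence_scsi agent) handles this rather than hand-written code.

```python
import subprocess

# Hypothetical values: the shared data disk as seen by this node, and this node's PR key.
DEVICE = "/dev/sdc"
NODE_KEY = "0x123abc"

def sg_persist(*args: str) -> str:
    """Run sg_persist against the shared disk and return its output."""
    result = subprocess.run(
        ["sg_persist", *args, DEVICE],
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout

# 1. Register this node's reservation key with the disk.
sg_persist("--out", "--register", f"--param-sark={NODE_KEY}")

# 2. Take a "Write Exclusive - Registrants Only" reservation (PR type 5),
#    so only registered nodes may write to the disk.
sg_persist("--out", "--reserve", f"--param-rk={NODE_KEY}", "--prout-type=5")

# 3. Inspect the current registrations and the active reservation.
print(sg_persist("--in", "--read-keys"))
print(sg_persist("--in", "--read-reservation"))
```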
Persistent reservation flow
The following diagram illustrates a sample 2-node clustered database application that uses SCSI PR to enable failover from one node to the other.
The flow is as follows:
The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from the disk for running parallel processes, such as training of machine learning models.
The flow is as follows:
Both Ultra disks and Premium SSD v2 managed disks offer two extra throttles, giving each of them a total of four throttles. Due to this, the reservation flow can work as described in the earlier section, or it can throttle and distribute performance more granularly.
Performance throttles
Premium SSD performance throttles
With premium SSD, the disk IOPS and throughput are fixed; for example, the IOPS of a P30 is 5000. This value remains the same whether the disk is shared across 2 VMs or 5 VMs. The disk limits can be reached from a single VM or divided across two or more VMs.
Ultra Disk and Premium SSD v2 performance throttles
Both Ultra Disks and Premium SSD v2 managed disks let you set your own performance by exposing modifiable performance attributes. By default, there are only two modifiable attributes, but shared Ultra Disks and shared Premium SSD v2 managed disks have two more. Ultra Disks and Premium SSD v2 disks split these attributes across each attached VM. For some examples of how this distribution of capacity, IOPS, and throughput works, see the Examples section.
DiskIOPSReadWrite (Read/write disk IOPS): The total number of IOPS allowed across all VMs mounting the shared disk with write access.
DiskMB/sReadWrite (Read/write disk throughput): The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access.
DiskIOPSReadOnly* (Read-only disk IOPS): The total number of IOPS allowed across all VMs mounting the shared disk as ReadOnly.
DiskMB/sReadOnly* (Read-only disk throughput): The total throughput (MB/s) allowed across all VMs mounting the shared disk as ReadOnly.
* Applies to shared Ultra Disks and shared Premium SSD v2 managed disks only
The following formulas explain how the performance attributes can be set, since they're user modifiable:
The following examples depict a few scenarios that show how the throttling can work with shared ultra disks, specifically.
The following is an example of a 2-node WSFC using clustered shared volumes. With this configuration, both VMs have simultaneous write-access to the disk, which results in the ReadWrite throttle being split across the two VMs and the ReadOnly throttle not being used.
The following is an example of a 2-node WSFC that isn't using clustered shared volumes. With this configuration, only one VM has write-access to the disk. This results in the ReadWrite throttle being used exclusively for the primary VM and the ReadOnly throttle only being used by the secondary.
Four node Linux cluster
The following is an example of a 4-node Linux cluster with a single writer and three scale-out readers. With this configuration, only one VM has write-access to the disk. This results in the ReadWrite throttle being used exclusively for the primary VM and the ReadOnly throttle being split by the secondary VMs.
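As a rough sketch of how these splits play out, the helper below divides the ReadWrite throttle among the writer VMs and the ReadOnly throttle among the reader VMs. The throttle values are placeholders, the even split is an assumption for illustration, and in reality the platform distributes the throttles; this is not something user code configures per VM.

```python
def split_throttles(read_write_iops: int, read_only_iops: int,
                    writers: int, readers: int) -> dict:
    """Evenly divide the shared-disk throttles among writer and reader VMs."""
    per_writer = read_write_iops // writers if writers else 0
    per_reader = read_only_iops // readers if readers else 0
    return {"per_writer_iops": per_writer, "per_reader_iops": per_reader}

# 2-node WSFC with CSV: both nodes write, so the ReadOnly throttle goes unused.
print(split_throttles(read_write_iops=10000, read_only_iops=0, writers=2, readers=0))
# -> {'per_writer_iops': 5000, 'per_reader_iops': 0}

# 4-node Linux cluster: one writer gets the whole ReadWrite throttle,
# three scale-out readers share the ReadOnly throttle.
print(split_throttles(read_write_iops=10000, read_only_iops=3000, writers=1, readers=3))
# -> {'per_writer_iops': 10000, 'per_reader_iops': 1000}
```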
Both shared Ultra Disks and shared Premium SSD v2 managed disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly), and total provisioned throughput MB/s (diskMB/sReadWrite + diskMB/sReadOnly). There's no extra charge for each additional VM mount. For example, a shared Ultra Disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMB/sReadWrite: 600, DiskIOPSReadOnly: 100, DiskMB/sReadOnly: 1) is billed for 1024 GiB, 10100 IOPS, and 601 MB/s regardless of whether it's mounted to two VMs or five VMs.
Next steps
If you're interested in enabling and using shared disks for your managed disks, proceed to our article Enable shared disk.
If you have additional questions, see the shared disks section of the FAQ.