Local SSD performance limits provided in the Choose a storage option section were achieved by using specific settings on the Local SSD instance. If your virtual machine (VM) instance is having trouble reaching these performance limits and you have already configured the instance by using the recommended Local SSD settings, you can compare your measured performance against the published limits by replicating the settings used by the Compute Engine team.
Warning: The script used in this section is intended for benchmarking and performance comparisons only and is not intended to optimize your disk for performance. We strongly recommend against running this script on a VM with a Local SSD where you want to keep the data, because the script discards any data on your Local SSD.
These instructions assume that you are using a Linux operating system with the apt package manager installed.
If your Local SSD devices use the SCSI interface instead of NVMe, replace /dev/disk/by-id/google-local-nvme-ssd-0 with /dev/disk/by-id/google-local-ssd-0 in the following commands.
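If you aren't sure which interface your Local SSD devices use, you can list the device symlinks from inside the VM; NVMe devices appear as google-local-nvme-ssd-* and SCSI devices appear as google-local-ssd-*:
ls -l /dev/disk/by-id/ | grep google-local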
Create a VM with one Local SSD device
The number of Local SSD disks that a VM can have depends on the machine type that you use to create the VM. For details, see Choosing a valid number of Local SSDs.
Create a Local SSD instance that has four or eight vCPUs for each device, depending on your workload.
For example, the following command creates a C3 VM with 4 vCPUs and 1 Local SSD.
gcloud compute instances create c3-ssd-test-instance \
--machine-type "c3-standard-4-lssd"
For second-generation and earlier machine types, you specify the number of Local SSD disks to attach to the VM by using the --local-ssd flag. The following command creates an N2 VM with 8 vCPUs and 1 Local SSD that uses the NVMe disk interface:
gcloud compute instances create ssd-test-instance \
--machine-type "n2-standard-8" \
--local-ssd interface=nvme
Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs
parameter defines the block size, which affects the results for different types of read and write operations.
# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux
# discard Local SSD sectors
sudo blkdiscard /dev/disk/by-id/google-local-nvme-ssd-0
# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile \
--filename=/dev/disk/by-id/google-local-nvme-ssd-0 --bs=1M --nrfiles=1 \
--direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
--iodepth=128 --ioengine=libaio
# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=readbenchmark --runtime=30 --ioengine=libaio \
--filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
--iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
--numjobs=4 --rw=randread --blocksize=4k --group_reporting
# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=writebenchmark --runtime=30 --ioengine=libaio \
--filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
--iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
--numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
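The published figures cover write bandwidth and random read/write IOPS. If you also want to see how a larger block size affects sequential read throughput, you could append a pass like the following; this is not part of the published benchmark settings, just a sketch that reuses the same device path:
# sequential read - measures read bandwidth with 1M blocksize
# (additional pass, not part of the published settings)
sudo fio --time_based --name=readbandwidth --runtime=30 --ioengine=libaio \
--filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
--iodepth=64 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
--numjobs=4 --rw=read --blocksize=1M --group_reporting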
Attach Local SSD to VM
If you want to attach 24 or more Local SSD devices to an instance, use a machine type with 32 or more vCPUs.
The following command creates a VM with the maximum allowed number of Local SSD disks using the NVMe interface:
gcloud compute instances create ssd-test-instance \
--machine-type "n1-standard-32" \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme \
--local-ssd interface=nvme
Use -lssd machine types
Newer machine series offer -lssd machine types that come with a predetermined number of Local SSD disks. For example, to benchmark a VM with 32 Local SSD disks (12 TiB of Local SSD capacity), use the following command:
gcloud compute instances create ssd-test-instance \
--machine-type "c3-standard-176-lssd"
Install the mdadm
tool. The install process for mdadm
includes a user prompt that halts scripts, so run the process manually:
Debian and Ubuntu
sudo apt update && sudo apt install mdadm --no-install-recommends
CentOS and RHEL
sudo yum install mdadm -y
SLES and openSUSE
sudo zypper install -y mdadm
Use the find
command to identify all of the Local SSDs that you want to mount together:
find /dev/ | grep google-local-nvme-ssd
The output looks similar to the following:
/dev/disk/by-id/google-local-nvme-ssd-23
/dev/disk/by-id/google-local-nvme-ssd-22
/dev/disk/by-id/google-local-nvme-ssd-21
/dev/disk/by-id/google-local-nvme-ssd-20
/dev/disk/by-id/google-local-nvme-ssd-19
/dev/disk/by-id/google-local-nvme-ssd-18
/dev/disk/by-id/google-local-nvme-ssd-17
/dev/disk/by-id/google-local-nvme-ssd-16
/dev/disk/by-id/google-local-nvme-ssd-15
/dev/disk/by-id/google-local-nvme-ssd-14
/dev/disk/by-id/google-local-nvme-ssd-13
/dev/disk/by-id/google-local-nvme-ssd-12
/dev/disk/by-id/google-local-nvme-ssd-11
/dev/disk/by-id/google-local-nvme-ssd-10
/dev/disk/by-id/google-local-nvme-ssd-9
/dev/disk/by-id/google-local-nvme-ssd-8
/dev/disk/by-id/google-local-nvme-ssd-7
/dev/disk/by-id/google-local-nvme-ssd-6
/dev/disk/by-id/google-local-nvme-ssd-5
/dev/disk/by-id/google-local-nvme-ssd-4
/dev/disk/by-id/google-local-nvme-ssd-3
/dev/disk/by-id/google-local-nvme-ssd-2
/dev/disk/by-id/google-local-nvme-ssd-1
/dev/disk/by-id/google-local-nvme-ssd-0
The find command does not guarantee an ordering. It's fine if the devices are listed in a different order, as long as the number of output lines matches the expected number of SSD partitions.
If using SCSI devices, use the following find
command:
find /dev/ | grep google-local-ssd
NVMe devices should all be of the form google-local-nvme-ssd-# and SCSI devices should all be of the form google-local-ssd-#.
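To verify the count without reading the list by eye, append wc -l to the same command; the output should match the number of Local SSD devices you attached (24 in this example):
find /dev/ | grep google-local-nvme-ssd | wc -l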
Use the mdadm tool to combine multiple Local SSD devices into a single array named /dev/md0. The following example merges twenty-four Local SSD devices that use the NVMe interface. For Local SSD devices that use SCSI, use the device names returned from the find command in step 3.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 \
/dev/disk/by-id/google-local-nvme-ssd-0 \
/dev/disk/by-id/google-local-nvme-ssd-1 \
/dev/disk/by-id/google-local-nvme-ssd-2 \
/dev/disk/by-id/google-local-nvme-ssd-3 \
/dev/disk/by-id/google-local-nvme-ssd-4 \
/dev/disk/by-id/google-local-nvme-ssd-5 \
/dev/disk/by-id/google-local-nvme-ssd-6 \
/dev/disk/by-id/google-local-nvme-ssd-7 \
/dev/disk/by-id/google-local-nvme-ssd-8 \
/dev/disk/by-id/google-local-nvme-ssd-9 \
/dev/disk/by-id/google-local-nvme-ssd-10 \
/dev/disk/by-id/google-local-nvme-ssd-11 \
/dev/disk/by-id/google-local-nvme-ssd-12 \
/dev/disk/by-id/google-local-nvme-ssd-13 \
/dev/disk/by-id/google-local-nvme-ssd-14 \
/dev/disk/by-id/google-local-nvme-ssd-15 \
/dev/disk/by-id/google-local-nvme-ssd-16 \
/dev/disk/by-id/google-local-nvme-ssd-17 \
/dev/disk/by-id/google-local-nvme-ssd-18 \
/dev/disk/by-id/google-local-nvme-ssd-19 \
/dev/disk/by-id/google-local-nvme-ssd-20 \
/dev/disk/by-id/google-local-nvme-ssd-21 \
/dev/disk/by-id/google-local-nvme-ssd-22 \
/dev/disk/by-id/google-local-nvme-ssd-23
The response is similar to the following:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
You can confirm the details of the array with mdadm --detail. Adding the --prefer=by-id flag lists the devices using the /dev/disk/by-id paths.
sudo mdadm --detail --prefer=by-id /dev/md0
The output should look similar to the following for each device in the array.
...
Number Major Minor RaidDevice State
0 259 0 0 active sync /dev/disk/by-id/google-local-nvme-ssd-0
...
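The --detail output also reports the array's chunk size, which determines how I/O is striped across the devices. If you want to record it alongside your benchmark results, you can filter for that line directly:
sudo mdadm --detail /dev/md0 | grep "Chunk Size"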
Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.
# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux
# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile \
--filename=/dev/md0 --bs=1M --nrfiles=1 \
--direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
--iodepth=128 --ioengine=libaio
# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=benchmark --runtime=30 \
--filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
--iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
--numjobs=48 --rw=randread --blocksize=4k --group_reporting --norandommap
# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=benchmark --runtime=30 \
--filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
--iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
--numjobs=48 --rw=randwrite --blocksize=4k --group_reporting --norandommap
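When you finish benchmarking, you can optionally stop the RAID 0 array. This step is not required for the benchmarks above, and the data on the underlying devices remains discardable scratch data:
sudo mdadm --stop /dev/md0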
Storage-optimized VMs (like the Z3 family) should be benchmarked directly against the device partitions. You can get the partition names by using lsblk:
lsblk -o name,size -lpn | grep 2.9T | awk '{print $1}'
The output looks similar to the following:
/dev/nvme1n1
/dev/nvme2n1
/dev/nvme3n1
/dev/nvme4n1
/dev/nvme5n1
/dev/nvme6n1
/dev/nvme7n1
/dev/nvme8n1
/dev/nvme9n1
/dev/nvme10n1
/dev/nvme11n1
/dev/nvme12n1
Run the benchmarks directly against the Local SSD partitions, specifying each partition as a separate job in the fio command.
# install benchmarking tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux
# Full Write Pass.
# SOVM achieves max read performance on previously written/discarded ranges.
sudo fio --readwrite=write --blocksize=1m --iodepth=4 --ioengine=libaio \
--direct=1 --group_reporting \
--name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
--name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
--name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
--name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
--name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
--name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1
# rand read - measures max read IOPS with 4k blocks
sudo fio --readwrite=randread --blocksize=4k --iodepth=128 \
--numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
--name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
--name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
--name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
--name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
--name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
--name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1
# rand write - measures max write IOPS with 4k blocks
sudo fio --readwrite=randwrite --blocksize=4k --iodepth=128 \
--numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
--name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
--name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
--name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
--name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
--name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
--name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1