This document explains how to create a VM that uses a machine type from the A3 High, A3 Mega, A3 Edge, A2, and G2 machine series. To learn more about creating VMs with attached GPUs, see Overview of creating an instance with attached GPUs.
Tip: When provisioning A3 Ultra machine types, you must reserve capacity, use Spot VMs, or create a resize request in a MIG to create instances or clusters. For more information about the parameters to set when creating an A3 Ultra instance, see Create an A3 Ultra or A4 instance.

Before you begin

Select the tab for how you plan to use the samples on this page:
Console

When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

gcloud

Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

gcloud init

If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.

REST

To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

gcloud init

If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions

The following permissions are required to create VMs:

- compute.instances.create on the project
- compute.images.useReadOnly on the image
- compute.snapshots.useReadOnly on the snapshot
- compute.instanceTemplates.useReadOnly on the instance template
- compute.networks.use on the project
- compute.addresses.use on the project
- compute.networks.useExternalIp on the project
- compute.subnetworks.use on the project or on the chosen subnet
- compute.subnetworks.useExternalIp on the project or on the chosen subnet
- compute.instances.setMetadata on the project
- compute.instances.setTags on the VM
- compute.instances.setLabels on the VM
- compute.instances.setServiceAccount on the VM
- compute.disks.create on the project
- compute.disks.use on the disk
- compute.disks.useReadOnly on the disk

You might also be able to get these permissions with custom roles or other predefined roles.
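As a quick sanity check outside of the console, the required-permission list above can be compared against a role's granted permissions with plain set operations. The following Python sketch is a hypothetical illustration; in practice the granted set would come from the IAM API rather than being hard-coded:

```python
# Sketch: verify that a role's permission set covers the permissions
# required to create VMs. The required list is copied from this section;
# the granted set below is a hypothetical example, not fetched from IAM.
REQUIRED = {
    "compute.instances.create",
    "compute.images.useReadOnly",
    "compute.snapshots.useReadOnly",
    "compute.instanceTemplates.useReadOnly",
    "compute.networks.use",
    "compute.addresses.use",
    "compute.networks.useExternalIp",
    "compute.subnetworks.use",
    "compute.subnetworks.useExternalIp",
    "compute.instances.setMetadata",
    "compute.instances.setTags",
    "compute.instances.setLabels",
    "compute.instances.setServiceAccount",
    "compute.disks.create",
    "compute.disks.use",
    "compute.disks.useReadOnly",
}

def missing_permissions(granted: set) -> set:
    """Return the required permissions that are not present in `granted`."""
    return REQUIRED - granted

print(missing_permissions(REQUIRED))                       # -> set()
print(missing_permissions({"compute.instances.create"}))   # everything else
```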
Create a VM that has attached GPUs

You can create an A3 High, A3 Mega, A3 Edge, A2, or G2 accelerator-optimized VM by using the Google Cloud console, Google Cloud CLI, or REST.

To make some customizations to your G2 VMs, you might need to use the Google Cloud CLI or REST. See G2 limitations.

Console

In the Google Cloud console, go to the Create an instance page.
Specify a Name for your VM. See Resource naming convention.
Select a region and zone where GPUs are available. See the list of available GPU regions and zones.
In the Machine configuration section, select the GPUs machine family.
Complete one of the following steps to select either a predefined or custom machine type based on the machine series:
For all GPU machine series, you can select a predefined machine type as follows:

- In the GPU type list, select your GPU type. For the A3 machine series, select NVIDIA H100 80GB or NVIDIA H100 80GB MEGA; for the A2 machine series, select NVIDIA A100 40GB or NVIDIA A100 80GB; for the G2 machine series, select NVIDIA L4.
- In the Number of GPUs list, select the number of GPUs.

Note: Each accelerator-optimized machine type has a fixed number of GPUs attached. If you adjust the number of GPUs, the machine type changes.

For the G2 machine series, you can select a custom machine type as follows:

- In the GPU type list, select NVIDIA L4.
- Optional: The G2 machine series supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads. If you plan on running graphics-intensive workloads on your G2 VM, select Enable Virtual Workstation (NVIDIA GRID).
In the Boot disk section, click Change. This opens the Boot disk configuration page.
On the Boot disk configuration page, do the following:
Optional: Configure the provisioning model. For example, if your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs.
To create and start the VM, click Create.
gcloud

To create and start a VM, use the gcloud compute instances create command with the following flags. VMs with GPUs can't live migrate; make sure that you set the --maintenance-policy=TERMINATE flag.

The following optional flags are shown in the sample command:

- The --provisioning-model=SPOT flag, which configures your VMs as Spot VMs. If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. For Spot VMs, the automatic restart and host maintenance options flags are disabled.
- The --accelerator flag to specify a virtual workstation. NVIDIA RTX Virtual Workstations (vWS) are supported for only G2 VMs.

gcloud compute instances create VM_NAME \
    --machine-type=MACHINE_TYPE \
    --zone=ZONE \
    --boot-disk-size=DISK_SIZE \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --maintenance-policy=TERMINATE \
    [--provisioning-model=SPOT] \
    [--accelerator=type=nvidia-l4-vws,count=VWS_ACCELERATOR_COUNT]

Replace the following:
- VM_NAME: the name for the new VM.
- MACHINE_TYPE: the machine type that you selected. For the G2 machine series, this can also be a custom machine type, for example --machine-type=g2-custom-4-19456.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- IMAGE: an operating system image that supports GPUs. If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs. For example: --image-family=rocky-linux-8-optimized-gcp.
- IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.

REST

Send a POST request to the instances.insert method. VMs with GPUs can't live migrate; make sure that you set the onHostMaintenance parameter to TERMINATE.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "SOURCE_IMAGE_URI"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "scheduling": {
    "onHostMaintenance": "terminate",
    ["automaticRestart": true]
  }
}

Replace the following:
- VM_NAME: the name for the new VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- MACHINE_TYPE: the machine type that you selected. For the G2 machine series, this can also be a custom machine type, for example g2-custom-4-19456.
- SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use. For example:
  "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
  "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- NETWORK: the VPC network that you want to use for the VM. You can specify `default` to use your default network.

Optional settings:

- To create Spot VMs, add the "provisioningModel": "SPOT" option to your request. For Spot VMs, the automatic restart and host maintenance options flags are disabled.

  "scheduling": {
    "provisioningModel": "SPOT"
  }

- To specify a virtual workstation (supported for only G2 VMs), add the guestAccelerators option and replace VWS_ACCELERATOR_COUNT with the number of virtual GPUs that you need.

  "guestAccelerators": [
    {
      "acceleratorCount": VWS_ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/nvidia-l4-vws"
    }
  ]
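If you generate this request body from a script rather than writing the JSON by hand, a small helper keeps the GPU-specific constraints in one place. The following Python sketch assembles the same body shown above; the project, zone, machine type, and image values are placeholders, and sending the request with an authenticated client is out of scope here:

```python
# Sketch: assemble the instances.insert request body shown above.
# All argument values in the usage example are hypothetical placeholders.
def build_instance_body(name, project, zone, machine_type, disk_size_gb,
                        source_image, network, spot=False, vws_count=0):
    body = {
        "name": name,
        "machineType": f"projects/{project}/zones/{zone}/machineTypes/{machine_type}",
        "disks": [{
            "type": "PERSISTENT",
            "boot": True,
            "initializeParams": {
                "diskSizeGb": str(disk_size_gb),
                "sourceImage": source_image,
            },
        }],
        "networkInterfaces": [
            {"network": f"projects/{project}/global/networks/{network}"},
        ],
        # VMs with GPUs can't live migrate, so host maintenance must
        # terminate the VM.
        "scheduling": {"onHostMaintenance": "terminate", "automaticRestart": True},
    }
    if spot:
        # For Spot VMs, the automatic restart and host maintenance
        # options are disabled.
        body["scheduling"] = {"provisioningModel": "SPOT"}
    if vws_count:
        # Optional NVIDIA RTX Virtual Workstation accelerators (G2 only).
        body["guestAccelerators"] = [{
            "acceleratorCount": vws_count,
            "acceleratorType": f"projects/{project}/zones/{zone}/acceleratorTypes/nvidia-l4-vws",
        }]
    return body

body = build_instance_body(
    "my-g2-vm", "my-project", "us-central1-a", "g2-standard-8", 200,
    "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp",
    "default", vws_count=1)
```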
For the VM to use the GPU, you need to install the GPU driver on your VM.
Examples

In these examples, most of the VMs are created by using the Google Cloud CLI. However, you can also use either the Google Cloud console or REST to create these VMs.

The following examples show how to create VMs using the following images:

- DLVM image. This example uses an A2 (a2-highgpu-1g) VM.
- Container-Optimized OS (COS) image. This example uses an a3-highgpu-8g or a3-edgegpu-8g VM.
- Public image. This example uses a G2 VM.

You can create either a3-edgegpu-8g or a3-highgpu-8g VMs that have attached H100 GPUs by using Container-Optimized OS (COS) images.

For detailed instructions on how to create these a3-edgegpu-8g or a3-highgpu-8g VMs that use Container-Optimized OS, see Create an A3 VM with GPUDirect-TCPX enabled.
You can create VMs that have attached GPUs that use either a public image that is available on Compute Engine or a custom image.
To create a VM using the most recent, non-deprecated image from the Rocky Linux 8 optimized for Google Cloud image family that uses the g2-standard-8 machine type and has an NVIDIA RTX Virtual Workstation, complete the following steps:
Create the VM. In this example, optional flags such as boot disk type and size are also specified.
gcloud compute instances create VM_NAME \
    --project=PROJECT_ID \
    --zone=ZONE \
    --machine-type=g2-standard-8 \
    --maintenance-policy=TERMINATE \
    --restart-on-failure \
    --network-interface=nic-type=GVNIC \
    --accelerator=type=nvidia-l4-vws,count=1 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --boot-disk-size=200GB \
    --boot-disk-type=pd-ssd
Replace the following:
- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM.

Install the NVIDIA driver and CUDA. For NVIDIA L4 GPUs, CUDA version XX or higher is required.
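When scripting the driver and CUDA installation, compare version strings numerically rather than lexically; a plain string comparison misorders versions such as "9.0" and "12.2". The following is a minimal sketch with illustrative version values, not the actual minimum for L4 GPUs:

```python
# Sketch: compare CUDA version strings numerically, not lexically.
# The version values in the examples below are illustrative.
def cuda_version_ok(installed: str, required: str) -> bool:
    """True if `installed` >= `required`, comparing "12.2" as (12, 2)."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

print(cuda_version_ok("12.2", "9.0"))   # True (string comparison would say False)
print(cuda_version_ok("11.0", "12.2"))  # False
```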
Using DLVM images is the easiest way to get started because these images already have the NVIDIA drivers and CUDA libraries pre-installed.
These images also provide performance optimizations.
The following DLVM images are supported for NVIDIA A100:

- common-cu110: NVIDIA driver and CUDA pre-installed
- tf-ent-1-15-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 1.15.3 pre-installed
- tf2-ent-2-1-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.1.1 pre-installed
- tf2-ent-2-3-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.3.1 pre-installed
- pytorch-1-6-cu110: NVIDIA driver, CUDA, PyTorch 1.6 pre-installed

For more information about the DLVM images that are available, and the packages installed on the images, see the Deep Learning VM documentation.
Create a VM using the tf2-ent-2-3-cu110 image and the a2-highgpu-1g machine type. In this example, optional flags such as boot disk size and scope are specified.
gcloud compute instances create VM_NAME \
    --project PROJECT_ID \
    --zone ZONE \
    --machine-type a2-highgpu-1g \
    --maintenance-policy TERMINATE \
    --image-family tf2-ent-2-3-cu110 \
    --image-project deeplearning-platform-release \
    --boot-disk-size 200GB \
    --metadata "install-nvidia-driver=True,proxy-mode=project_editors" \
    --scopes https://www.googleapis.com/auth/cloud-platform
Replace the following:
- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM.

The preceding example command also generates a Vertex AI Workbench user-managed notebooks instance for the VM. To access the notebook, in the Google Cloud console, go to the Vertex AI Workbench > User-managed notebooks page.
Multi-Instance GPUs

A Multi-Instance GPU partitions a single NVIDIA H100 or A100 GPU within the same VM into as many as seven independent GPU instances. They run simultaneously, each with its own memory, cache, and streaming multiprocessors. This setup enables the NVIDIA H100 or A100 GPU to deliver guaranteed quality of service (QoS) at up to 7x higher utilization compared to earlier GPU models.
You can create up to seven Multi-Instance GPUs. For A100 40GB GPUs, each Multi-Instance GPU is allocated 5 GB of memory. With the A100 80GB and H100 80GB GPUs, the allocated memory doubles to 10 GB each.
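The memory arithmetic above can be sketched as follows. This is a nominal calculation based on the per-slice figures stated in this section; nvidia-smi reports slightly different usable sizes per instance:

```python
# Sketch: nominal memory available to 1g MIG slices, using the per-slice
# figures stated in this section. These are nominal values, not the exact
# usable sizes that nvidia-smi reports.
PER_1G_SLICE_GB = {
    "A100 40GB": 5,
    "A100 80GB": 10,
    "H100 80GB": 10,
}

def total_slice_memory_gb(gpu: str, slices: int = 7) -> int:
    """Total nominal memory across `slices` 1g MIG instances on `gpu`."""
    if not 1 <= slices <= 7:
        raise ValueError("a GPU can be partitioned into at most 7 MIG instances")
    return slices * PER_1G_SLICE_GB[gpu]

print(total_slice_memory_gb("A100 40GB"))  # 35 (of 40 GB)
print(total_slice_memory_gb("H100 80GB"))  # 70 (of 80 GB)
```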
For more information about using Multi-Instance GPUs, see NVIDIA Multi-Instance GPU User Guide.
To create Multi-Instance GPUs, complete the following steps:
Create an A3 High, A3 Mega, A3 Edge, or A2 accelerator-optimized VM.
Enable NVIDIA GPU drivers.
Pro Tip: You can skip this step by creating VMs with Deep Learning VM Images. Each Deep Learning VM image has an NVIDIA GPU driver pre-installed.
Enable Multi-Instance GPUs.
sudo nvidia-smi -mig 1
Review the Multi-Instance GPU shapes that are available.
sudo nvidia-smi mig --list-gpu-instance-profiles
The output is similar to the following:
+-----------------------------------------------------------------------------+
| GPU instance profiles:                                                      |
| GPU   Name             ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                              Free/Total   GiB              CE    JPEG  OFA  |
|=============================================================================|
|   0  MIG 1g.10gb       19     7/7        9.62       No     16     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.10gb+me    20     1/1        9.62       No     16     1     0   |
|                                                             1     1     1   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.20gb       15     4/4       19.50       No     26     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 2g.20gb       14     3/3       19.50       No     32     2     0   |
|                                                             2     2     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 3g.40gb        9     2/2       39.25       No     60     3     0   |
|                                                             3     3     0   |
+-----------------------------------------------------------------------------+
.......
Create the Multi-Instance GPU (GI) and associated compute instances (CI) that you want. You can create these instances by specifying either the full or shortened profile name, profile ID, or a combination of both. For more information, see Creating GPU Instances.
The following example creates two MIG 3g.40gb GPU instances by using the profile ID (9).
The -C flag is also specified, which creates the associated compute instances for the required profile.
sudo nvidia-smi mig -cgi 9,9 -C
Check that the two Multi-Instance GPUs are created:
sudo nvidia-smi mig -lgi
Check that both the GIs and corresponding CIs are created.
sudo nvidia-smi
The output is similar to the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA H100 80G...  Off  | 00000000:04:00.0 Off |                   On |
| N/A   33C    P0    70W / 700W |     39MiB / 81559MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA H100 80G...  Off  | 00000000:05:00.0 Off |                   On |
| N/A   32C    P0    69W / 700W |     39MiB / 81559MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
......
+-----------------------------------------------------------------------------+
| MIG devices:                                                                |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|   0   1   0   0  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
|                  |      0MiB / 65535MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
|   0   2   0   1  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
|                  |      0MiB / 65535MiB |           |                       |
+------------------+----------------------+-----------+-----------------------+
......
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
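If a script needs to act on this output, the MIG devices rows can be extracted with a regular expression. This is a hedged sketch: it assumes the column layout shown above, which can change across driver versions, so parsing nvidia-smi text should be a last resort:

```python
import re

# Sketch: extract (GPU, GI id, CI id) triples from the "MIG devices"
# rows of nvidia-smi output. Assumes the column layout shown above,
# which can vary across driver versions.
MIG_ROW = re.compile(r"^\|\s+(\d+)\s+(\d+)\s+(\d+)\s+\d+\s+\|")

def parse_mig_devices(text):
    """Return [(gpu, gi_id, ci_id), ...] for each MIG device row."""
    devices = []
    for line in text.splitlines():
        m = MIG_ROW.match(line)
        if m:
            devices.append(tuple(int(g) for g in m.groups()))
    return devices

# Hypothetical sample matching the table format above.
sample = """\
|   0   1   0   0  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
|   0   2   0   1  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
"""
print(parse_mig_devices(sample))  # [(0, 1, 0), (0, 2, 0)]
```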
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.