This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance or bare metal instance with the resources that you need. When you create a compute instance, you select a machine type from a machine family that determines the resources available to that instance.
There are several machine families you can choose from. Each machine family is further organized into machine series and predefined machine types within each series. For example, within the N2 machine series in the general-purpose machine family, you can select the n2-standard-4 machine type.
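For example, you specify the machine type when you create an instance with the Google Cloud CLI. The following is a minimal sketch; the instance name and zone are placeholder assumptions, and all other settings use their defaults:

    gcloud compute instances create example-instance \
        --zone=us-central1-a \
        --machine-type=n2-standard-4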
For information about machine series that support Spot VMs (and preemptible VMs), see Compute Engine instances provisioning models.
Note: This document provides a list of Compute Engine machine families. For a detailed explanation of each machine family, see the dedicated page for that family.
This documentation uses the following terms:
Machine family: A curated set of processor and hardware configurations optimized for specific workloads, for example, General-purpose, Accelerator-optimized, or Memory-optimized.
Machine series: Machine families are further classified by series, generation, and processor type.
Each series focuses on a different aspect of computing power or performance. For example, the E series offers efficient VMs at a low cost, while the C series offers higher performance.
The generation is denoted by an ascending number. For example, the N1 series within the general-purpose machine family is the older version of the N2 series. A higher generation or series number usually indicates newer underlying CPU platforms or technologies. For example, the M3 series, which runs on Intel Xeon Scalable Processor 3rd Generation (Ice Lake), is a newer generation than the M2 series, which runs on Intel Xeon Scalable Processor 2nd Generation (Cascade Lake).
The machine series in each generation are grouped by processor vendor as follows:

- 4th generation machine series: Intel: N4, C4, X4, M4, A4; AMD: C4D, G4; Arm: C4A, A4X
- 3rd generation machine series: Intel: C3, H3, Z3, M3, A3; AMD: C3D; Arm: N/A
- 2nd generation machine series: Intel: N2, E2, C2, M2, A2, G2; AMD: N2D, C2D, T2D, E2; Arm: T2A

Machine type: Every machine series offers at least one machine type. Each machine type provides a set of resources for your compute instance, such as vCPUs, memory, disks, and GPUs. If a predefined machine type does not meet your needs, you can also create a custom machine type for some machine series.
The following sections describe the different machine types.
Predefined machine types

Predefined machine types come with a non-configurable amount of memory and vCPUs. Predefined machine types use a variety of vCPU-to-memory ratios:

- highcpu: from 1 to 3 GB of memory per vCPU; typically 2 GB of memory per vCPU.
- standard: from 3 to 7 GB of memory per vCPU; typically 4 GB of memory per vCPU.
- highmem: from 7 to 14 GB of memory per vCPU; typically 8 GB of memory per vCPU.
- megamem: from 14 to 19 GB of memory per vCPU.
- hypermem: from 19 to 24 GB of memory per vCPU; typically 21 GB of memory per vCPU.
- ultramem: from 24 to 31 GB of memory per vCPU.

For example, a c3-standard-22 machine type has 22 vCPUs and, as a standard machine type, 88 GB of memory.
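To confirm the exact vCPU count and memory of a predefined machine type, you can describe it with the gcloud CLI. This is a sketch; the zone is an assumption and the machine type must be offered there:

    gcloud compute machine-types describe c3-standard-22 \
        --zone=us-central1-a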
Local SSD machine types are special predefined machine types. The machine type names include lssd. When you create a compute instance using one of the following machine types, Titanium SSD or Local SSD disks are automatically attached to the instance:

- -lssd: Available with the C4, C4A, C4D, C3, and C3D machine series, these machine types attach a predetermined number of 375 GiB Local SSD disks to the instance. Examples of this machine type include c4a-standard-4-lssd, c3-standard-88-lssd, and c3d-highmem-360-lssd.
- -standardlssd: Available with the storage-optimized Z3 machine series, these machine types provide up to 350 GiB of Titanium SSD disk capacity per vCPU. These machine types are recommended for high-performance search and data analysis on medium-sized data sets. An example of this machine type is z3-highmem-22-standardlssd.
- -highlssd: Available with the Z3 machine series, these machine types provide between 350 GiB and 600 GiB of Titanium SSD disk capacity per vCPU. These machine types offer high performance and are recommended for storage-intensive streaming and data analysis on large data sets. An example of this machine type is z3-highmem-88-highlssd.

Other machine series also support Local SSD disks but don't use a machine type name that includes lssd. For a list of all the machine types that you can use with Titanium SSD or Local SSD disks, see Choose a valid number of Local SSD disks.
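As a sketch of how these machine types are used, creating an instance with an lssd machine type automatically attaches the associated Titanium SSD or Local SSD disks; no separate disk flags are required. The instance name and zone here are assumptions, and the machine type must be available in the zone you choose:

    gcloud compute instances create example-lssd-instance \
        --zone=us-central1-a \
        --machine-type=c3-standard-88-lssd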
Bare metal machine types are special predefined machine types. The machine type name ends in -metal. When you create a compute instance using one of these machine types, there is no hypervisor installed on the instance. You can attach disks to a bare metal instance, just as you would with a VM instance. Bare metal instances can be used in VPC networks and subnetworks in the same way as VM instances.

These machine types are available with the C4, C4D, C3, X4, and Z3 machine series. For more information, see Bare metal instances on Compute Engine.
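To see which bare metal machine types are offered in a particular zone, you can filter the machine type list for the -metal suffix. This is a sketch; the zone is an assumption:

    gcloud compute machine-types list \
        --zones=us-central1-a \
        --filter="name~'-metal'"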
Custom machine types

If none of the predefined machine types match your workload needs, you can create a VM instance with a custom machine type for the N and E machine series in the general-purpose machine family.
Custom machine types cost slightly more to use than an equivalent predefined machine type, and there are limitations on the amount of memory and vCPUs that you can select for a custom machine type. The on-demand prices for custom machine types include a 5% premium over the on-demand and commitment prices for predefined machine types.
When creating a custom machine type, you can use the extended memory feature. Instead of using the default memory size based on the number of vCPUs you select, you can specify an amount of memory, up to the limit for the machine series.
For more information, see Create a VM with a custom machine type.
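As a sketch of the syntax, the following creates an N2 instance with a custom machine type of 6 vCPUs and 24 GB of memory; the instance name and zone are assumptions. Adding --custom-extensions enables the extended memory feature described earlier:

    gcloud compute instances create example-custom-instance \
        --zone=us-central1-a \
        --custom-vm-type=n2 \
        --custom-cpu=6 \
        --custom-memory=24GB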
Shared-core machine types

The E2 and N1 series contain shared-core machine types. These machine types timeshare a physical core, which can be a cost-effective method for running small, non-resource-intensive apps:

- E2: offers the e2-micro, e2-small, and e2-medium shared-core machine types, which have 2 vCPUs available for short periods of bursting.
- N1: offers the f1-micro and g1-small shared-core machine types, which have up to 1 vCPU available for short periods of bursting.
For more information, see CPU bursting.
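For example, a minimal sketch of creating a shared-core instance for a small workload; the instance name and zone are assumptions:

    gcloud compute instances create example-micro-instance \
        --zone=us-central1-a \
        --machine-type=e2-micro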
Machine family and series recommendations

The following tables provide recommendations for different workloads.

General-purpose workloads
- N4, N2, N2D, N1: Balanced price/performance across a wide range of machine types
- C4, C4A, C4D, C3, C3D: Consistently high performance for a variety of workloads
- E2: Day-to-day computing at a lower cost
- Tau T2D, Tau T2A: Best per-core performance/cost for scale-out workloads

After you create a compute instance, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.
General-purpose machine family guide

The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.
Compute Engine offers general-purpose machine series that run on either x86 or Arm architecture.
x86

The x86-based general-purpose series offer highcpu, standard, and highmem machine type configurations. The memory per vCPU varies by series; for example, some series offer highcpu (2 GB per vCPU), standard (3.75 GB per vCPU), and highmem (7.75 GB per vCPU) configurations; others offer highcpu (1.875 GB per vCPU), standard (3.875 GB per vCPU), and highmem (7.875 GB per vCPU) configurations; and others offer highcpu (2 GB per vCPU), standard (4 GB per vCPU), and highmem (8 GB per vCPU) configurations.

Arm

The C4A machine series is the second machine series in Google Cloud to run on Arm processors and the first to run on Google Axion Processors, which support the Armv9 architecture. C4A instances are powered by the Titanium IPU with disk and network offloads; this improves instance performance by reducing on-host processing.
C4A instances provide up to 72 vCPUs with up to 8 GB of memory per vCPU in a single UMA domain. C4A offers -lssd machine types that come with up to 6 TiB of Titanium SSD capacity. C4A instances don't use simultaneous multithreading (SMT); a vCPU in a C4A instance is equivalent to an entire physical core.
The Tau T2A machine series is the first machine series in Google Cloud to run on Arm processors. Tau T2A machine types are optimized to deliver compelling price-performance. Each VM can have up to 48 vCPUs with 4 GB of memory per vCPU. The Tau T2A machine series runs on a 64-core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. Tau T2A machine types support a single NUMA node, and a vCPU is equivalent to an entire core.
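Because C4A and Tau T2A instances run on the Arm architecture, you must choose an Arm-compatible OS image when you create them. The following is a sketch only; the instance name and zone are assumptions, and the machine type must be offered in the zone you choose:

    gcloud compute instances create example-arm-instance \
        --zone=us-central1-a \
        --machine-type=t2a-standard-4 \
        --image-family=debian-12-arm64 \
        --image-project=debian-cloud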
Storage-optimized machine family guide

The storage-optimized machine family is best suited for high-performance and flash-optimized workloads such as SQL, NoSQL, and vector databases, scale-out data analytics, data warehouses and search, and distributed file systems that need fast access to large amounts of data stored in local storage. The storage-optimized machine family is designed to provide high local storage throughput and IOPS at sub-millisecond latency.
Z3 offers two kinds of machine types:

- standardlssd instances can have up to 176 vCPUs, 1,408 GB of memory, and 36 TiB of Titanium SSD.
- highlssd instances can have up to 88 vCPUs, 704 GB of memory, and 36 TiB of Titanium SSD.

Z3 runs on the Intel Xeon Scalable processor (code name Sapphire Rapids) with DDR5 memory and Titanium offload processors. Z3 brings together compute, networking, and storage innovations into one platform. Z3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
Compute-optimized machine family guide

The compute-optimized machine family is optimized for running compute-bound applications by providing the highest performance per core.
Memory-optimized machine family guide

The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 32 TB of memory.
Accelerator-optimized machine family guide

The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This machine family is the optimal choice for workloads that require GPUs.
Google also offers AI Hypercomputer for creating clusters of accelerator-optimized VMs with inter-GPU communication, which are designed for running very intensive AI and ML workloads. For more information, see AI Hypercomputer overview.
Use the following table to compare each machine family and determine which one is appropriate for your workload. If, after reviewing this section, you are still unsure which family is best for your workload, start with the general-purpose machine family. For details about all supported processors, see CPU platforms.
To learn how your selection affects the performance of disk volumes attached to your compute instances, see the documentation about disk performance.
Compare the characteristics of the different machine series, from C4A to G2, across the properties listed in the following table.
Machine series: C4 C4A C4D C3 C3D N4 N2 N2D N1 T2D T2A E2 Z3 H3 C2 C2D X4 M4 M3 M2 M1 N1+GPU A4X A4 A3 (H200) A3 (H100) A2 G4 G2
Workload type: General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose General-purpose Cost optimized Storage optimized Compute optimized Compute optimized Compute optimized Memory optimized Memory optimized Memory optimized Memory optimized Memory optimized Accelerator optimized Accelerator optimized Accelerator optimized Accelerator optimized Accelerator optimized Accelerator optimized Accelerator optimized Accelerator optimized
Instance type: VM VM VM and bare metal VM and bare metal VM VM VM VM VM VM VM VM VM and bare metal VM VM VM Bare metal VM VM VM VM VM VM VM VM VM VM VM VM
CPU type: Intel Emerald Rapids and Granite Rapids Google Axion AMD EPYC Turin Intel Sapphire Rapids AMD EPYC Genoa Intel Emerald Rapids Intel Cascade Lake and Ice Lake AMD EPYC Rome and EPYC Milan Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge AMD EPYC Milan Ampere Altra Intel Skylake, Broadwell, and Haswell, AMD EPYC Rome and EPYC Milan Intel Sapphire Rapids Intel Sapphire Rapids Intel Cascade Lake AMD EPYC Milan Intel Sapphire Rapids Intel Emerald Rapids Intel Ice Lake Intel Cascade Lake Intel Skylake and Broadwell Intel Skylake, Broadwell, Haswell, Sandy Bridge, and Ivy Bridge NVIDIA Grace Intel Emerald Rapids Intel Emerald Rapids Intel Sapphire Rapids Intel Cascade Lake AMD EPYC Turin Intel Cascade Lake
Architecture: x86 Arm x86 x86 x86 x86 x86 x86 x86 x86 Arm x86 x86 x86 x86 x86 x86 x86 x86 x86 x86 x86 Arm x86 x86 x86 x86 x86 x86
vCPUs: 2 to 288 1 to 72 2 to 384 4 to 176 4 to 360 2 to 80 2 to 128 2 to 224 1 to 96 1 to 60 1 to 48 0.25 to 32 8 to 192 88 4 to 60 2 to 112 960 to 1,920 28 to 224 32 to 128 208 to 416 40 to 160 1 to 96 140 224 224 208 12 to 96 48 to 384 4 to 96
vCPU definition: Thread Core Thread Thread Thread Thread Thread Thread Thread Core Core Thread Thread Core Thread Thread Thread Thread Thread Thread Thread Thread Core Thread Thread Thread Thread Thread Thread
Memory: 2 to 2,232 GB 2 to 576 GB 3 to 3,072 GB 8 to 1,408 GB 8 to 2,880 GB 2 to 640 GB 2 to 864 GB 2 to 896 GB 1.8 to 624 GB 4 to 240 GB 4 to 192 GB 1 to 128 GB 64 to 1,536 GB 352 GB 16 to 240 GB 4 to 896 GB 16,384 to 32,768 GB 372 to 5,952 GB 976 to 3,904 GB 5,888 to 11,776 GB 961 to 3,844 GB 3.75 to 624 GB 884 GB 3,968 GB 2,952 GB 1,872 GB 85 to 1,360 GB 180 to 1,440 GB 16 to 432 GB
Custom machine types: — — — — — — — — — — — — — — — — — — — — — —
Extended memory: — — — — — — — — — — — — — — — — — — — — — — — —
Sole tenancy: — — — — — — — — — — —
Nested virtualization: — — — — — — — — — — — — — — — — —
Confidential Computing: — — AMD SEV Intel TDX AMD SEV — — AMD SEV, AMD SEV-SNP — — — — — — — AMD SEV — — — — — — — — — Intel TDX, NVIDIA Confidential Computing — — —
Disk interface type: NVMe NVMe NVMe NVMe NVMe NVMe SCSI and NVMe SCSI and NVMe SCSI and NVMe SCSI and NVMe NVMe SCSI NVMe NVMe SCSI and NVMe SCSI and NVMe NVMe NVMe NVMe SCSI SCSI and NVMe SCSI and NVMe NVMe NVMe NVMe NVMe SCSI and NVMe NVMe NVMe
Hyperdisk Balanced: — — — — — — — — — — —
Hyperdisk Balanced HA: — — — — — — — — — — — — — — — — — — — —
Hyperdisk Extreme: — — — — — — — — — — — —
Hyperdisk ML: — — — — — — — — — — — — — — — — — — — — — — — —
Hyperdisk Throughput: — — — — — — — — — — — — — — — — — — —
Local SSD: — — — — — — — —
Max Local SSD: 18 TiB 6 TiB 12 TiB 12 TiB 12 TiB 0 9 TiB 9 TiB 9 TiB 0 0 0 36 TiB (VM), 72 TiB (Metal) 0 3 TiB 3 TiB 0 0 3 TiB 0 3 TiB 9 TiB 12 TiB 12 TiB 12 TiB 6 TiB 3 TiB 12 TiB 3 TiB
Standard PD: — — — — — — Zonal and Regional Zonal and Regional Zonal and Regional Zonal Zonal Zonal and Regional — — Zonal Zonal — — — Zonal Zonal Zonal and Regional — — — — Zonal — —
Balanced PD: — — — Zonal Zonal — Zonal and Regional Zonal and Regional Zonal and Regional Zonal Zonal Zonal and Regional Zonal Zonal Zonal Zonal — — Zonal Zonal Zonal Zonal and Regional — — — Zonal Zonal — Zonal
SSD PD: — — — Zonal Zonal — Zonal and Regional Zonal and Regional Zonal and Regional Zonal Zonal Zonal and Regional Zonal — Zonal Zonal — — Zonal Zonal Zonal Zonal and Regional — — — Zonal Zonal — Zonal
Extreme PD: — — — — — — — — — — — — — — — — — — — — — — — — —
Network interfaces: gVNIC and IDPF gVNIC gVNIC and IDPF gVNIC and IDPF gVNIC gVNIC gVNIC and VirtIO-Net gVNIC and VirtIO-Net gVNIC and VirtIO-Net gVNIC and VirtIO-Net gVNIC gVNIC and VirtIO-Net gVNIC and IDPF gVNIC gVNIC and VirtIO-Net gVNIC and VirtIO-Net IDPF gVNIC gVNIC gVNIC and VirtIO-Net gVNIC and VirtIO-Net gVNIC and VirtIO-Net gVNIC and MRDMA gVNIC and MRDMA gVNIC and MRDMA gVNIC gVNIC and VirtIO-Net gVNIC gVNIC and VirtIO-Net
Network performance: 10 to 100 Gbps 10 to 50 Gbps 10 to 100 Gbps 23 to 100 Gbps 20 to 100 Gbps 10 to 50 Gbps 10 to 32 Gbps 10 to 32 Gbps 2 to 32 Gbps 10 to 32 Gbps 10 to 32 Gbps 1 to 16 Gbps 23 to 100 Gbps up to 200 Gbps 10 to 32 Gbps 10 to 32 Gbps up to 100 Gbps 32 to 100 Gbps up to 32 Gbps up to 32 Gbps up to 32 Gbps 2 to 32 Gbps up to 2,000 GBps up to 3,600 Gbps up to 3,200 Gbps up to 1,800 Gbps 24 to 100 Gbps 50 to 400 Gbps 10 to 100 Gbps
High-bandwidth network: 50 to 200 Gbps 50 to 100 Gbps 50 to 200 Gbps 50 to 200 Gbps 50 to 200 Gbps — 50 to 100 Gbps 50 to 100 Gbps — — — — 50 to 200 Gbps — 50 to 100 Gbps 50 to 100 Gbps — 50 to 200 Gbps 50 to 100 Gbps — — 50 to 100 Gbps up to 2,000 GBps up to 3,600 Gbps up to 3,200 Gbps up to 1,800 Gbps 50 to 100 Gbps 50 to 400 Gbps 50 to 100 Gbps
Max GPUs: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 4 8 8 8 16 8 8
Sustained use discounts: — — — — — — — — — — — — — — — — — — — — — —
Committed use discounts: Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs — Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs and Flexible CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs Resource-based CUDs — Resource-based CUDs
Spot VM discounts: — — — — —

GPUs and compute instances

GPUs are used to accelerate workloads, and are supported for A4X, A4, A3, A2, G4, G2, and N1 instances. For instances that use A4X, A4, A3, A2, G4, or G2 machine types, the GPUs are automatically attached when you create the instance. For instances that use N1 machine types, you can attach GPUs to the instance during or after instance creation. GPUs can't be used with any other machine series.
Accelerator-optimized instances have a fixed number of GPUs, vCPUs and memory per machine type, with the exception of G2 machines that offer a custom memory range. N1 instances with fewer GPUs attached are limited to a maximum number of vCPUs. In general, a higher number of GPUs lets you create instances with a higher number of vCPUs and memory. For more information, see GPUs on Compute Engine.
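For N1 instances, you attach the GPUs yourself. The following sketch attaches one NVIDIA T4 GPU at creation time; the instance name, zone, and GPU model are assumptions, and the GPU type must be available in the zone you choose. GPU instances are created with a host maintenance policy of TERMINATE:

    gcloud compute instances create example-gpu-instance \
        --zone=us-central1-a \
        --machine-type=n1-standard-8 \
        --accelerator=type=nvidia-tesla-t4,count=1 \
        --maintenance-policy=TERMINATE \
        --image-family=debian-12 \
        --image-project=debian-cloud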
What's next