This page describes known issues that you might run into while using Compute Engine. For issues that specifically affect Confidential VMs, see Confidential VM limitations.
General issues
The following issues provide troubleshooting guidance or general information.
You can't use the Google Cloud console to create Spot VMs for A4 and A3 Ultra
You can't create a Spot VM that uses the A4 or A3 Ultra machine series using the Google Cloud console. Specifically, on the Create an instance page, after you select a GPU type for these machine series in the Machine configuration pane, the Advanced pane states that A reservation is required and doesn't let you select Spot in the VM provisioning model list.
To work around the issue, use the gcloud CLI or REST to create Spot VMs for A4 and A3 Ultra. Alternatively, you can use the Google Cloud console to create A4 and A3 Ultra VMs that use the reservation-bound provisioning model.
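For example, a minimal gcloud sketch for creating an A3 Ultra Spot VM might look like the following. The instance name, zone, and machine type are placeholders, and A3 Ultra instances typically require additional networking, disk, and reservation-related flags that aren't shown here:

gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --machine-type=a3-ultragpu-8g \
    --provisioning-model=SPOT \
    --instance-termination-action=DELETE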
Modifying the IOPS or throughput on an Asynchronous Replication primary disk using the gcloud compute disks update command causes a false error
When you use the gcloud compute disks update command to modify the IOPS and throughput on an Asynchronous Replication primary disk, the Google Cloud CLI shows an error message even if the update was successful.
To accurately verify that an update was successful, use the Google Cloud CLI or the Google Cloud console to check whether the disk properties show the new IOPS and throughput values. For more information, see View the provisioned performance settings for Hyperdisk.
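For example, one way to check the current values with the gcloud CLI; the disk name and zone are placeholders:

gcloud compute disks describe DISK_NAME \
    --zone=ZONE \
    --format="value(provisionedIops,provisionedThroughput)"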
The metadata server might display old physicalHost VM metadata
After a host error moves an instance to a new host, querying the metadata server might return the physicalHost metadata of the instance's previous host.
To work around this issue, do one of the following:
Use the instances.get method or the gcloud compute instances describe command to retrieve the correct physicalHost information (see the example command after this list).
Stop and then start the instance to refresh the physicalHost information in the metadata server.
Wait for the physicalHost information to be updated.
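For example, a gcloud sketch that reads the field directly; the instance name and zone are placeholders:

gcloud compute instances describe VM_NAME \
    --zone=ZONE \
    --format="value(resourceStatus.physicalHost)"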
Long baseInstanceName values in managed instance groups (MIGs) can cause disk name conflicts
In a MIG, disk name conflicts can occur if the instance template specifies disks to be created upon VM creation and the baseInstanceName value exceeds 54 characters. This happens because Compute Engine generates disk names using the instance name as a prefix.
When generating disk names, if the resulting name exceeds the resource name limit of 63 characters, Compute Engine truncates the excess characters from the end of the instance name. This truncation can lead to the creation of identical disk names for instances that have similar naming patterns. In such a case, the new instance will attempt to attach the existing disk. If the disk is already attached to another instance, the new instance creation fails. If the disk is not attached or is in multi-writer mode, the new instance will attach the disk, potentially leading to data corruption.
To avoid disk name conflicts, keep the baseInstanceName value to a maximum length of 54 characters.
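As an illustrative sketch, you can set a short base instance name explicitly when creating the MIG; the group name, template, zone, and size below are placeholders:

gcloud compute instance-groups managed create example-mig \
    --zone=ZONE \
    --template=INSTANCE_TEMPLATE \
    --size=3 \
    --base-instance-name=vm-short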
If you use an instance template that specifies an A2, C3, or G2 machine type to create a reservation, or to create and submit a future reservation request for review, you encounter issues. Specifically:
Creating the reservation might fail. If it succeeds, then one of the following applies:
If you created an automatically consumed reservation (default), creating VMs with matching properties won't consume the reservation.
If you created a specific reservation, creating VMs to specifically target the reservation fails.
Creating the future reservation request succeeds. However, if you submit it for review, Google Cloud declines your request.
You can't replace the instance template used to create a reservation or future reservation request, or override the template's VM properties. If you want to reserve resources for A2, C3, or G2 machine types, do one of the following instead:
Create a new single-project or shared reservation by specifying properties directly (see the example command after this list).
Create a new future reservation request by doing the following:
If you want to stop an existing future reservation request from restricting the properties of the future reservation requests you can create in your current project—or in the projects the future reservation request is shared with—delete the future reservation request.
Create a single-project or shared future reservation request by specifying properties directly and submit it for review.
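For example, a minimal sketch of the first option, creating a single-project reservation by specifying properties directly; the reservation name, zone, machine type, and VM count are placeholders:

gcloud compute reservations create RESERVATION_NAME \
    --zone=ZONE \
    --machine-type=c3-standard-8 \
    --vm-count=10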
Using -lssd machine types with Google Kubernetes Engine
When using the Google Kubernetes Engine API, the node pool that you provision with Local SSD attached must have the same number of SSD disks as the selected C4, C3, or C3D machine type. For example, if you plan to create a VM that uses the c3-standard-8-lssd machine type, there must be 2 SSD disks, whereas for a c3d-standard-8-lssd machine type, just 1 SSD disk is required. If the disk number doesn't match, you get a Local SSD misconfiguration error from the Compute Engine control plane. See Machine types that automatically attach Local SSD disks to select the correct number of Local SSD disks based on the lssd machine type.
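As a hedged sketch, a node pool whose Local SSD count matches a c3-standard-8-lssd node might be created as follows. This assumes the cluster already exists, and the --ephemeral-storage-local-ssd flag and its count syntax should be verified against your gcloud version and the GKE Local SSD documentation:

gcloud container node-pools create example-pool \
    --cluster=CLUSTER_NAME \
    --machine-type=c3-standard-8-lssd \
    --ephemeral-storage-local-ssd count=2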
Using the Google Kubernetes Engine Google Cloud console to create a cluster or node pool with c4-standard-*-lssd, c4-highmem-*-lssd, c3-standard-*-lssd, and c3d-standard-*-lssd VMs results in node creation failure or a failure to detect Local SSDs as ephemeral storage.
C3D VMs larger than 30 vCPUs might experience single-flow TCP throughput variability and occasionally be limited to 20-25 Gbps. To achieve higher rates, use multiple TCP flows.
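As an illustration (not a Google-provided procedure), you can generate several parallel TCP flows with a tool such as iperf3; the remote host, stream count, and duration here are arbitrary:

# -P sets the number of parallel TCP streams; run against an iperf3 server on the remote VM.
iperf3 -c REMOTE_HOST -P 8 -t 30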
The CPU utilization observability metric is incorrect for VMs that use one thread per core
If your VM's CPU uses one thread per core, the CPU utilization Cloud Monitoring observability metric in the Compute Engine > VM instances > Observability tab only scales to 50%. Two threads per core is the default for all machine types, except Tau T2D. For more information, see Set number of threads per core.
To view your VM's CPU utilization normalized to 100%, view CPU utilization in Metrics Explorer instead. For more information, see Create charts with Metrics Explorer.
Google Cloud console SSH-in-browser connections might fail if you use custom firewall rules
If you use custom firewall rules to control SSH access to your VM instances, you might not be able to use the SSH-in-browser feature.
To work around this issue, do one of the following:
Enable Identity-Aware Proxy for TCP to continue connecting to VMs using the SSH-in-browser Google Cloud console feature.
Recreate the default-allow-ssh firewall rule to continue connecting to VMs using SSH-in-browser.
Connect to VMs using the Google Cloud CLI instead of SSH-in-browser.
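For example, a minimal gcloud sketch; the instance name and zone are placeholders, and the --tunnel-through-iap flag is only needed if you connect through Identity-Aware Proxy:

gcloud compute ssh VM_NAME --zone=ZONE --tunnel-through-iap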
During virtual machine (VM) instance updates initiated using the gcloud compute instances update command or the instances.update API method, Compute Engine might temporarily change the name of your VM's disks by adding one of the following suffixes to the original name:
-temp
-old
-new
Compute Engine removes the suffix and restores the original disk names as the update completes.
Increased latency for some Persistent Disks caused by disk resizing
In some cases, resizing large Persistent Disks (~3 TB or larger) might be disruptive to the I/O performance of the disk. If you are impacted by this issue, your Persistent Disks might experience increased latency during the resize operation. This issue can impact Persistent Disks of any type.
Your automated processes might fail if they use API response data about your resource-based commitment quotas
Your automated processes that consume and use API response data about your Compute Engine resource-based commitment quotas might fail if all of the following conditions apply. Your automated processes can include any snippets of code, business logic, or database fields that use or store the API responses.
The response data is from any of the following Compute Engine API methods: compute.regions.list, compute.regions.get, or compute.projects.get.
You use an int instead of a number to define the field for your resource quota limit in your API response bodies. You can find the field in the following ways for each method:
items[].quotas[].limit for the compute.regions.list method.
quotas[].limit for the compute.regions.get method.
quotas[].limit for the compute.projects.get method.
You have unlimited default quota available for any of your Compute Engine committed SKUs.
For more information about quotas for commitments and committed SKUs, see Quotas for commitments and committed resources.
When you have limited quota, if you define the items[].quotas[].limit or quotas[].limit field as an int type, the API response data for your quota limits might still fall within the range for the int type, and your automated process might not get disrupted. But when the default quota limit is unlimited, the Compute Engine API returns a value for the limit field that falls outside of the range defined by the int type. Your automated process can't consume the value returned by the API method and fails as a result.
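To see the kind of limit values the API returns for a region, you can inspect its quotas directly. This sketch assumes jq is installed locally and that the region name and the COMMITTED metric filter are just examples:

gcloud compute regions describe us-central1 --format=json \
  | jq '.quotas[] | select(.metric | test("COMMITTED")) | {metric, limit}'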
You can work around this issue and continue generating your automated reports in the following ways:
Recommended: Follow the Compute Engine API reference documentation and use the correct data types for the API method definitions. Specifically, use the number type to define the items[].quotas[].limit and quotas[].limit fields for your API methods.
Decrease your quota limit to a value under 9,223,372,036,854,775,807. You must set quota caps for all projects that have resource-based commitments, across all regions.
Update your automated process to read the items[].quotas[].limit field for the compute.regions.list method and handle it as the number type.
To return the default quota limits for your committed SKUs back to their unlimited value, you must remove the quota limit caps.
C4D bare metal instances can't run the SUSE Linux Enterprise Server (SLES) version 15 SP6 OS.
Workaround
Use SLES 15 SP5 instead.
Issues related to using Dynamic Network Interfaces
This section describes known issues related to using multiple network interfaces and Dynamic Network Interfaces.
Packet loss with custom MTU sizes
A Dynamic NIC with a parent vNIC might experience packet loss with custom MTU sizes.
Workaround
To avoid packet loss, use one of the following MTU sizes:
On an instance, reusing a VLAN ID for a new Dynamic NIC has firewall connection tracking implications. If you delete a Dynamic NIC and create a replacement Dynamic NIC that uses the same VLAN ID, firewall connection tracking table entries aren't automatically cleared. The following example shows the relevant security considerations:
An original Dynamic NIC uses VLAN ID 4 and is connected to a subnet in the network-1 VPC network.
A replacement Dynamic NIC also uses VLAN ID 4, but connects to a subnet in a different VPC network, network-2.
Because the connection tracking entries created for the original Dynamic NIC aren't cleared, resources in the network-2 VPC network can send packets whose sources match 192.0.2.7:443, and the compute instance accepts them without needing to evaluate ingress firewall rules.
For more information about connection tracking and firewall rules, see Specifications in the Cloud Next Generation Firewall documentation.
Solution
On a per-instance basis, if you need to create a Dynamic NIC that uses the same VLAN ID as a Dynamic NIC that was removed from the instance:
Packet interception when using Dynamic NICs can result in dropped packets. Dropped packets can happen when the pipeline is terminated early. The issue affects both session-based and non-session-based modes.
Root cause
Dropped packets occur during packet interception when the pipeline is terminated early (ingress intercept and egress reinject). The early termination causes the VLAN ID to be missing from the ingress packet's Ethernet header. Because the egress packet is derived from the modified ingress packet, the egress packet also lacks the VLAN ID. This leads to incorrect endpoint index selection and subsequent packet drops.
Workaround
Don't use Google Cloud features that rely on packet intercept, such as firewall endpoints.
Known issues for Linux VM instances
These are the known issues for Linux VMs.
Ubuntu VMs that use OS image version v20250530 show an incorrect FQDN
You might see an incorrect Fully Qualified Domain Name (FQDN) with a .local suffix appended when you do one of the following:
Use version 20250328.00 of the google-compute-engine package.
Use an Ubuntu OS image with version v20250530. For example, projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250530.
If you experience this issue, you might see a FQDN similar to the following:
[root@ubuntu2204 ~]# apt list --installed | grep google
...
google-compute-engine/noble-updates,now 20250328.00-0ubuntu2~24.04.0 all [installed]
...
[root@ubuntu2204 ~]# curl "http://metadata.google.internal/computeMetadata/v1/instance/image" -H "Metadata-Flavor: Google"
projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250530
[root@ubuntu2204 ~]# hostname -f
ubuntu2204.local
Root cause
On all Ubuntu images with version v20250530, the guest-config package version 20250328.00 adds local to the search path due to the introduction of a new configuration file: https://github.com/GoogleCloudPlatform/guest-configs/blob/20250328.00/src/etc/systemd/resolved.conf.d/gce-resolved.conf
[root@ubuntu2204 ~]# cat /etc/resolv.conf
# This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8).
# Do not edit.
...
nameserver 127.0.0.53
options edns0 trust-ad
search local ... google.internal
The presence of this local entry within the search path in the /etc/resolv.conf file results in a .local element being appended to the hostname when a FQDN is requested.
Note that version 20250501 of guest-configs already fixes the issue, but Canonical hasn't incorporated the fix into their images yet.
Workaround
Edit /etc/systemd/resolved.conf.d/gce-resolved.conf by changing Domains=local to Domains=~local.
Restart systemd-resolved by running systemctl restart systemd-resolved.
Verify that local is removed from the search path in /etc/resolv.conf.
Confirm the FQDN by using the command hostname -f:
[root@ubuntu2204 ~]# hostname -f
ubuntu2204.us-central1-a.c.my-project.internal
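A one-line variant of the first two workaround steps, assuming the configuration file exists at the path shown:

# Change Domains=local to Domains=~local, then restart systemd-resolved.
sudo sed -i 's/^Domains=local$/Domains=~local/' /etc/systemd/resolved.conf.d/gce-resolved.conf
sudo systemctl restart systemd-resolved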
After changing a SUSE Linux Enterprise VM's instance type, it can fail to boot with the following error repeating in the serial console:
Starting dracut initqueue hook...
[ 136.146065] dracut-initqueue[377]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[ 136.164820] dracut-initqueue[377]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-uuid\x2fD3E2-0CEB.sh: "[ -e "/dev/disk/by-uuid/D3E2-0CEB" ]"
[ 136.188732] dracut-initqueue[377]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-uuid\x2fe7b218a9-449d-4477-8200-a7bb61a9ff4d.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then
[ 136.220738] dracut-initqueue[377]: [ -e "/dev/disk/by-uuid/e7b218a9-449d-4477-8200-a7bb61a9ff4d" ]
[ 136.240713] dracut-initqueue[377]: fi"
Root cause
SUSE creates its cloud images with a versatile initramfs (initial RAM filesystem) that supports various instance types. This is achieved by using the --no-hostonly --no-hostonly-cmdline -o multipath flags during the initial image creation. However, when a new kernel is installed or the initramfs is regenerated, which happens during system updates, these flags are omitted by default. This results in a smaller initramfs tailored specifically for the current system's hardware, potentially excluding drivers needed for other instance types.
For example, C3 instances use NVMe drives, which require specific modules to be included in the initramfs. If a system with an initramfs lacking these NVMe modules is migrated to a C3 instance, the boot process fails. This issue can also affect other instance types with unique hardware requirements.
Resolution
Before changing the machine type, regenerate the initramfs with all drivers:
dracut --force --no-hostonly
If the system is already impacted by the issue, create a temporary rescue VM. Use the chroot command to access the impacted VM's boot disk, then regenerate the initramfs using the following command:
dracut -f --no-hostonly
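A rough sketch of the rescue-VM path; the device name and partition layout are assumptions and will differ on your system:

# On the rescue VM, with the impacted boot disk attached as a secondary disk
# (assumed here to be /dev/sdb with the root filesystem on partition 2).
sudo mount /dev/sdb2 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt dracut -f --no-hostonly
# Unmount in reverse order, detach the disk, and reattach it to the original VM.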
Lower IOPS performance for Local SSD on Z3 with SUSE 12 images
Z3 VMs on SUSE Linux Enterprise Server (SLES) 12 images have significantly less than expected performance for IOPS on Local SSD disks.
Root cause
This is an issue within the SLES 12 codebase.
Workaround
A patch from SUSE to fix this issue is not available or planned. Instead, you should use the SLES 15 operating system.
RHEL 7 and CentOS VMs lose network access after reboot
If your CentOS or RHEL 7 VMs have multiple network interface cards (NICs) and one of these NICs doesn't use the VirtIO interface, then network access might be lost on reboot. This happens because RHEL doesn't support disabling predictable network interface names if at least one NIC doesn't use the VirtIO interface.
Resolution
Network connectivity can be restored by stopping and starting the VM until the issue resolves. To prevent the loss of network connectivity from recurring, do the following:
Edit the /etc/default/grub file and remove the kernel parameters net.ifnames=0 and biosdevname=0.
Regenerate the grub configuration.
Reboot the VM.
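A hedged sketch of those steps on RHEL or CentOS 7; the GRUB configuration path shown is for BIOS boot, and UEFI systems use a path under /boot/efi instead:

# Remove the parameters from the kernel command-line defaults (keeps a backup of the file).
sudo sed -i.bak -e 's/net\.ifnames=0//g' -e 's/biosdevname=0//g' /etc/default/grub
# Regenerate the GRUB configuration, then reboot.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot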
The following issue was resolved on January 13, 2025.
Public Google Cloud SUSE images don't include the required udev configuration to create symlinks for C3 and C3D Local SSD devices.
Resolution
To add udev rules for SUSE and custom images, see Symlinks not created for C3 and C3D with Local SSD.
repomd.xml signature couldn't be verified
On Red Hat Enterprise Linux (RHEL) or CentOS 7 based systems, you might see the following error when trying to install or update software using yum. This error shows that you have an expired or incorrect repository GPG key.
Sample log:
[root@centos7 ~]# yum update
...
google-cloud-sdk/signature | 1.4 kB  00:00:01 !!!
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
Trying other mirror.
...
failure: repodata/repomd.xml from google-cloud-sdk: [Errno 256] No more mirrors to try.
https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
Resolution
To fix this, disable repository GPG key checking in the yum repository configuration by setting repo_gpgcheck=0. In supported Compute Engine base images, this setting might be found in the /etc/yum.repos.d/google-cloud.repo file. However, your VM can have this set in different repository configuration files or automation tools.
Yum repositories don't usually use GPG keys for repository validation. Instead, the https endpoint is trusted.
To locate and update this setting, complete the following steps:
Look for the setting in your /etc/yum.repos.d/google-cloud.repo file.
cat /etc/yum.repos.d/google-cloud.repo

[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Change all lines that say repo_gpgcheck=1 to repo_gpgcheck=0.
sudo sed -i 's/repo_gpgcheck=1/repo_gpgcheck=0/g' /etc/yum.repos.d/google-cloud.repo
Check that the setting is updated.
cat /etc/yum.repos.d/google-cloud.repo

[google-compute-engine]
name=Google Compute Engine
baseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-el7-x86_64-stable
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
On some instances that use OS Login, you might receive the following error message after the connection is established:
/usr/bin/id: cannot find name for group ID 123456789
Resolution
Ignore the error message.
Known issues for Windows VM instances
Windows instances activate against the KMS server kms.windows.googlecloud.com and stop functioning if they don't initially authenticate within 30 days. Software activated by the KMS must reactivate every 180 days, but the KMS attempts to reactivate every 7 days. Make sure to configure your Windows instances so that they remain activated.
Windows Server 2025 and Windows 11 24h2 are unable to resume when suspended. Until this issue is resolved, the suspend and resume feature won't be supported for Windows Server 2025 and Windows 11 24h2.
Errors when measuring NTP time drift using w32tm on Windows VMs
For Windows VMs on Compute Engine running VirtIO NICs, there is a known bug where measuring NTP drift produces errors when using the following command:
w32tm /stripchart /computer:metadata.google.internal
The errors appear similar to the following:
Tracking metadata.google.internal [169.254.169.254:123].
The current time is 11/6/2023 6:52:20 PM.
18:52:20, d:+00.0007693s o:+00.0000285s  [ * ]
18:52:22, error: 0x80072733
18:52:24, d:+00.0003550s o:-00.0000754s  [ * ]
18:52:26, error: 0x80072733
18:52:28, d:+00.0003728s o:-00.0000696s  [ * ]
18:52:30, error: 0x80072733
18:52:32, error: 0x80072733
This bug only impacts Compute Engine VMs with VirtIO NICs. VMs that use gVNIC don't encounter this issue.
To avoid this issue, Google recommends using other NTP drift measuring tools, such as the Meinberg Time Server Monitor.
Inaccessible boot device after updating a VM from Gen 1 or 2 to a Gen 3+ VM
Windows Server binds the boot drive to its initial disk interface type upon first startup. To change an existing VM from an older machine series that uses a SCSI disk interface to a newer machine series that uses an NVMe disk interface, perform a Windows PnP driver sysprep before shutting down the VM. This sysprep only prepares device drivers and verifies that all disk interface types are scanned for the boot drive on the next start.
To update the machine series of a VM, do the following:
From a PowerShell prompt running as Administrator, run:
PS C:\> start rundll32.exe sppnp.dll,Sysprep_Generalize_Pnp -wait
If the new VM doesn't start correctly, change the VM back to the original machine type in order to get your VM running again. It should start successfully. Review the migration requirements to verify that you meet them. Then retry the instructions.
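After the sysprep completes, the machine type change itself can be done with gcloud. This is a generic sketch; the instance name, zone, and target machine type are placeholders:

gcloud compute instances stop VM_NAME --zone=ZONE
gcloud compute instances set-machine-type VM_NAME --zone=ZONE --machine-type=c3-standard-8
gcloud compute instances start VM_NAME --zone=ZONE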
Limited bandwidth with gVNIC on Windows VMs
The packaged gVNIC driver on the Windows images offered by Compute Engine can achieve up to 50 Gbps of network bandwidth on Windows instances, for both standard network performance and per VM Tier_1 networking performance. An updated version of the gVNIC driver can deliver line-rate performance (up to 200 Gbps) and support for Jumbo frames.
The updated driver is only available for third generation and later machine series (excluding N4). For more information, see Update the gVNIC version on Windows OS.
Limited disk count attachment for newer VM machine series
VMs running on Microsoft Windows with the NVMe disk interface, which includes T2A and all third-generation VMs, have a disk attachment limit of 16 disks. This limitation does not apply to fourth-generation VMs (C4, M4). To avoid errors, consolidate your Persistent Disk and Hyperdisk storage to a maximum of 16 disks per VM. Local SSD storage is excluded from this issue.
If you want to use NVMe on a VM that uses Microsoft Windows, and the VM was created prior to May 1, 2022, you must update the existing NVMe driver in the Guest OS to use the Microsoft StorNVMe driver.
You must update the NVMe driver on your VM before you change the machine type to a third generation machine series, or before creating a boot disk snapshot that will be used to create new VMs that use a third generation machine series.
Use the following commands to install the StorNVMe driver package and remove the community driver, if it's present in the guest OS.
googet update
googet install google-compute-engine-driver-nvme
Lower performance for Local SSD on Microsoft Windows with C3 and C3D VMs
Local SSD performance is limited for C3 and C3D VMs running Microsoft Windows.
Performance improvements are in progress.
Poor networking throughput when using gVNIC
Windows Server 2022 and Windows 11 VMs that use gVNIC driver GooGet package version 1.0.0@44 or earlier might experience poor networking throughput when using Google Virtual NIC (gVNIC).
To resolve this issue, update the gVNIC driver GooGet package to version 1.0.0@45 or later by doing the following:
Check which driver version is installed on your VM by running the following command from an administrator Command Prompt or PowerShell session:
googet installed
The output looks similar to the following:
Installed packages: ... google-compute-engine-driver-gvnic.x86_64 VERSION_NUMBER ...
If the google-compute-engine-driver-gvnic.x86_64 driver version is 1.0.0@44 or earlier, update the GooGet package repository by running the following command from an administrator Command Prompt or PowerShell session:
googet update
C4 machine types with more than 144 vCPUs and C4D and C3D machine types with more than 180 vCPUs don't support Windows Server 2012 and 2016 OS images. Larger C4, C4D, and C3D machine types that use Windows Server 2012 and 2016 OS images will fail to boot. To work around this issue, select a smaller machine type or use another OS image.
C3D VMs created with 360 vCPUs and Windows OS images will fail to boot. To work around this issue, select a smaller machine type or use another OS image.
C4D VMs created with more than 255 vCPUs and Windows 2025 will fail to boot. To work around this issue, select a smaller machine type or use another OS image.
Generic disk error on Windows Server 2016 and 2012 R2 for M3, C3, C3D, and C4D VMs
Warning: Windows Server 2012 R2 is no longer supported and is not recommended for use. Upgrade to a supported version of Windows Server.
The ability to add or resize a Hyperdisk or Persistent Disk for a running M3, C3, C3D, or C4D VM doesn't work as expected on specific Windows guests at this time. Windows Server 2012 R2 and Windows Server 2016, and their corresponding non-server Windows variants, don't respond correctly to the disk attach and disk resize commands.
For example, removing a disk from a running M3 VM disconnects the disk from a Windows Server instance without the Windows operating system recognizing that the disk is gone. Subsequent writes to the disk return a generic error.
Resolution
You must restart the M3, C3, C3D, or C4D VM running on Windows after modifying a Hyperdisk or Persistent Disk for the disk modifications to be recognized by these guests.