Backup Issue Troubleshooting
Run the gcloud sql operations list command to list all operations for the given Cloud SQL instance.
Look in the logs and filter by text to find the user. You may need to use audit logs for private information. Relevant log files include:
cloudsql.googleapis.com/postgres.log
cloudaudit.googleapis.com/activity may also be available.
If you delete an instance without taking a final backup of the data, then no data recovery is possible. However, if you restore the instance, then Cloud SQL also restores the backups. For more information on recovering a deleted instance, see Recovery backups.
If you have done an export operation, create a new instance and then do an import operation to recreate the database. Exports are written to Cloud Storage and imports are read from there.
Note: Cloud SQL recommends that you take a final backup of your data before you delete the instance. This way, you can recreate any instances that you delete accidentally without contacting Cloud Customer Care.
An automated backup is stuck for many hours and can't be canceled. Backups can take a long time depending on the database size. If you really need to cancel the operation, you can ask customer support to force restart the instance.
If the SQL dump references database users that don't exist yet, create the database users before restoring the SQL dump.
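For example, a minimal sketch (app_user and the password are placeholders, not values from this page):
-- Create each database user that the dump references but that doesn't exist yet.
CREATE ROLE app_user WITH LOGIN PASSWORD 'change-me';
Repeat for every missing user, and then run the restore.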
You want to increase the number of days that you can keep automatic backups from seven to 30 days, or longer. You can configure the number of automated backups to retain, from 1 to 365. Automated backups get pruned regularly based on the retention value configured. Unfortunately, this means that the currently visible backups are the only automated backups you can restore from.
To keep backups indefinitely, you can create an on-demand backup, as they are not deleted in the same way as automated backups. On-demand backups remain indefinitely. That is, they remain until they're deleted or the instance they belong to is deleted. Because that type of backup is not deleted automatically, it can affect billing.
An automated backup failed and you didn't receive an email notification. To have Cloud SQL notify you of the backup's status, configure a log-based alert.
An instance is repeatedly failing because it is cycling between the failure and backup restore states, and attempts to connect to and use the database following the restore fail. Things to try include checking the autovacuum settings to clean up dead connections.
Data in tables created with CREATE UNLOGGED TABLE can be lost when restoring from a backup. Because unlogged tables aren't written to the write-ahead log, these tables are not included in a restore from a backup.
The solution is to avoid using unlogged tables if you want to restore those tables through a backup. If you're restoring from a database that already has unlogged tables, then you can dump the database to a file, and reload the data after modifying the dumped file to run ALTER TABLE ... SET LOGGED on those tables.
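For example, a minimal sketch of finding unlogged tables and making one durable before taking a backup (events is a placeholder table name):
-- List unlogged ordinary tables (relpersistence = 'u').
SELECT relname FROM pg_class WHERE relpersistence = 'u' AND relkind = 'r';
-- Convert a table to a logged (WAL-protected) table so backups include it.
ALTER TABLE events SET LOGGED;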
You can't cancel operation [operation-ID] because this operation isn't in progress.
You're trying to cancel an import or export operation that's already completed, failed, or been canceled. You can only cancel an operation while it's in progress.
Error message: You can't cancel operation [operation-ID] because Cloud SQL doesn't support the cancellation of an [operation-type] operation.
Cloud SQL doesn't support the cancellation of the operation because it has an operation type other than IMPORT or EXPORT.
The [operation-type] operation isn't cancelled. Wait and retry in a few seconds.
Cloud SQL can't cancel the import or export operation at this time. Try again in a few seconds. If the problem persists, contact Google Cloud Support.
Clone Issue Troubleshooting
Cloning fails with a constraints/sql.restrictAuthorizedNetworks error. The cloning operation is blocked by the Authorized Networks configuration. Authorized Networks are configured for public IP addresses in the Connectivity section of the Google Cloud console, and cloning is not permitted due to security considerations.
Remove all Authorized Networks entries from the Cloud SQL instance if you can. Otherwise, create a replica without any Authorized Networks entries.
Failed to create subnetwork. Couldn't find free blocks in allocated IP ranges. Please allocate new ranges for this service provider. Help Token: [help-token-id].
You're trying to use the Google Cloud console to clone an instance with a private IP address, but you didn't specify the allocated IP range that you want to use and the source instance isn't created with the specified range. As a result, the cloned instance is created in a random range.
Use gcloud to clone the instance and provide a value for the --allocated-ip-range-name parameter. For more information, see Cloning an instance with a private IP.
Connect Issue Troubleshooting
Aborted connection. The issue can have several causes; in all cases, applications must tolerate network failures and follow best practices such as connection pooling and retrying. Most connection poolers catch these errors where possible. Otherwise, the application must either retry or fail gracefully.
For connection retry, methods such as exponential backoff combined with randomized backoff (jitter) are recommended. Combining these methods helps reduce throttling.
Certificate verify failed. The client certificates have expired, or the path to the certificates isn't correct.
Regenerate the certificates by recreating them.
FATAL: database 'user' does not exist. gcloud sql connect --user only works with the default postgres user.
Connect with the default user, then change users.
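For example, a minimal psql sketch after connecting with gcloud sql connect --user=postgres (mydb and app_user are placeholders); psql prompts for the target user's password:
-- Switch the session to another database and user from inside psql.
\connect mydb app_user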
You want to find out who is connected. Log into the database and run this command:
SELECT datname, usename, application_name as appname, client_addr, state, now() - backend_start as conn_age, now() - state_change as last_activity_age FROM pg_stat_activity WHERE backend_type = 'client backend' ORDER BY 6 DESC LIMIT 20;
Hostname/IP does not match certificate's altnames: Host: localhost. is not in the cert's altnames.
The host address does not match the address in the server certificate's alternative names.
If you are using Node.js with verify-full or its equivalent, use the DNS name for the servername parameter. The DNS name can be found in the server certificate using openssl. For example: openssl x509 -in server-cert.pem -noout -text | grep 'DNS:'
ssl: {
rejectUnauthorized: true,
ca: fs.readFileSync("/path/to/server/CA"),
servername: 'N-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx.us-central1.sql.goog'
}
Create instances Issue Troubleshooting
Error message: Failed to create subnetwork. Couldn't find free blocks in allocated IP ranges. Please allocate new ranges for this service provider.
There are no more available addresses in the allocated IP range. There can be several possible scenarios in which this happens.
To resolve this issue, you can either expand the existing allocated IP range or allocate an additional IP range to the private service connection. For more information, see Allocate an IP address range.
If you used the --allocated-ip-range-name flag while creating the Cloud SQL instance, you may only expand the specified IP range.
If you're allocating a new range, take care that the allocation doesn't overlap with any existing allocations.
After creating a new IP range, update the VPC peering with the following command:
gcloud services vpc-peerings update \ --service=servicenetworking.googleapis.com \ --ranges=OLD_RESERVED_RANGE_NAME,NEW_RESERVED_RANGE_NAME \ --network=VPC_NETWORK \ --project=PROJECT_ID \ --force
If you're expanding an existing allocation, take care to increase only the allocation range and not decrease it. For example, if the original allocation was 10.0.10.0/24, then make the new allocation at least 10.0.10.0/23.
In general, if starting from a /24 allocation, decrementing the /mask by 1 for each condition (additional instance type group, additional region) is a good rule of thumb. For example, if trying to create both instance type groups on the same allocation, going from /24 to /23 is enough.
After expanding an existing IP range, update the VPC peering with the following command:
gcloud services vpc-peerings update \ --service=servicenetworking.googleapis.com \ --ranges=RESERVED_RANGE_NAME \ --network=VPC_NETWORK \ --project=PROJECT_ID
Error message: Failed to create subnetwork. Router status is temporarily unavailable. Please try again later. Help Token: [token-ID].
Try to create the Cloud SQL instance again.
Error message: HTTPError 400: Invalid request: Incorrect Service Networking config for instance: PROJECT_ID:INSTANCE_NAME:SERVICE_NETWORKING_NOT_ENABLED.
Enable the Service Networking API using the following command and try to create the Cloud SQL instance again.
gcloud services enable servicenetworking.googleapis.com \ --project=PROJECT_ID
Error message: Failed to create subnetwork. Required 'compute.projects.get' permission for PROJECT_ID.
When you create an instance with a private IP address, a service account is created just-in-time using the Service Networking API. If you have only recently enabled the Service Networking API, then the service account might not get created and the instance creation fails. In this case, you must wait for the service account to propagate throughout the system or manually add it with the required permissions.
Error message: More than 3 subject alternative names are not allowed.
You're trying to use a custom SAN to add more than three DNS names to the server certificate of a Cloud SQL instance. You can't add more than three DNS names to the instance.
Error message: Subject alternative names %s is too long. The maximum length is 253 characters.
Make sure that any DNS names that you want to add to the server certificate of a Cloud SQL instance don't have more than 253 characters.
Error message: Subject alternative name %s is invalid.
Verify that the DNS names that you want to add to the server certificate of a Cloud SQL instance are valid DNS names.
Export Issue Troubleshooting
HTTP Error 409: Operation failed because another operation was already in progress.
There is already a pending operation for your instance. Only one operation is allowed at a time. Try your request after the current operation is complete.
HTTP Error 403: The service account does not have the required permissions for the bucket.
Ensure that the bucket exists and the service account for the Cloud SQL instance (which is performing the export) has the Storage Object Creator role (roles/storage.objectCreator) to allow export to the bucket. See IAM roles for Cloud Storage.
CSV export worked but SQL export failed. CSV and SQL formats export differently. The SQL format exports the entire database, and likely takes longer to complete. The CSV format lets you define which elements of the database to include in the export.
Use CSV exports to export only what you need.
Export is taking too long. Cloud SQL does not support concurrent synchronous operations. Use export offloading. At a high level, in export offloading, instead of issuing an export on the source instance, Cloud SQL spins up an offload instance to perform the export. Export offloading has several advantages, including increased performance on the source instance and the unblocking of administrative operations while the export is running. With export offloading, total latency can increase by the amount of time it takes to bring up the offload instance. Generally, for reasonably sized exports, latency is not significant. However, if your export is small enough, then you may notice the increase in latency.
Create Extension error. The dump file contains references to an unsupported extension. Edit the dump file to remove the references.
Error using pg_dumpall. Using the pg_dumpall utility with the --globals-only flag requires the superuser role, but this role isn't supported in Cloud SQL for PostgreSQL. To prevent errors from occurring while performing export operations that include user names, also use the --no-role-passwords flag.
The export operation times out before exporting anything, and you see the error message Could not receive data from client: Connection reset by peer.
If Cloud Storage does not receive any data within a certain time frame, typically around seven minutes, the connection resets. It's possible the initial export query is taking too long to run.
Do a manual export using the pg_dump tool.
You could build your own automated export system using Google Cloud products such as Cloud Scheduler, Pub/Sub, and Cloud Run functions, similar to this article on automating backups.
Flags Issue Troubleshooting
You set the time zone for a session, but it expires when you log off. Connect to the database and set the database time zone to the one you want, either per user or per database.
In Cloud SQL for PostgreSQL, you can specify the following settings. These settings remain after a session is closed, mimicking a .conf configuration:
ALTER DATABASE dbname SET TIMEZONE TO 'timezone'; ALTER USER username SET TIMEZONE TO 'timezone';
These settings apply only to new connections to the database. To see the change to the time zone, disconnect from the instance and then reconnect to it.
High availability Issue Troubleshooting
You can't find the metrics for a manual failover. Only automatic failovers go into the metrics.
Cloud SQL instance resources (CPU and RAM) are near 100% usage, causing the high availability instance to go down. The instance machine size is too small for the load. Edit the instance to upgrade to a larger machine size to get more CPUs and memory.
Import Issue Troubleshooting
Error message: permission denied for schema public
For PostgreSQL versions 15 and later, if the target database is created from template0, then importing data might fail. To resolve this issue, provide public schema privileges to the cloudsqlsuperuser user by running the GRANT ALL ON SCHEMA public TO cloudsqlsuperuser SQL command.
HTTP Error 409: Operation failed because another operation was already in progress.
There is already a pending operation for your instance. Only one operation is allowed at a time. Try your request after the current operation is complete.
The import operation is taking too long. Too many active connections can interfere with import operations.
Close unused operations. Check the CPU and memory usage of your Cloud SQL instance to make sure there are plenty of resources available. The best way to ensure maximum resources for the import is to restart the instance before beginning the operation. A restart closes all connections and ends any tasks that may be consuming resources.
If the dump file contains database users that don't already exist, create the database users before importing.
After importing data, the size of your data disk usage is much higher. There may be unexpected disk usage after importing data because of the use of point-in-time recovery.
To resolve this, after you import data, disable point-in-time recovery if you want to delete logs and recover storage. Keep in mind that decreasing the storage used doesn't shrink the size of the storage provisioned for the instance.
Error message: GRANT stderr: ERROR: must be member of role ROLE_NAME
This error message appears if you try to import a SQL dump file that's uploaded in Cloud Storage to a Cloud SQL database, and the import job has run for about four days.
ROLE_NAME is a custom database role defined in the source PostgreSQL database. The default cloudsqlsuperuser user imports the SQL dump file. However, this user might not belong to the ROLE_NAME role.
To resolve this issue, complete the following steps:
Don't use the cloudsqlsuperuser user to import the file. Instead, in the destination database, specify a user who's a member of the ROLE_NAME role. To specify the user, run the following command:
gcloud sql import sql INSTANCE URI [--async] [--database=DATABASE, -d DATABASE] [--user=USER] [GCLOUD_WIDE_FLAG …]
Integrate with Vertex AI Issue Troubleshooting
Error message: Google ML integration API is supported only on Postgres version 12 or above.
To enable the Vertex AI integration in Cloud SQL, you must have a Cloud SQL for PostgreSQL database, version 12 or later. To upgrade your database to this version, see Upgrade the database major version in-place. Error message: Google ML Integration API is not supported on shared core instance. Please upsize your machine type.
If you selected a shared core for the machine type of your instance, then you can't enable the Vertex AI integration in Cloud SQL. Upgrade your machine type to dedicated core. For more information, see Machine Type. Error message: Google ML Integration is unsupported for this maintenance version. Please follow https://cloud.google.com/sql/docs/postgres/self-service-maintenance to update the maintenance version of the instance.
To enable the Vertex AI integration in Cloud SQL, the maintenance version of your instance must be R20240130 or later. To upgrade your instance to this version, see Self-service maintenance.
Error message: Cannot invoke ml_predict_row if 'cloudsql.enable_google_ml_integration' is off.
The cloudsql.enable_google_ml_integration database flag is turned off, so Cloud SQL can't integrate with Vertex AI.
To turn this flag on, use the gcloud sql instances patch command:
gcloud sql instances patch INSTANCE_NAME --database-flags cloudsql.enable_google_ml_integration=on
Replace INSTANCE_NAME with the name of the primary Cloud SQL instance.
Error message: Failed to connect to remote host: Connection refused.
The integration between Cloud SQL and Vertex AI isn't enabled. To enable this integration, use the gcloud sql instances patch command:
gcloud sql instances patch INSTANCE_NAME \ --enable-google-ml-integration
Replace INSTANCE_NAME with the name of the primary Cloud SQL instance.
Error message: Vertex AI API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting /apis/api/aiplatform.googleapis.com/overview?project=PROJECT_ID then retry.
The Vertex AI API isn't enabled. For more information on enabling this API, see Enable database integration with Vertex AI. Error message: Permission 'aiplatform.endpoints.predict' denied on resource.
Vertex AI permissions aren't added to the Cloud SQL service account for the project where the Cloud SQL instance is located. For more information on adding these permissions to the service account, see Enable database integration with Vertex AI. Error message: Publisher Model `projects/PROJECT_ID/locations/REGION_NAME/publishers/google/models/MODEL_NAME` not found.
The machine learning model or the LLM doesn't exist in Vertex AI. Error message: Resource exhausted: grpc: received message larger than max.
The size of the request that Cloud SQL passes to Vertex AI exceeds the gRPC limit of 4 MB per request. Error message: Cloud SQL attempts to send a request to Vertex AI. However, the instance is in the %s region, but the Vertex AI endpoint is in the %s region. Make sure the instance and endpoint are in the same region.
Cloud SQL attempts to send a request to Vertex AI. However, the instance is in one region, but the Vertex AI endpoint is in a different region. To resolve this issue, both the instance and endpoint must be in the same region. Error message: The Vertex AI endpoint isn't formatted properly.
The Vertex AI endpoint isn't formatted properly. For more information, see Use private endpoints for online prediction. Error message: Quota exceeded for aiplatform.googleapis.com/online_prediction_requests_per_base_model with base model: textembedding-gecko.
The number of requests that Cloud SQL passes to Vertex AI exceeds the limit of 1,500 requests per minute per region per model per project.
Logging Issue Troubleshooting
Logging uses a lot of CPU and memory on your Cloud SQL instance. Logging needs to be tuned.
The log_statement flag can be set to none and the logging_collector flag can be set to off. If logging is still occurring, there may be other log-related flags that can be tuned. You can edit the instance to modify these flags.
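For example, a quick way to check the current values of these flags from a database session before editing the instance (standard PostgreSQL; the list of names is just a starting point):
-- Inspect the logging-related settings currently in effect.
SELECT name, setting FROM pg_settings WHERE name IN ('log_statement', 'logging_collector', 'log_min_duration_statement');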
For example, a user was deleted but you can't find out who did it. The logs show the operation started but don't provide any more information. You must enable audit logging for detailed and personally identifiable information (PII) like this to be logged.
Log files are hard to read. You'd rather view the logs as JSON or text. You can use the gcloud logging read command along with Linux post-processing commands to download the logs.
To download the logs as JSON:
gcloud logging read \ "resource.type=cloudsql_database \ AND logName=projects/PROJECT_ID \ /logs/cloudsql.googleapis.com%2FLOG_NAME" \ --format json \ --project=PROJECT_ID \ --freshness="1d" \ > downloaded-log.json
To download the logs as TEXT:
gcloud logging read \ "resource.type=cloudsql_database \ AND logName=projects/PROJECT_ID \ /logs/cloudsql.googleapis.com%2FLOG_NAME" \ --format json \ --project=PROJECT_ID \ --freshness="1d" \ --order=asc \ | jq -rnc --stream 'fromstream(1|truncate_stream(inputs)) | .textPayload' > downloaded-log.txt
Query logs are not found in PostgreSQL logs. You need to enable the pgaudit flags.
gcloud sql connect INSTANCE_NAME
CREATE EXTENSION pgaudit;
gcloud sql instances patch INSTANCE_NAME \ --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all
Manage instances Issue Troubleshooting
You want to find out what queries are running now. Connect to the database and run this query:
SELECT datname, usename, application_name as appname, client_addr, state, now() - backend_start as conn_age, now() - xact_start as xact_age, now() - query_start as query_age, now() - state_change as last_activity_age, wait_event_type, wait_event, query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY 8 DESC LIMIT 20;
To check the value of a specific database setting (replace FIELD_NAME):
SELECT name, setting, unit FROM pg_settings WHERE name = 'FIELD_NAME';
To check the current value of a setting (replace SETTING_NAME):
SHOW SETTING_NAME;
Run SHOW ALL; to see all settings.
To cancel or terminate queries run by other users, your user needs the pg_signal_backend role.
Run the following commands:
Grant the role to your user:
GRANT pg_signal_backend TO USERNAME;
Find the process ID (pid) of the query or connection:
SELECT pid, usename, state, query FROM pg_stat_activity;
Cancel the running query:
SELECT pg_cancel_backend(pid) FROM pg_stat_activity WHERE usename = 'USERNAME';
Terminate the connection if cancellation isn't enough:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE usename = 'USERNAME';
The autovacuum job might be blocked, or might not be reclaiming the transaction IDs fast enough to keep up with the workload.
To avoid any outages due to a transaction ID wraparound problem, you can review these self-service tips for dealing with TXID wraparound.
For general tuning advice, see Optimizing, monitoring, and troubleshooting vacuum operations in PostgreSQL.
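As a quick check (standard PostgreSQL, not specific to Cloud SQL), the following query shows how close each database is to transaction ID wraparound; values approaching roughly two billion indicate imminent risk:
-- Age of the oldest unfrozen transaction ID per database.
SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY xid_age DESC;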
Temporary storage increased automatic storage. Automatic storage is enabled. A restart deletes the temporary files but doesn't reduce the storage. Only customer support can reset the instance size.
Data is being automatically deleted. Most likely a script is running somewhere in your environment. Look in the logs around the time of the deletion and see if there's a rogue script running from a dashboard or another automated process.
The instance cannot be deleted. You might see the error message ERROR: (gcloud.sql.instances.delete) HTTP Error 409: The instance or operation is not in an appropriate state to handle the request, or the instance may have an INSTANCE_RISKY_FLAG_CONFIG flag status.
Some possible explanations include another operation already in progress on the instance, or the INSTANCE_RISKY_FLAG_CONFIG warning, which is triggered whenever at least one beta flag is being used. Remove the risky flag settings and restart the instance.
Unfortunately, you can't shrink the ibtmp1 file by any method other than restarting the service.
One mitigation option is to create the temporary table with ROW_FORMAT=COMPRESSED, so it is stored in file-per-table tablespaces in the temporary file directory. However, the downside is performance costs associated with creating and removing a file-per-table tablespace for each temporary table.
If your instance runs out of storage, and the automatic storage increase capability isn't enabled, your instance goes offline. To avoid this issue, you can edit the instance to enable automatic storage increase.
Your on-premises primary instance is stuck. Google Cloud can't help with instances that are not in Cloud SQL.
Slow shutdown on restart. When an instance shuts down, any outstanding connections that don't end within 60 seconds make the shutdown unclean. By having connections that last less than 60 seconds, most unclean shutdowns can be avoided, including connections from the database command prompt. If you keep these connections open for hours or days, shutdowns can be unclean.
A user cannot be deleted. The user probably has objects in the database that depend on it. You need to drop those objects or reassign them to another user. Find out which objects are dependent on the user, then drop or reassign those objects to a different user. This thread on Stack Exchange discusses how to find the objects owned by the user; a minimal sketch of reassigning and dropping them follows.
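For example, a minimal sketch, assuming the role to remove is old_user and its objects should go to new_owner (both placeholders); run it in each database where old_user owns objects:
-- Transfer ownership of everything old_user owns, revoke its remaining
-- privileges, then drop the role itself.
REASSIGN OWNED BY old_user TO new_owner;
DROP OWNED BY old_user;
DROP ROLE old_user;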
Particular queries are running slow. Queries can be slow for many reasons, mostly due to specific database aspects. One reason that can involve Cloud SQL is network latency, when the source (writer or reader) resource and the destination (Cloud SQL) resource are in different regions. Refer to general performance tips in particular.
For slow database inserts, updates, or deletes, check the locations of the writer and the database; sending data over a long distance introduces latency. To reduce the latency, the recommendation is to locate both the source and destination resources in the same region.
Out of memory is indicated but monitoring charts don't show that. An instance can fail and report Out of memory, but the Google Cloud console or Cloud Monitoring charts seem to show there's still memory remaining.
There are other factors beside your workload that can impact memory usage, such as the number of active connections and internal overhead processes. These aren't always reflected in the monitoring charts.
Ensure that the instance has enough overhead to account for your workload plus some additional overhead.
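As one quick check (a sketch using standard PostgreSQL, not a Cloud SQL-specific tool), counting connections by state can reveal idle backends that consume memory without showing up as workload:
-- Count connections by state to spot idle backends holding memory.
SELECT state, count(*) FROM pg_stat_activity GROUP BY state ORDER BY count(*) DESC;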
Recovering a deleted instance. All data on an instance, including backups, is permanently lost when that instance is deleted. To preserve your data, export it to Cloud Storage before you delete an instance.
The Cloud SQL Admin role includes the permission to delete the instance. To prevent accidental deletion, grant this role only as needed.
You want to rename an existing Cloud SQL instance. Renaming an existing instance is not supported. There are other ways to accomplish the goal by creating a new instance: you can clone the instance under the new name, or export the data and import it into a new instance with the name you want.
In both cases, you can delete your old instance after the operation is done. We recommend going with the cloning route since it has no impact on performance and doesn't require you to redo any instance configuration settings such as flags, machine type, storage size, and memory.
Error when deleting an instance. If deletion protection is enabled for an instance, confirm your plans to delete the instance. Then disable deletion protection before deleting the instance.
Private Service Connect Issue Troubleshooting
The service attachment of the instance doesn't accept the Private Service Connect endpoint. To check the status, use the gcloud compute forwarding-rules describe command.
gcloud compute forwarding-rules describe ENDPOINT_NAME \ --project=PROJECT_ID \ --region=REGION_NAME \ | grep pscConnectionStatus
Make the following replacements: ENDPOINT_NAME with the name of the endpoint, PROJECT_ID with the ID of your Google Cloud project, and REGION_NAME with the name of the region.
To use the REST API instead, make the same replacements before using any of the request data.
HTTP method and URL:
GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME/forwardingRules/ENDPOINT_NAME
To send your request, use one of the following options:
curl (Linux, macOS, or Cloud Shell)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME/forwardingRules/ENDPOINT_NAME"
PowerShell (Windows)
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME/forwardingRules/ENDPOINT_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "kind": "compute#forwardingRule", "id": "ENDPOINT_ID", "creationTimestamp": "2024-05-09T12:03:21.383-07:00", "name": "ENDPOINT_NAME", "region": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME", "IPAddress": "IP_ADDRESS", "target": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME/serviceAttachments/SERVICE_ATTACHMENT_NAME", "selfLink": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_NAME/forwardingRules/ENDPOINT_NAME", "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/default", "serviceDirectoryRegistrations": [ { "namespace": "goog-psc-default" } ], "networkTier": "PREMIUM", "labelFingerprint": "LABEL_FINGERPRINT_ID", "fingerprint": "FINGERPRINT_ID", "pscConnectionId": "CONNECTION_ID", "pscConnectionStatus": "ACCEPTED", "allowPscGlobalAccess": true }
Verify that the status of the connection is ACCEPTED. If the status is PENDING, then the instance isn't allowing the Google Cloud project that contains the endpoint. Make sure that the network project in which the endpoint is created is allowed. For more information, see Edit an instance with Private Service Connect enabled.
ERROR: (gcloud.compute.forwarding-rules.create) Could not fetch resource: The resource 'projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME' was not found
This error message can occur when reserving a static internal IP address for the Private Service Connect endpoint. Make sure the subnet specified exists in the project specified by the URI. If you want to create an endpoint in a service project but use a subnet from a Shared VPC network, you need to specify the subnet by its URI and use the host project's project ID in the URI. For more information, see Create the endpoint manually. ERROR: (gcloud.compute.forwarding-rules.create) Could not fetch resource: - The resource 'projects/PROJECT_ID/global/networks/NETWORK_NAME' was not found
This error message can occur when you create a Private Service Connect endpoint manually. Make sure the network specified exists in the project specified by the URI. If you want to create an endpoint in a service project but use a Shared VPC network, you need to specify the network by its URI and use the host project's project ID in the URI. For more information, see Create the endpoint manually.
Replication Issue Troubleshooting
Read replica didn't start replicating on creation. There's probably a more specific error in the log files. Inspect the logs in Cloud Logging to find the actual error.
Unable to create read replica - invalidFlagValue error. One of the flags in the request is invalid. It could be a flag you provided explicitly or one that was set to a default value.
First, check that the value of the max_connections flag is greater than or equal to the value on the primary.
If the max_connections flag is set appropriately, inspect the logs in Cloud Logging to find the actual error.
If the error is: set Service Networking service account as servicenetworking.serviceAgent role on consumer project, then disable and re-enable the Service Networking API. This action creates the service account necessary to continue with the process.
Unused replication slots can be identified by querying the pg_replication_slots system view and filtering on the active column. Unused slots can be dropped to remove WAL segments using the pg_drop_replication_slot command.
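For example, a minimal sketch (unused_slot_name is a placeholder taken from the first query's output):
-- List replication slots that are not currently active.
SELECT slot_name, slot_type, active FROM pg_replication_slots WHERE NOT active;
-- Drop an unused slot so its retained WAL segments can be removed.
SELECT pg_drop_replication_slot('unused_slot_name');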
The replica instance is using too much memory. The replica uses temporary memory to cache often-requested read operations, which can lead it to use more memory than the primary instance.
Restart the replica instance to reclaim the temporary memory space.
Replication stopped. The maximum storage limit was reached and automatic storage increase isn't enabled. Edit the instance to enable automatic storage increase.
Queries like DELETE ... WHERE field < 50000000 cause replication lag with row-based replication, since a huge number of updates are piled up on the replica. Some possible solutions include breaking the change into smaller batches or reducing the write load on the primary.
If you must use hash indexes, upgrade to PostgreSQL 10+. Otherwise, if you also want to use replicas, don't use hash indexes in PostgreSQL 9.6.
Query on the primary instance is always running. After creating a replica, the query SELECT * from pg_stat_activity where state = 'active' and pid = XXXX and username = 'cloudsqlreplica' is expected to run continuously on your primary instance.
Replica creation fails with timeout. Long-running uncommitted transactions on the primary instance can cause read replica creation to fail.
Recreate the replica after stopping all running queries.
If the primary instance and the replica have different vCPU sizes, then there might be query performance issues because the query optimizer takes vCPU sizes into account. To resolve this issue, complete the following steps:
Set the log_statement parameter to ddl. This provides you with both the queries and the run time on the database. However, depending on your workload, this might cause performance issues.
Run explain analyze for the queries.
If this is a specific query, then modify the query. For example, you can change the order of the joins to see if you get better performance.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-09 UTC.