This page provides best practices for getting the best performance, durability, and availability from Cloud SQL.
If issues occur with your Cloud SQL instance, review the following during troubleshooting:
Instance configuration and administration

- Read and follow the operational guidelines to ensure that your instances are covered by the Cloud SQL SLA.
- Configure a maintenance window for your primary instance to control when disruptive updates can occur. See Maintenance window.
- If you delete and recreate instances regularly, use a timestamp in the instance ID to increase the likelihood that new instance IDs are usable.
- Don't start an administrative operation before the previous operation has completed. Cloud SQL instances won't accept new operation requests until they have completed the previous operation. If you attempt to start a new operation prematurely, the operation request fails. This includes instance restarts.
The instance status in the Google Cloud console does not reflect whether an operation is running. The green check mark denotes only that the instance is in the RUNNABLE
state. To see whether an operation is running, go to the Operations tab and check the status of the most recent operation.
If the automatic storage increase instance setting is disabled, or if the automatic storage increase limit is set, make sure that you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform.
To get alerted when available disk space falls below 20%, create a metrics-based alerting policy for the disk utilization metric with an above-threshold condition and a value of 0.8. For more information, see Create metrics-based alert policies.
Prevent over-utilization of your CPU. You can view the percentage of available CPU that your instance is using on the instance details page in the Google Cloud console. For more information, see Metrics. You can also monitor your CPU usage and receive alerts at a specified threshold; see Create metric-threshold alerting policies.
To avoid over-utilization, you can increase the number of CPUs for your instance. Changing CPUs requires an instance restart. If your instance is already at the maximum number of CPUs, you must shard your database across multiple instances.
Avoid memory exhaustion. When looking for signs of memory exhaustion, primarily use the usage metric. To avoid out-of-memory errors, we recommend that this metric remain below 90%.
You can also use the total_usage metric to observe the percentage of available memory that your Cloud SQL instance is using, including memory used by the database container and memory allocated by the operating system cache.
By observing the difference between the two metrics, you can identify how much memory is used by processes versus how much is used by the operating system cache. You can repurpose the memory in this cache.
To predict out-of-memory issues, check both metrics and interpret them together. If the metrics appear high, the instance might be low in memory. This can be because of a custom configuration, the instance being undersized for the workload, or a combination of these factors.
Scale your Cloud SQL instance to increase the size of its memory. Changing the instance's memory size requires an instance restart. If your instance is already at the maximum memory size, you must shard your database across multiple instances. To learn more about monitoring both metrics in the Google Cloud console, see Metrics.
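If you have direct query access, you can also inspect memory consumers from inside the engine. The following is a minimal sketch using the standard sys.dm_os_memory_clerks DMV; it assumes your database user has been granted VIEW SERVER STATE:

-- List the top memory consumers (clerks) inside the SQL Server engine.
SELECT TOP (10)
    [type]               AS clerk_type,
    SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC;

A large MEMORYCLERK_SQLBUFFERPOOL entry is normal; unexpected growth in other clerks can point to the component that is consuming memory.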
Set SQL Server settings so that they work optimally for Cloud SQL. See SQL Server settings.
Tune the instance optimally for test runs. The following table lists configuration values suitable for test runs.

Configuration                    Value
vCPUs                            40
Memory                           262144 MB
MAXDOP                           8
Cost threshold for parallelism   120
tempdb files                     8. Pre-sized to prevent autogrowth.
User database files              64-128 MB. Pre-sized to prevent autogrowth.
Storage                          >= 4 TB for the best IOPS

Test a variety of I/O types and sizes. The size of the I/O issued to the persistent disk storage coming from SQL Server affects IOPS and throughput. The SQL Server workload is throttled when it reaches the IOPS limit or the throughput limit. The storage type used in Cloud SQL is SSD persistent disk, which is suitable for high-performance, enterprise-level workloads.
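To see what I/O your workload actually issues, you can query the standard sys.dm_io_virtual_file_stats function. This is a sketch, not part of the official guidance; the average sizes it derives indicate whether your workload is dominated by small random I/O or large sequential I/O:

-- Per-file I/O volume and average I/O size since instance startup.
SELECT
    DB_NAME(vfs.database_id)                                 AS database_name,
    mf.type_desc                                             AS file_type,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.num_of_bytes_read    / NULLIF(vfs.num_of_reads, 0)  AS avg_read_bytes,
    vfs.num_of_bytes_written / NULLIF(vfs.num_of_writes, 0) AS avg_write_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;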
Customize the VM to maximize performance as follows:
- Set autogrow in MB rather than as a percentage, using increments appropriate to the requirement. Also, proactively manage growth before autogrowth kicks in.
- Ensure that the Cloud SQL enable automatic storage increases feature is enabled so that Cloud SQL can add storage space if the database and the instance run out of space.
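As a hedged sketch of setting fixed-MB growth on a user database (the database and logical file names here are hypothetical; list yours with SELECT name, type_desc FROM sys.database_files):

-- Replace percentage-based growth with fixed MB increments.
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_data', FILEGROWTH = 128MB);
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 64MB);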
Detect database integrity issues by running DBCC CHECKDB at least once a week. DBCC CHECKDB checks the integrity of all the objects in a database. By running DBCC CHECKDB weekly, you can ensure that your databases aren't corrupt.

DBCC CHECKDB is a resource-intensive operation that can impact your instance's performance, so avoid running DBCC CHECKDB directly on a production server. Instead, use one of the following options to run DBCC CHECKDB:

- Create a clone of your instance and run DBCC CHECKDB on the clone database.
- Restore a backup of your instance to another instance and run DBCC CHECKDB on the restored instance's databases. For more information about restoring an instance, see Restore an instance.

Use the following code snippets to run DBCC CHECKDB on a database:

- DBCC CHECKDB with EXTENDED_LOGICAL_CHECKS. This is a comprehensive but more resource-intensive check:
USE DATABASE_NAME
DBCC CHECKDB WITH EXTENDED_LOGICAL_CHECKS, DATA_PURITY, NO_INFOMSGS, ALL_ERRORMSGS
- DBCC CHECKDB with PHYSICAL_ONLY, which limits checking to the physical integrity of database structures:
USE DATABASE_NAME
DBCC CHECKDB WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS
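You can also name the database explicitly instead of relying on USE, and then check whether the engine has recorded any damaged pages. A small sketch (the database name is hypothetical):

-- Check a specific database, then review any pages flagged as suspect.
DBCC CHECKDB (N'MyDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS, ALL_ERRORMSGS;
SELECT * FROM msdb.dbo.suspect_pages;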
To keep your data safe and your applications running smoothly with In-Memory OLTP tables, consider applying the following best practices:
- Adhere to Microsoft best practices for memory-optimized tables. For more information about limitations, see unsupported SQL Server features for In-Memory OLTP.
Security

- Prefer private IP. Unless public IP access is required, prefer using private IP. This helps minimize unauthorized network connections to your database.
- Avoid 0.0.0.0/0 in authorized networks. Including 0.0.0.0/0 in authorized networks permits access from the entire internet without restriction.
- Avoid excessively large authorized networks. Avoid using small CIDR prefixes in authorized networks, because they permit access from a potentially excessive number of hosts. We recommend a CIDR prefix no smaller than /16, and preferably greater than /19.

Data architecture

- Split your large instances into smaller instances, where possible. When possible, using many smaller Cloud SQL instances is better than one large instance. Managing a large, monolithic instance presents challenges not posed by a group of smaller instances.
- Don't use too many database tables. Keep your instance's table count to fewer than 10,000. Too many database tables can impact database upgrade time.
Database collation

Whether you're installing a new instance of SQL Server, restoring a database backup, or connecting a server to client databases, it's important to understand the locale requirements, sorting order, and case and accent sensitivity of the data that you're working with. When you select a collation for your server, database, column, or expression, you're assigning certain characteristics to your data. These characteristics affect the results of many operations in the database. For example, when you construct a query by using ORDER BY, the sort order of your result set might depend on the collation that's applied to the database or dictated in a COLLATE clause at the expression level of the query. Read more about database collations and Unicode support.

Query design

For optimal database or query performance, make sure that you are not using a large number of tables (sixteen or more) within the same query.

Query monitoring

Queries might degrade over time. It's important to monitor your application and query performance over time. One reason for such degradation is hash bailouts.
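To make the collation behavior described under Database collation concrete, here is a small illustration. It assumes a case-insensitive server default collation, which is common but not universal:

-- With a case-insensitive collation, these strings compare as equal.
SELECT CASE WHEN 'Apple' = 'apple' THEN 'equal' ELSE 'not equal' END AS default_comparison;

-- A COLLATE clause at the expression level overrides that behavior.
SELECT CASE WHEN 'Apple' = 'apple' COLLATE Latin1_General_CS_AS
            THEN 'equal' ELSE 'not equal' END AS case_sensitive_comparison;

-- The same clause changes ORDER BY results.
SELECT name FROM sys.databases ORDER BY name COLLATE Latin1_General_CS_AS;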
Backups and exports are ways to provide data redundancy and protection. They each protect against different scenarios and complement each other in a robust data protection strategy.
Backups are lightweight; they provide a way to restore the data on your instance to its state at the time you took the backup. However, backups have some limitations. If you delete the instance, the backups are also deleted. You can't back up a single database or table. And if the region where the instance is located is unavailable, you cannot restore the instance from a backup, even in an available region.
Exports take longer to create because an external file is created in Cloud Storage, which can be used to recreate your data. Exports are unaffected if you delete the instance. In addition, you can export a single database or even a single table, depending on the export format you choose.
When using the export backup feature on an Enterprise or Standard SQL Server instance, avoid creating a GZ archive file, because it attempts to compress a backup that SQL Server has already compressed natively.
Protect your instance and backups from accidental deletion. A Cloud SQL instance that you create in the Google Cloud console or via Terraform enables accidental deletion prevention by default.
Use the export feature in Cloud SQL to export your data for additional protection. Use Cloud Scheduler with the REST API to automate export management. For more advanced scenarios, use Cloud Scheduler with Cloud Run functions for automation.
SQL Server settings

Some SQL Server settings are recommended for Cloud SQL. The following topics describe some recommendations.
Global configuration settings

max worker threads
Retain the default value of 0. This setting defines the number of threads available to SQL Server based on the number of CPUs. The value is automatically calculated by the SQL Server engine at startup.

max server memory (mb)
This flag limits the amount of memory that Cloud SQL can allocate for its internal pools.
We recommend you let Cloud SQL manage the value of this flag. If you must manually manage this value, use the formula outlined later in this section.
If you don't set a value for this flag, then Cloud SQL manages the value automatically, based on the size of the RAM for your instance. If you don't set a value for the flag and you resize your instance, then Cloud SQL adjusts the value of the flag automatically to meet our recommendations for the new instance size. This resize operation also removes any manually set value for this flag. This helps your database utilize resources more effectively by helping prevent overallocation, reducing the likelihood of a crash due to out-of-memory issues, and helping to avoid performance degradation for your instance.
If you prefer to manage the value of this flag, then set it manually; Cloud SQL deactivates automatic management as a result. If you resize the instance, the manually entered value is discarded and automatic flag management is reactivated. If, after the resize, you want manual control again, enter a new value, and consider revisiting the value to match the recommendations for the new instance size.
If you must manually manage the flag's value, we recommend the following formula for setting the max server memory (mb) database flag. From the total RAM of the instance, reserve:

- 1.4 GB of RAM for the operating system
- 1 GB of RAM for every 4 GB of RAM, up to the first 16 GB of RAM
- 1 GB of RAM for every 8 GB of RAM above 16 GB of RAM

Set the flag to the memory that remains, converted to MB.

For example, if the RAM for your instance is 104 GB (106496 MB), then reserve:

- 1.4 GB for the operating system
- 4 GB for the first 16 GB of RAM
- 11 GB for the remaining 88 GB of RAM (88 / 8)

For this example, you reserve 16.4 GB of memory in total. As a result, for the value of this flag, specify 89702 MB: (104 - 16.4) * 1024 = 89702.
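To confirm the value that is actually in effect, you can read it back from the engine; sys.configurations is a standard catalog view:

-- 'value' is the configured value; 'value_in_use' is the running value.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';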
The following table has recommended values and percentages of total RAM for some popular virtual machine (VM) tiers:
Instance tier (MB)   max server memory (mb)   % of total
3840                 1440                     37
4096                 1632                     39
5792                 2912                     50
8192                 4704                     57
11584                7248                     62
16384                10848                    66
23168                16800                    72
32768                25200                    76
46336                37072                    80
65568                53888                    82
92704                77648                    83
131136               111248                   84
185440               158784                   85
262272               226000                   86
370880               321056                   86
524544               455488                   86
741792               645600                   87
To monitor the memory usage for your instance, use the following metrics:
database/memory/usage
database/sqlserver/memory/buffer_cache_hit_ratio
database/sqlserver/memory/memory_grants_pending
database/sqlserver/memory/page_life_expectancy
For more information, see Monitor Cloud SQL instances.
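The same signals are visible from inside the engine through the standard sys.dm_os_performance_counters view, which can be useful for cross-checking the Cloud Monitoring metrics. A minimal sketch:

-- Page life expectancy (higher is better) and pending memory grants
-- (sustained nonzero values indicate memory pressure).
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
   OR (object_name LIKE '%Memory Manager%' AND counter_name = 'Memory Grants Pending');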
Database settings to modify

For optimal performance of the SQL Server database, set the following SQL Server settings as suggested below.
cost threshold for parallelism
This is the threshold at which the SQL optimizer executes a query using parallelism. The default value of 5
can cause too many queries to run in parallel, thereby increasing database wait time on parallel threads. To reduce this type of contention, increase the value.
The value is ignored when maxdop
is set to 1
.
max degree of parallelism (MAXDOP)
To reduce database waits due to parallelism, adjust this value based on specific recommendations about the number of logical processors available. Measure performance carefully if setting this option to 1.
optimize for ad hoc workloads
Avoid having a large number of single-use plans in the plan cache. To improve the efficiency of the plan cache for workloads that contain many single use ad hoc batches, set this option to 1
.
tempdb
Pre-size tempdb
so that it does not need to autogrow. All files in tempdb
should be equally sized and have the same file growth set.
The database wait type for tempdb
contention appears as PAGELATCH_UP
. To reduce the contention, add more files.
If the number of processors is less than or equal to 8, use the same number of files as logical processors. If the number of processors is greater than 8, use 8 data files. If contention continues, increase the number of files by multiples of 4 until there is no further contention.
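As a sketch of the pre-sizing advice above, the following equalizes the size and growth of tempdb data files. The logical names tempdev and temp2 are SQL Server defaults; verify yours first, because they may differ on your instance:

-- List the current tempdb data files.
SELECT name, size, growth FROM tempdb.sys.database_files WHERE type_desc = 'ROWS';

-- Make all data files the same size with the same fixed growth.
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 1024MB, FILEGROWTH = 64MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp2', SIZE = 1024MB, FILEGROWTH = 64MB);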
Depending on your workload, you might want to modify the following settings as well.
Close Cursor on Commit Enabled
The default value is off, which means that cursors are not closed automatically when you commit a transaction.

Default Cursor
This option controls the scope of a cursor used in T-SQL code. If you change this setting, evaluate the application code for any adverse effects.

Page Verify
This option allows SQL Server to calculate a checksum for a database page before it is written to disk and store the checksum in the page header. When a page is read again, the checksum is recomputed to verify the integrity of the page. The recommended value is checksum.

Parameterization
The default value is simple. Simple parameterization allows SQL Server to replace literal values in a query with parameters. Microsoft provides guidelines about how to change this value and use it with plan guides.
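These database-scoped options map to ALTER DATABASE SET clauses. A hedged sketch against a hypothetical database:

-- Enable checksum page verification (the recommended Page Verify value).
ALTER DATABASE [MyDatabase] SET PAGE_VERIFY CHECKSUM;

-- Keep parameterization at its default.
ALTER DATABASE [MyDatabase] SET PARAMETERIZATION SIMPLE;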
Database settings to retain

For optimal performance of the SQL Server database, retain the default values of the following SQL Server settings.
Auto Close
False. This setting, when on, opens and closes connections and flushes the procedures after each connection. This can cause performance degradation in databases that are frequently accessed.

Auto Shrink
False. Turning it on can lead to database and index fragmentation and other performance issues, some of which are discussed in this SQL Server blog.

Date Correlation Optimization Enabled
False. Enabling it lets the optimizer find and optimize relationships between dates across two related tables. Tracking this in SQL Server comes with some performance overhead.

Legacy Cardinality Estimation
False. In some cases, SQL Server cannot accurately calculate cardinalities when this setting is enabled.

Parameter Sniffing
ON. Parameter sniffing from database tables can help create execution plans for reuse. If the tables have unevenly distributed data, the resultant execution plans might lead to performance issues. With such data, use other options from the Query Store rather than modifying this setting.

Query Optimizer Fixes
False. When enabled, it can affect the performance of the SQL Server cardinality estimator. If you choose to enable it, test to ensure that there is no query regression.

Auto Create Statistics
True. This option allows SQL Server to create single-column statistics that can improve cardinality estimates for query plans.

Auto Update Statistics
True. This option allows SQL Server to update out-of-date statistics using a recompilation threshold that is based on table cardinality.

Auto Update Statistics Asynchronously
False. This option, when enabled, directs the SQL query optimizer to use stale statistics for the current query execution while updating the statistics asynchronously to benefit future workloads. However, if you expect predictable response times for a frequently executed query, or if your application frequently experiences client request timeouts while awaiting statistics updates, consider enabling this option; Auto Update Statistics must remain enabled for the asynchronous option to take effect. See the sketch after this list for the corresponding ALTER DATABASE statements.

Target Recovery Time (Seconds)
60. This setting establishes an upper bound on the recovery time for a database by flushing dirty pages more or less frequently to disk from the buffer pool. For highly transactional workloads, a lower value for this setting, combined with storage IOPS near the maximum value, can contribute to a performance bottleneck.
Trace flag settings

Trace flags in SQL Server are used to set certain characteristics, alter the behavior of SQL Server databases, or debug issues in SQL Server.
Some SQL Server trace flags are supported in Cloud SQL and can be set using database flags. The recommended settings are as follows.
1204
Yes, except for workload-intensive servers that generate a lot of deadlocks. Returns the resources and types of locks participating in a deadlock, and also the command currently affected.

1222
Yes, except for workload-intensive servers that generate a lot of deadlocks.

1224
No. This can result in more memory usage and cause memory pressure on the database.

2528
No. Parallel checking of objects is the default and is recommended. The degree of parallelism is automatically calculated by the database engine.

3205
No. Tape drives for backups are not a feature of Cloud SQL for SQL Server.

3226
No, unless you need frequent backups, such as TLOG backups.

3625
No. Because the root account does not have system administrator access, it might not be able to see all error messages.

4199
No. This affects the cardinality estimator and can lead to query regression.

4616
No. This restriction lowers the security around application roles. It needs to be validated based on application requirements.

7806
Yes. If the database server becomes unresponsive, the dedicated admin connection (DAC) might be the only way to make a connection for diagnostics.
Last updated 2025-08-14 UTC.