This page describes how to use point-in-time recovery (PITR) to restore your primary Cloud SQL instance.
To learn more about PITR, see Point-in-time recovery (PITR).
Note: This page contains features related to Cloud SQL editions. For more information about Cloud SQL editions, see Introduction to Cloud SQL editions.
If you create a Cloud SQL Enterprise Plus edition instance, then PITR is enabled by default, regardless of the method used for creation. If you want to disable the feature, then you must do so manually.
If you create a Cloud SQL Enterprise edition instance, then PITR is disabled by default, regardless of the method used for creation. In this case, if you want to enable the feature, then you must do so manually.
Log storage for PITR

On May 31, 2024, we launched storing transaction logs for PITR in Cloud Storage. Since this launch, the following conditions apply:
PITR-enabled Cloud SQL instances created before this date used to store PITR transaction logs on disk. PITR transaction logs for most of these instances have since been migrated to Cloud Storage. To verify the log location for a specific instance, see Check the storage location of transaction logs used for PITR.
Note: If your Cloud SQL for SQL Server instance is on the old network architecture, the transaction logs for PITR might still remain on disk until the logs are migrated to the new network architecture.
All Cloud SQL instances created with PITR enabled on or after this date store these logs in Cloud Storage.
Transaction logs update regularly and use storage space. Cloud SQL automatically deletes transaction logs with their associated automatic backups. This happens after the value set for the transactionLogRetentionDays parameter is met. For more information about this parameter, see Logs and disk usage.
For instances that store transaction logs only on disk, you can configure Cloud SQL to store the logs in Cloud Storage instead by first deactivating and then re-enabling PITR. You can't move logs from Cloud Storage back to disk.
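For example, for a hypothetical instance named my-instance, that sequence looks similar to the following (both flags also appear later on this page):

# Deactivate PITR, then re-enable it so that new transaction logs are stored in Cloud Storage.
gcloud sql instances patch my-instance --no-enable-point-in-time-recovery
gcloud sql instances patch my-instance --enable-point-in-time-recovery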
For instances having transaction logs stored in Cloud Storage, the logs are stored in the same region as the primary instance. This log storage (up to 35 days for Cloud SQL Enterprise Plus edition and seven days for Cloud SQL Enterprise edition, the maximum length for PITR) generates no additional cost per instance.
Cloud SQL generates transaction logs regularly and these logs use storage space. Cloud SQL deletes the transaction logs automatically with their associated automatic backups. This happens after the value that you set for the transactionLogRetentionDays
parameter is met. This parameter specifies the number of days that Cloud SQL retains transaction logs. For Cloud SQL Enterprise Plus edition, you can set the number of days of retained transaction logs from 1 to 35, and for Cloud SQL Enterprise edition, you can set this value from 1 to 7.
If a value for this parameter isn't set, then the default transaction log retention period of 14 days is set for Cloud SQL Enterprise Plus edition instances and 7 days for Cloud SQL Enterprise edition instances. For more information on how to apply this setting, see Set transaction log retention.
To find out how much disk space the transaction logs use, check the bytes_used_by_data_type metric for the instance. The value reported for the transaction log data type returns the size of the transaction logs on the disk. For instances that store transaction logs used for PITR on disk, Cloud SQL purges data from the disk daily to meet the transactionLogRetentionDays PITR setting. For more information, see Automated backup retention.
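As a sketch, you can also read this metric through the Cloud Monitoring API. The following curl command, run in Cloud Shell, assumes a hypothetical project my-project and instance my-instance and queries the last hour of data:

# Query the bytes_used_by_data_type metric for the hypothetical instance my-project:my-instance.
curl -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/my-project/timeSeries" \
  --data-urlencode 'filter=metric.type="cloudsql.googleapis.com/database/disk/bytes_used_by_data_type" AND resource.labels.database_id="my-project:my-instance"' \
  --data-urlencode "interval.startTime=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --data-urlencode "interval.endTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)"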
The following limitations apply when your instance has PITR enabled and the size of your transaction logs on disk is causing an issue for your instance:
Logs are purged once daily, not continuously. Setting the log retention to two days means that at least two days of logs, and at most three days of logs, are retained. We recommend setting the number of backups to one more than the days of log retention.
For example, if you specify 7 for the value of the transactionLogRetentionDays parameter, then for the backupRetentionSettings parameter, set the number of retainedBackups to 8.
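As a sketch of that configuration using gcloud, assuming a hypothetical instance named my-instance and that your gcloud version supports the backup retention count flags shown here:

# Retain 7 days of transaction logs and 8 automated backups.
gcloud sql instances patch my-instance \
  --retained-transaction-log-days=7 \
  --retention-unit=COUNT \
  --retained-backups-count=8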
For more information about PITR, see Point-in-time recovery (PITR).
Database recovery model for PITR

If you enable PITR on an instance, Cloud SQL automatically sets the recovery model of the existing and subsequent databases to the full recovery model.
Caution: After enabling PITR on your instance, don't change the database recovery model to Simple. This can break the transaction log backup chain, and you might not be able to recover the instance data to the point in time when the recovery model was set to Simple.
For more information about SQL Server recovery models, see the Microsoft documentation.
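To check the recovery model of the databases on your instance, you can query sys.databases. The following is a minimal sketch that assumes a hypothetical instance IP address, user, and password, and that the sqlcmd client is installed on the machine that you connect from:

# Lists each database and its recovery model (FULL or SIMPLE).
sqlcmd -S 203.0.113.10 -U sqlserver -P 'YOUR_PASSWORD' \
  -Q "SELECT name, recovery_model_desc FROM sys.databases;"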
Enable PITR

When you create a new instance in the Google Cloud console, the Automated backups setting is automatically enabled.
The following procedure enables PITR on an existing primary instance.
Console
In the Google Cloud console, go to the Cloud SQL Instances page.

gcloud
Display the instance overview:
gcloud sql instances describe INSTANCE_NAME
If you see enabled: false in the backupConfiguration section, enable scheduled backups:
gcloud sql instances patch INSTANCE_NAME \
--backup-start-time=HH:MM
Specify the backup-start-time parameter using 24-hour time in the UTC±00 time zone.
Enable point-in-time recovery:
gcloud sql instances patch INSTANCE_NAME \
--enable-point-in-time-recovery
If you're enabling PITR on a primary instance, you can also configure the number of days for which you want to retain transaction logs by adding the following parameter:
--retained-transaction-log-days=RETAINED_TRANSACTION_LOG_DAYS
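For example, to enable PITR and keep seven days of transaction logs on a hypothetical instance named my-instance:

gcloud sql instances patch my-instance \
  --enable-point-in-time-recovery \
  --retained-transaction-log-days=7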
Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: true if the change was successful.
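If you want to check only the PITR setting rather than the full output, you can filter the describe output with a format expression. This is a minimal sketch; the field path follows the Admin API resource layout and the instance name is hypothetical:

# Prints "True" when PITR is enabled for the hypothetical instance my-instance.
gcloud sql instances describe my-instance \
  --format="value(settings.backupConfiguration.pointInTimeRecoveryEnabled)"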
Terraform
To enable PITR, use a Terraform resource.
Enable PITR for Cloud SQL Enterprise Plus edition

Note: If you create a Cloud SQL Enterprise Plus edition instance, then PITR is enabled by default, regardless of the method used for creation. If you want to disable the feature, then you must do so manually.

Use the following Terraform code sample to create a Cloud SQL Enterprise Plus edition instance with PITR enabled:

Enable PITR for Cloud SQL Enterprise edition

Note: If you create a Cloud SQL Enterprise edition instance, then PITR is disabled by default, regardless of the method used for creation. In this case, if you want to enable the feature, you must do so manually.

Use the following Terraform code sample to create a Cloud SQL Enterprise edition instance with PITR enabled:

Apply the changes

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.
Prepare Cloud Shell

Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform configuration file.
Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module). In Cloud Shell, create a directory and a new file within that directory. The file name must have the .tf extension, for example, main.tf. In this tutorial, the file is referred to as main.tf.
mkdir DIRECTORY && cd DIRECTORY && touch main.tf
If you are following a tutorial, you can copy the sample code in each section or step.
Copy the sample code into the newly created main.tf.
Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
Initialize Terraform. You only need to run this command once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:
terraform init -upgrade
Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
Apply the Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Wait until Terraform displays the "Apply complete!" message.
To delete your changes, do the following:
To disable deletion protection, in your Terraform configuration file, set the deletion_protection argument to false:
deletion_protection = "false"
Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:
terraform destroy
REST v1
Before using any of the request data, make the following replacements:
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_NAME", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }REST v1beta4
Before using any of the request data, make the following replacements:
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME
Request JSON body:
{ "settings": { "backupConfiguration": { "startTime": "START_TIME", "enabled": true, "pointInTimeRecoveryEnabled": true } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_NAME", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }Get the latest recovery time
For an available instance, you can perform PITR to the latest time. If the instance is unavailable and the instance logs are stored in Cloud Storage, then you can retrieve the latest recovery time and perform the PITR to that time. In both cases, you can restore the instance to a different primary or secondary zone by providing values for the preferred zones.
gcloud
Get the latest time to which you can recover a Cloud SQL instance that's not available.
Replace INSTANCE_NAME with the name of the instance that you're querying.
gcloud sql instances get-latest-recovery-time INSTANCE_NAME
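For example, for a hypothetical instance named my-instance, the command and its output might look similar to the following (the timestamp is illustrative):

gcloud sql instances get-latest-recovery-time my-instance
kind: sql#getLatestRecoveryTime
latestRecoveryTime: '2023-06-20T17:23:59.648821586Z'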
REST v1
Before using any of the request data, make the following replacements:
HTTP method and URL:
GET https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z" }REST v1beta4
Before using any of the request data, make the following replacements:
HTTP method and URL:
GET https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_NAME/getLatestRecoveryTime" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "kind": "sql#getLatestRecoveryTime", "latestRecoveryTime": "2023-06-20T17:23:59.648821586Z" }Perform PITR Console
In the Google Cloud console, go to the Cloud SQL Instances page.
Create a clone using PITR.
gcloud
To create a clone using PITR, run the following command:

gcloud sql instances clone SOURCE_INSTANCE_NAME \
NEW_INSTANCE_NAME \
--point-in-time 'TIMESTAMP'

Replace the following:
SOURCE_INSTANCE_NAME: the name of the instance that you're restoring from.
NEW_INSTANCE_NAME: the name of the new cloned instance.
TIMESTAMP: the point in time to restore to, in the UTC time zone and in RFC 3339 format, for example, 2021-01-15T10:00:00.000Z.

Note: By default, point-in-time recovery is for all databases. Optionally, you can specify up to one specific database name as follows:
--database_names 'DATABASE'
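For example, with hypothetical instance names, the following command clones my-primary to a new instance named my-pitr-clone at a specific point in time:

gcloud sql instances clone my-primary \
my-pitr-clone \
--point-in-time '2024-05-01T12:00:00.000Z'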
REST v1
Before using any of the request data, make the following replacements:
In the JSON request, you can optionally specify up to one specific database name as follows: "databaseNames": "my-database"
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/project-id/instances/source-instance-id/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/target-instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CREATE", "name": "operation-id", "targetId": "target-instance-id", "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/operations/operation-id", "targetProject": "project-id" }REST v1beta4
Before using any of the request data, make the following replacements:
In the JSON request, you can optionally specify up to one specific database name as follows: "databaseNames": "my-database"
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone
Request JSON body:
{ "cloneContext": { "kind": "sql#cloneContext", "destinationInstanceName": "target-instance-id", "pointInTime": "restore-timestamp" } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/source-instance-id/clone" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/target-instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "CREATE", "name": "operation-id", "targetId": "target-instance-id", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/operations/operation-id", "targetProject": "project-id" }Perform PITR using the backup vault
Preview — Enhanced backups
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. You can process personal data for this feature as outlined in the Cloud Data Processing Addendum, subject to the obligations and restrictions described in the agreement under which you access Google Cloud. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
If your Cloud SQL instance is enabled to use enhanced backups, then you can perform point-in-time-recovery for your instance using the backup vault.
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
Open the more actions menu for the instance you want to recover and click Create clone.
Select Clone from an earlier point in time.
Enter a PITR time.
Click Create clone.
gcloud
To perform PITR on an instance from the backup vault, you need to find the data source for the backup that is nearest to the time that you want to recover to. To find the backup, see List all the backups in the backup vault for an instance. After you've identified the backup, run the following command to perform PITR:

gcloud sql instances point-in-time-restore DATA_SOURCE \
PITR_TIMESTAMP \
--project=TARGET_PROJECT
Replace the following:
DATA_SOURCE: the data source for the backup that is closest to the PITR timestamp that you want to recover to.
PITR_TIMESTAMP: the point in time to recover to.
TARGET_PROJECT: the project in which to create the restored instance.

Disable PITR

Console
In the Google Cloud console, go to the Cloud SQL Instances page.

gcloud
Disable point-in-time recovery:
gcloud sql instances patch INSTANCE_NAME \
--no-enable-point-in-time-recovery
Confirm your change:
gcloud sql instances describe INSTANCE_NAME
In the backupConfiguration section, you see pointInTimeRecoveryEnabled: false if the change was successful.
REST v1
Before using any of the request data, make the following replacements:
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "operation-id", "targetId": "instance-id", "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/operations/operation-id", "targetProject": "project-id" }REST v1beta4
Before using any of the request data, make the following replacements:
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id
Request JSON body:
{ "settings": { "backupConfiguration": { "enabled": false, "pointInTimeRecoveryEnabled": false } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "operation-id", "targetId": "instance-id", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/operations/operation-id", "targetProject": "project-id" }Check the storage location of transaction logs used for PITR
You can check where your Cloud SQL instance is storing the transaction logs used for PITR.
gcloud
To determine whether your instance stores logs for PITR on disk or Cloud Storage, use the following command:
gcloud sql instances describe INSTANCE_NAME
Replace INSTANCE_NAME with the name of the instance.
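If you want to print only the storage state, you can filter the describe output with a format expression. This is a minimal sketch that assumes the transactionalLogStorageState field is exposed under settings.backupConfiguration in your gcloud version; the instance name is hypothetical:

# Prints DISK or CLOUD_STORAGE for the hypothetical instance my-instance.
gcloud sql instances describe my-instance \
  --format="value(settings.backupConfiguration.transactionalLogStorageState)"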
For multiple instances in the same project, you can also check the storage location of the transaction logs. To determine the location for multiple instances, use the following command:
gcloud sql instances list --show-transactional-log-storage-state
Example response:
NAME   DATABASE_VERSION         LOCATION     TRANSACTIONAL_LOG_STORAGE_STATE
my_01  SQLSERVER_2019_STANDARD  us-central1  DISK
my_02  SQLSERVER_2019_STANDARD  us-central1  CLOUD_STORAGE
...
In the output of the command, the transactionalLogStorageState field or the TRANSACTIONAL_LOG_STORAGE_STATE column provides information about where the transaction logs for PITR are stored for the instance. The possible transaction log storage states are the following:
DISK: the instance stores the transaction logs used for PITR on disk.
CLOUD_STORAGE: the instance stores the transaction logs used for PITR in Cloud Storage.

Set transaction log retention

To set the number of days to retain transaction logs:
Console
In the Google Cloud console, go to the Cloud SQL Instances page.
Edit the instance to set the number of days to retain transaction logs.

gcloud
Edit the instance to set the number of days to retain transaction logs:

gcloud sql instances patch INSTANCE_NAME \
--retained-transaction-log-days=DAYS_TO_RETAIN

Replace the following:
DAYS_TO_RETAIN: The number of days of transaction logs to keep. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days.
If you don't specify a value, then Cloud SQL uses the default value. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
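For example, to keep five days of transaction logs on a hypothetical instance named my-instance:

gcloud sql instances patch my-instance \
  --retained-transaction-log-days=5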
REST v1
Before using any of the request data, make the following replacements:
DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days.
If no value is specified, then the default value is used. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }REST v1beta4
Before using any of the request data, make the following replacements:
DAYS_TO_RETAIN: the number of days to retain transaction logs. For Cloud SQL Enterprise Plus edition, the valid range is between 1 and 35 days, with a default of 14 days. For Cloud SQL Enterprise edition, the valid range is between 1 and 7 days, with a default of 7 days.
If no value is specified, then the default value is used. This is valid only when PITR is enabled. Keeping more days of transaction logs requires a bigger storage size.
HTTP method and URL:
PATCH https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID
Request JSON body:
{ "settings": { "backupConfiguration": { "transactionLogRetentionDays": "DAYS_TO_RETAIN" } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response{ "kind": "sql#operation", "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID", "status": "PENDING", "user": "user@example.com", "insertTime": "2020-01-21T22:43:37.981Z", "operationType": "UPDATE", "name": "OPERATION_ID", "targetId": "INSTANCE_ID", "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID", "targetProject": "PROJECT_ID" }Troubleshoot Issue Troubleshooting
Issue: argument --point-in-time: Failed to parse date/time: Unknown string format: 2021-0928T30:54:03.094; received: 2021-0928T30:54:03.094Z
or
Invalid value at 'body.clone_context.point_in_time' (type.googleapis.com/google.protobuf.Timestamp), Field 'pointInTime', Invalid time format: Failed to parse input
Troubleshooting: The timestamp that you provided isn't in a valid format. Specify the point in time in RFC 3339 format in the UTC time zone, for example, 2021-09-28T23:54:03.094Z.

Issue: HTTP Error 400: Successful backup required for carrying out the operation was not found.
or
Successful backup required for carrying out the operation was not found. or Time where no backups can be found.
Troubleshooting: No successful backup covers the point in time that you specified. Choose a point in time that is covered by an existing automated backup and its retained transaction logs.
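For example, to avoid the timestamp parsing error shown above, pass the point in time in RFC 3339 format in the UTC time zone. The instance names here are hypothetical:

gcloud sql instances clone my-primary my-pitr-clone \
  --point-in-time '2021-09-28T23:54:03.094Z'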