This page describes exporting files from and importing files into Cloud SQL instances in parallel.
Note: If you're migrating an entire database from a supported database server (on-premises, in AWS, or Cloud SQL) to a new Cloud SQL instance, then use Database Migration Service instead of exporting and importing files in parallel. If you're exporting because you want to create a new instance from the exported file, consider restoring from a backup to a different instance or cloning the instance.
You can verify that the import or export operation for multiple files in parallel completed successfully by checking the operation's status. You can also cancel the import of data into Cloud SQL instances and the export of data from the instances. For more information about cancelling an import or export operation, see Cancel the import and export of data.
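For example, a quick way to check an operation's status from the gcloud CLI (the instance and operation names here are placeholders):

# List recent operations for an instance to find the operation ID.
gcloud sql operations list --instance=INSTANCE_NAME --limit=10

# Describe a specific operation; a status of DONE indicates completion.
gcloud sql operations describe OPERATION_ID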
Before you begin

Before you begin an export or import operation:
Export and import operations use database resources, but they don't interfere with typical database operations unless the instance is under-provisioned.
Important: Before starting a large operation, ensure that at least 25 percent of the disk is free on the instance. Doing so helps prevent issues with aggressive autogrowth, which can adversely affect the availability of the instance.

The following sections contain information about exporting data from Cloud SQL for MySQL to multiple files in parallel.
Required roles and permissions for exporting data from Cloud SQL for MySQL to multiple files in parallel

To export data from Cloud SQL into Cloud Storage, the user initiating the export must have a role that includes the following permissions:

- cloudsql.instances.get
- cloudsql.instances.export
Additionally, the service account for the Cloud SQL instance must have one of the following sets of roles:

- The storage.objectAdmin Identity and Access Management (IAM) role
- A custom role that includes the following permissions:
  - storage.objects.create
  - storage.objects.list (for exporting files in parallel only)
  - storage.objects.delete (for exporting files in parallel only)

For help with IAM roles, see Identity and Access Management.
Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Export data to multiple files in parallel

You can export data from Cloud SQL to multiple files in Cloud Storage in parallel. To do this, use the dumpInstance utility.
After the files are in Cloud Storage, you can import them into another Cloud SQL database. If you want to access the data in the files locally, then download the data from Cloud Storage into your local environment.
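For example, a minimal sketch of downloading the exported files with the gcloud CLI (the bucket path and local directory are placeholders):

# Recursively copy the dump folder from Cloud Storage to a local directory.
gcloud storage cp --recursive gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME ./local-dump-folder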
If your files contain DEFINER clauses (views, triggers, stored procedures, and so on), then depending on the order in which these statements are run, using these files for import can fail. Learn more about DEFINER usage and potential workarounds in Cloud SQL.
gcloud

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

Create a Cloud Storage bucket.

Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

To find the service account for the Cloud SQL instance that you're exporting files from, use the gcloud sql instances describe command.
gcloud sql instances describe INSTANCE_NAME
Replace INSTANCE_NAME with the name of your Cloud SQL instance.
In the output, look for the value that's associated with the serviceAccountEmailAddress field.
To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command.
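For example, a minimal sketch of the binding, using the service account email from the previous step (the bucket name and email are placeholders):

# Grant the instance's service account object admin access to the bucket.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member=serviceAccount:SERVICE_ACCOUNT_EMAIL \
  --role=roles/storage.objectAdmin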
For help with setting IAM permissions, see Use IAM permissions.

To export data from Cloud SQL to multiple files in parallel, use the gcloud sql export sql command:
gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
  --offload \
  --parallel \
  --threads=THREAD_NUMBER \
  --database=DATABASE_NAME \
  --table=TABLE_EXPRESSION
Make the following replacements:

- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the export files are stored.
- FOLDER_NAME: the folder where the export files are stored.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance from which the export is made.
- TABLE_EXPRESSION: the tables to export from the specified database.

If you want to use serverless export, then use the offload parameter. If you want to export multiple files in parallel, then use the parallel parameter. Otherwise, remove these parameters from the command.
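For example, a hypothetical invocation that exports a database named mydb with three threads (the instance, bucket, and database names are placeholders, not values from this guide):

gcloud sql export sql my-instance gs://my-bucket/exports/parallel-dump \
  --offload \
  --parallel \
  --threads=3 \
  --database=mydb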
The export sql command doesn't export triggers or stored procedures, but does export views. To export triggers or stored procedures, use a single thread for the export. This thread uses the mysqldump tool.
After the export completes, you should have files in a folder in the Cloud Storage bucket in the MySQL Shell dump format.
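To confirm the output, you can list the contents of the folder (the paths are placeholders):

# List the MySQL Shell dump files that the export produced.
gcloud storage ls gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME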
REST v1

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

Create a Cloud Storage bucket:

gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME

Make the following replacements:

- BUCKET_NAME: the name of the bucket, subject to bucket naming requirements. For example, my-bucket.
- PROJECT_NAME: the name of the Google Cloud project that contains the bucket.
- LOCATION_NAME: the location of the bucket. For example, us-east1.

Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.
Provide your instance with the legacyBucketWriter IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.

Export data from Cloud SQL to multiple files in parallel:
Before using any of the request data, make the following replacements:
- PROJECT_NAME: the name of the Google Cloud project that contains the Cloud SQL instance.
- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the export files are stored.
- FOLDER_NAME: the folder where the export files are stored.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance from which the export is made.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.

Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export
Request JSON body:
{ "exportContext": { "fileType": "SQL", "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME", "databases": ["DATABASE_NAME"], "offload": [TRUE|FALSE], "sqlExportOptions": { "parallel": [TRUE|FALSE], "threads": [THREAD_NUMBER] } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2020-01-21T22:43:37.981Z",
  "operationType": "UPDATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_NAME",
  "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
  "targetProject": "PROJECT_NAME"
}
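The name field in the response contains the operation ID. To poll the operation until its status is DONE, you can send a GET request to the operations endpoint shown in selfLink, for example:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID"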
After the export completes, you should have files in a folder in the Cloud Storage bucket in the MySQL Shell dump format.
REST v1beta4

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

Create a Cloud Storage bucket:

gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME

Make the following replacements:

- BUCKET_NAME: the name of the bucket, subject to bucket naming requirements. For example, my-bucket.
- PROJECT_NAME: the name of the Google Cloud project that contains the bucket.
- LOCATION_NAME: the location of the bucket. For example, us-east1.

Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.
Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.

Export data from Cloud SQL to multiple files in parallel:
Before using any of the request data, make the following replacements:
- PROJECT_NAME: the name of the Google Cloud project that contains the Cloud SQL instance.
- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the export files are stored.
- FOLDER_NAME: the folder where the export files are stored.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance from which the export is made.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to export files in parallel. For example, if you want to export three files at a time in parallel, then specify 3 as the value for this parameter.

Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export
Request JSON body:
{ "exportContext": { "fileType": "SQL", "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME", "databases": ["DATABASE_NAME"], "offload": [TRUE|FALSE], "sqlExportOptions": { "parallel": [TRUE|FALSE], "threads": [THREAD_NUMBER] } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2020-01-21T22:43:37.981Z",
  "operationType": "UPDATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_NAME",
  "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
  "targetProject": "PROJECT_NAME"
}
After the export completes, you should have files in a folder in the Cloud Storage bucket in the MySQL Shell dump format.
The following sections contain information about importing data from multiple files in parallel to Cloud SQL for MySQL.
Required roles and permissions for importing data from multiple files in parallel to Cloud SQL for MySQL

To import data from Cloud Storage into Cloud SQL, the user initiating the import must have a role that includes the following permissions:

- cloudsql.instances.get
- cloudsql.instances.import
Additionally, the service account for the Cloud SQL instance must have one of the following sets of roles:

- The storage.objectAdmin IAM role
- A custom role that includes the following permissions:
  - storage.objects.get
  - storage.objects.list (for importing files in parallel only)

For help with IAM roles, see Identity and Access Management.
Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Import data to Cloud SQL for MySQL

You can import data in parallel from multiple files that reside in Cloud Storage to your database. To do this, use the loadDump utility.
gcloud

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

Create a Cloud Storage bucket.
Upload the files to your bucket.
Note: Make sure that the files that you're uploading are in the MySQL Shell dump format. For more information, see Export data from multiple files in parallel.
For help with uploading files to buckets, see Upload objects from files.
To find the service account for the Cloud SQL instance that you're importing files to, use the gcloud sql instances describe command.
gcloud sql instances describe INSTANCE_NAME
Replace INSTANCE_NAME with the name of your Cloud SQL instance.
In the output, look for the value that's associated with the serviceAccountEmailAddress field.
To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command.
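For example, a minimal sketch that captures the service account email from the previous step and grants the role (the bucket and instance names are placeholders):

# Look up the instance's service account, then bind it to the bucket.
SERVICE_ACCOUNT=$(gcloud sql instances describe INSTANCE_NAME \
  --format="value(serviceAccountEmailAddress)")
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
  --member=serviceAccount:${SERVICE_ACCOUNT} \
  --role=roles/storage.objectAdmin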
For help with setting IAM permissions, see Use IAM permissions.

To import data from multiple files in parallel into Cloud SQL, use the gcloud sql import sql command:
gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
  --parallel \
  --threads=THREAD_NUMBER \
  --database=DATABASE_NAME
Make the following replacements:

- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the import files are stored.
- FOLDER_NAME: the folder where the import files are stored.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made.

Note: If you want to import multiple files in parallel, then use the parallel parameter. Otherwise, remove these parameters from the command.
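For example, a hypothetical invocation that imports into a database named mydb with three threads (all names are placeholders):

gcloud sql import sql my-instance gs://my-bucket/exports/parallel-dump \
  --parallel \
  --threads=3 \
  --database=mydb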
If the command returns an error like ERROR_RDBMS, then review the permissions; this error is often due to permissions issues.
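For example, one way to review the bucket's permissions is to inspect its IAM policy (the bucket name is a placeholder):

# Verify that the instance's service account appears with the required role.
gcloud storage buckets get-iam-policy gs://BUCKET_NAME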
If you don't need to retain the IAM permissions that you set previously, then use gcloud storage buckets remove-iam-policy-binding to remove them.

REST v1

To import data from multiple files in parallel into Cloud SQL, complete the following steps:
Create a Cloud Storage bucket:

gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME

Make the following replacements:

- BUCKET_NAME: the name of the bucket, subject to bucket naming requirements. For example, my-bucket.
- PROJECT_NAME: the name of the Google Cloud project that contains the bucket.
- LOCATION_NAME: the location of the bucket. For example, us-east1.

Upload the files to your bucket.
Note: Make sure that the files that you're uploading are in the MySQL Shell dump format. For more information, see Export data from multiple files in parallel.
For help with uploading files to buckets, see Upload objects from files.
Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.

Import data from multiple files in parallel into Cloud SQL:
Before using any of the request data, make the following replacements:
- PROJECT_NAME: the name of the Google Cloud project that contains the Cloud SQL instance.
- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the import files are stored.
- FOLDER_NAME: the folder where the import files are stored.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.

Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.
HTTP method and URL:
POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import
Request JSON body:
{ "importContext": { "fileType": "SQL", "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME", "databases": ["DATABASE_NAME"], "offload": [TRUE|FALSE], "sqlImportOptions": { "parallel": [TRUE|FALSE], "threads": [THREAD_NUMBER] } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2020-01-21T22:43:37.981Z",
  "operationType": "UPDATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_NAME",
  "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
  "targetProject": "PROJECT_NAME"
}

For the complete list of parameters for the request, see the Cloud SQL Admin API page.
If you don't need to retain the IAM permissions that you set previously, then use gcloud storage buckets remove-iam-policy-binding to remove them.

REST v1beta4

To import data from multiple files in parallel into Cloud SQL, complete the following steps:
Create a Cloud Storage bucket:

gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME

Make the following replacements:

- BUCKET_NAME: the name of the bucket, subject to bucket naming requirements. For example, my-bucket.
- PROJECT_NAME: the name of the Google Cloud project that contains the bucket.
- LOCATION_NAME: the location of the bucket. For example, us-east1.

Upload the files to your bucket.
Note: Make sure that the files that you're uploading are in the MySQL Shell dump format. For more information, see Export data from multiple files in parallel.
For help with uploading files to buckets, see Upload objects from files.
Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.

Import data from multiple files in parallel into Cloud SQL:
Before using any of the request data, make the following replacements:
- PROJECT_NAME: the name of the Google Cloud project that contains the Cloud SQL instance.
- INSTANCE_NAME: the name of the Cloud SQL instance.
- BUCKET_NAME: the name of the Cloud Storage bucket.
- BUCKET_PATH: the path in the bucket where the import files are stored.
- FOLDER_NAME: the folder where the import files are stored.
- DATABASE_NAME: the name of the database inside of the Cloud SQL instance into which the import is made.
- THREAD_NUMBER: the number of threads that Cloud SQL uses to import files in parallel. For example, if you want to import three files at a time in parallel, then specify 3 as the value for this parameter.

Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.
HTTP method and URL:
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import
Request JSON body:
{ "importContext": { "fileType": "SQL", "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME", "databases": ["DATABASE_NAME"], "offload": [TRUE|FALSE], "sqlImportOptions": { "parallel": [TRUE|FALSE], "threads": [THREAD_NUMBER] } } }
To send your request, expand one of these options:
curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
Response:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2020-01-21T22:43:37.981Z",
  "operationType": "UPDATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_NAME",
  "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
  "targetProject": "PROJECT_NAME"
}

For the complete list of parameters for the request, see the Cloud SQL Admin API page.
If you don't need to retain the IAM permissions that you set previously, then use gcloud storage buckets remove-iam-policy-binding to remove them.

Limitations

If you specify too many threads when you import or export data from multiple files in parallel, then you might use more memory than your Cloud SQL instance has. If this occurs, then an internal error message appears. Check the memory usage of your instance and increase the instance's size as needed, as sketched below. For more information, see About instance settings.
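For example, assuming the instance uses a custom machine type, a hypothetical resize might look like the following; note that changing the tier restarts the instance:

# Increase the instance to 4 vCPUs and 16 GB of RAM (db-custom-vCPUs-memoryMB).
gcloud sql instances patch INSTANCE_NAME --tier=db-custom-4-16384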
When performing an export, commas aren't supported in database names or table names in the databases or tables fields.
Make sure that you have enough disk space for the initial dump file download. Otherwise, a no space left on disk error appears.
If your instance has only one virtual CPU (vCPU), then you can't import or export multiple files in parallel. The number of vCPUs for your instance can't be smaller than the number of threads that you're using for the import or export operation, and the number of threads must be at least two.
Multi-threaded (parallel) imports and exports aren't compatible with single-threaded imports and exports. For example, dump files generated by a single-threaded export can only be imported by single-threaded imports. Similarly, dump files generated by parallel exports can only be imported by parallel imports.
If you write data definition language (DDL) statements such as CREATE, DROP, or ALTER during an export operation, then the operation might fail or the exported data might be inconsistent with the point-in-time recovery snapshot.
If an import operation fails, then you might have partially imported data remaining, because MySQL commits DDL statements automatically. If this occurs, then before you import the data again, clean up the partially imported data and any objects that the DDL statements created, as sketched below.
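For example, one way to clean up is to drop and recreate the target database before retrying (the database and instance names are placeholders):

# Drop the partially imported database, then recreate it empty.
gcloud sql databases delete mydb --instance=my-instance
gcloud sql databases create mydb --instance=my-instance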
As with a single-database parallel import operation, before you run a parallel import operation for an entire instance, make sure that all of the databases have finished being created.