
Export and import files in parallel | Cloud SQL for PostgreSQL


This page describes how to export and import files in parallel for Cloud SQL instances.

Note: If you're exporting because you want to create a new instance from the exported file, consider restoring from a backup to a different instance or cloning the instance.

You can verify that the import or export operation for multiple files in parallel completed successfully by checking the operation's status. You can also cancel the import of data into Cloud SQL instances and the export of data from the instances. For more information about cancelling an import or export operation, see Cancel the import and export of data.
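If you manage operations from the command line, the following is a minimal sketch using the gcloud CLI (the instance name my-instance and OPERATION_ID are placeholders, and the cancel command is assumed to be available in your gcloud version):

    # List recent operations for an instance, with their status.
    gcloud sql operations list --instance=my-instance --limit=5

    # Block until a specific operation finishes.
    gcloud sql operations wait OPERATION_ID --timeout=unlimited

    # Cancel a running import or export operation.
    gcloud sql operations cancel OPERATION_ID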

Before you begin

Before you begin an export or import operation, make sure that you have the roles and permissions that are described in the following sections.

Export data from Cloud SQL for PostgreSQL to multiple files in parallel

The following sections contain information about exporting data from Cloud SQL for PostgreSQL to multiple files in parallel.

Required roles and permissions for exporting data from Cloud SQL for PostgreSQL to multiple files in parallel

To export data from Cloud SQL into Cloud Storage, the user initiating the export must have one of the following roles:

Additionally, the service account for the Cloud SQL instance must have one of the following roles:

For help with IAM roles, see Identity and Access Management.

Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Export data to multiple files in parallel

You can export data in parallel from Cloud SQL to multiple files that reside in Cloud Storage. To do this, use the pg_dump utility with the --jobs option.

If you plan to import your data into Cloud SQL, then follow the instructions provided in Exporting data from an external database server so that your files are formatted correctly for Cloud SQL.
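For reference, a directory-format parallel dump from an external server looks like the following sketch (host, user, and database names are hypothetical):

    # Dump one database in directory format with 3 parallel jobs.
    # Directory format is required for a parallel dump (--jobs).
    pg_dump --host=source-host --username=postgres \
        --format=directory --jobs=3 \
        --file=mydb-dump mydb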

Note: If your data contains large objects (blobs), then the export might consume a large amount of memory, impacting instance performance. For help, see Issues with importing and exporting data.

gcloud

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. To find the service account for the Cloud SQL instance that you're exporting files from, use the
    gcloud sql instances describe command.
    gcloud sql instances describe INSTANCE_NAME

    Replace INSTANCE_NAME with the name of your Cloud SQL instance.

    In the output, look for the value that's associated with the serviceAccountEmailAddress field.

  3. To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command. For help with setting IAM permissions, see Use IAM permissions.
  4. To export data from Cloud SQL to multiple files in parallel, use the gcloud sql export sql command:
    gcloud sql export sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
    --offload \
    --parallel \
    --threads=THREAD_NUMBER \
    --database=DATABASE_NAME \
    --table=TABLE_EXPRESSION
    

    Make the following replacements:

    - INSTANCE_NAME: the name of the Cloud SQL instance that you're exporting from.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the export files are stored.
    - FOLDER_NAME: the folder where the export files are stored.
    - THREAD_NUMBER: the number of threads to use for the parallel export.
    - DATABASE_NAME: the name of the database to export.
    - TABLE_EXPRESSION: the tables to export from the specified database.

    Note: If you want to use serverless exports for up to 2 threads, then use the offload parameter. If you want to export multiple files in parallel, then use the parallel parameter. Otherwise, remove these parameters from the command.

    The export sql command doesn't export triggers or stored procedures, but does export views. To export triggers or stored procedures, use a single thread for the export. This thread uses the pg_dump tool.

    After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it. For an end-to-end sketch of these steps, see the example that follows this list.
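The following is a minimal end-to-end sketch of the preceding steps; the instance, bucket, and database names (my-instance, my-bucket, mydb) are hypothetical:

    # Find the instance's service account.
    SA=$(gcloud sql instances describe my-instance \
        --format="value(serviceAccountEmailAddress)")

    # Grant the service account write access to the bucket.
    gcloud storage buckets add-iam-policy-binding gs://my-bucket \
        --member="serviceAccount:${SA}" \
        --role="roles/storage.objectAdmin"

    # Export in parallel with 4 threads to a directory-format dump.
    gcloud sql export sql my-instance gs://my-bucket/exports/mydb-dump \
        --offload \
        --parallel \
        --threads=4 \
        --database=mydb

    # Revoke the role when the export completes.
    gcloud storage buckets remove-iam-policy-binding gs://my-bucket \
        --member="serviceAccount:${SA}" \
        --role="roles/storage.objectAdmin"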
REST v1

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    
    Make the following replacements:

    - BUCKET_NAME: the name of the bucket.
    - PROJECT_NAME: the name of your Google Cloud project.
    - LOCATION_NAME: the location of the bucket.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. Provide your instance with the legacyBucketWriter IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  3. Export data from Cloud SQL to multiple files in parallel:

    Before using any of the request data, make the following replacements:

    - PROJECT_NAME: the name of your Google Cloud project.
    - INSTANCE_NAME: the name of the Cloud SQL instance that you're exporting from.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the export files are stored.
    - FOLDER_NAME: the folder where the export files are stored.
    - DATABASE_NAME: the name of the database to export.
    - THREAD_NUMBER: the number of threads to use for the parallel export.

    Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export

    Request JSON body:

    {
     "exportContext":
       {
          "fileType": "SQL",
          "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
          "databases": ["DATABASE_NAME"],
          "offload": [TRUE|FALSE],
          "sqlExportOptions": {
            "parallel": [TRUE|FALSE],
            "threads": [THREAD_NUMBER]
           }
       }
    }
    

    To send your request, expand one of these options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"
    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response
    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
    
  4. After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it.
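The export request returns a long-running operation. The following is a minimal sketch for polling it until its status is DONE; PROJECT_NAME and OPERATION_ID come from the selfLink and name fields of the response shown in step 3:

    # Poll the export operation; repeat until "status" is "DONE".
    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID"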
For the complete list of parameters for the request, see the Cloud SQL Admin API page.

REST v1beta4

To export data from Cloud SQL to multiple files in parallel, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    Make the following replacements:

    - BUCKET_NAME: the name of the bucket.
    - PROJECT_NAME: the name of your Google Cloud project.
    - LOCATION_NAME: the location of the bucket.

    Note: You don't have to create a folder in the bucket. If the folder doesn't exist, then Cloud SQL creates it for you as a part of the process of exporting multiple files in parallel. However, if the folder exists, then it must be empty or the export operation fails.

  2. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  3. Export data from Cloud SQL to multiple files in parallel:

    Before using any of the request data, make the following replacements:

    - PROJECT_NAME: the name of your Google Cloud project.
    - INSTANCE_NAME: the name of the Cloud SQL instance that you're exporting from.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the export files are stored.
    - FOLDER_NAME: the folder where the export files are stored.
    - DATABASE_NAME: the name of the database to export.
    - THREAD_NUMBER: the number of threads to use for the parallel export.

    Note: The offload parameter enables you to use serverless exports for up to 2 threads. The parallel parameter enables you to export multiple files in parallel. To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export

    Request JSON body:

    {
     "exportContext":
       {
          "fileType": "SQL",
          "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
          "databases": ["DATABASE_NAME"],
          "offload": [TRUE|FALSE],
          "sqlExportOptions": {
            "parallel": [TRUE|FALSE],
            "threads": [THREAD_NUMBER]
           }
       }
    }
    

    To send your request, expand one of these options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export"
    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/export" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response
    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
    
  4. After the export completes, you should have files in a folder in the Cloud Storage bucket in the pg_dump directory format.

  5. If you don't need the IAM role that you set in Required roles and permissions for exporting from Cloud SQL for PostgreSQL, then revoke it.
For the complete list of parameters for the request, see the Cloud SQL Admin API page.

Export and import an entire instance

You can export or import all user databases in an entire instance using a directory-formatted, parallel export.

To export or import an entire instance, use the same commands as shown in the parallel export and parallel import sections, removing the databases or database field, respectively. If you don't specify a database, Cloud SQL runs a parallel export or import for all user databases in the instance. This excludes system databases and Cloud SQL databases used to manage internal operations.
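For example, with the gcloud CLI, omitting the --database flag exports every user database in the instance (the instance and bucket names here are hypothetical):

    # Export all user databases in the instance in parallel.
    gcloud sql export sql my-instance gs://my-bucket/exports/full-instance \
        --parallel \
        --threads=4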

In a directory-formatted parallel export to Cloud Storage, after the export successfully completes, the data associated with each database is stored in a subdirectory named for each database, similar to the following:

gs://example-bucket/test-folder/
    |---- postgres/
    |    |---- 3929.dat.gz (table data file)
    |    |---- toc.dat (metadata file)
    |
    |---- second_database/
    |    |---- 3930.dat.gz
    |    |---- 3931.dat.gz
    |    |---- toc.dat

If you want to run a parallel import for an entire instance, and the instance's files were created outside of Cloud SQL, then this subdirectory structure is required for the operation to complete successfully, as shown in the sketch at the end of this section.

When the entire instance dump structure is detected, the import database specified in the API is ignored. The operation detects the entire instance's structure from the directory name.

You can't run an entire instance export or import for other file formats.

You can't export or import an entire instance as a single SQL file or CSV file.
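A minimal sketch for producing that layout outside of Cloud SQL, assuming a hypothetical external host and the database names from the listing above:

    # Dump each database into a subdirectory named for the database.
    for db in postgres second_database; do
      pg_dump --host=source-host --username=postgres \
          --format=directory --jobs=2 \
          --file="test-folder/${db}" "${db}"
    done

    # Upload the tree, preserving the per-database subdirectories.
    gcloud storage cp --recursive test-folder gs://example-bucket/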

Import data from multiple files in parallel to Cloud SQL for PostgreSQL

The following sections contain information about importing data from multiple files in parallel to Cloud SQL for PostgreSQL.

Required roles and permissions for importing data from multiple files in parallel to Cloud SQL for PostgreSQL

To import data from Cloud Storage into Cloud SQL, the user initiating the import must have one of the following roles:

Additionally, the service account for the Cloud SQL instance must have one of the following roles:

For help with IAM roles, see Identity and Access Management.

Note: The changes that you make to the IAM permissions and roles might take a few minutes to take effect. For more information, see Access change propagation.

Import data to Cloud SQL for PostgreSQL

You can import data in parallel from multiple files that reside in Cloud Storage to your database. To do this, use the pg_restore utility with the --jobs option.
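Conceptually, this is equivalent to running pg_restore with parallel jobs against the target database. A hedged sketch with hypothetical connection details:

    # Restore a directory-format dump with 4 parallel jobs.
    # --clean/--if-exists drop existing objects before recreating them.
    pg_restore --host=127.0.0.1 --username=postgres \
        --dbname=mydb --jobs=4 \
        --clean --if-exists \
        /path/to/dump-folder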

Note: If your data contains large objects (blobs), then the import might consume a large amount of memory, impacting instance performance. For help, see Issues with importing and exporting data.

gcloud

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. To find the service account for the Cloud SQL instance that you're importing files to, use the
    gcloud sql instances describe command.
    gcloud sql instances describe INSTANCE_NAME

    Replace INSTANCE_NAME with the name of your Cloud SQL instance.

    In the output, look for the value that's associated with the serviceAccountEmailAddress field.

  4. To grant the storage.objectAdmin IAM role to the service account, use the gcloud storage buckets add-iam-policy-binding command. For help with setting IAM permissions, see Use IAM permissions.
  5. To import data from multiple files in parallel into Cloud SQL, use the gcloud sql import sql command:
    gcloud sql import sql INSTANCE_NAME gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME \
    --parallel \
    --threads=THREAD_NUMBER \
    --database=DATABASE_NAME
    

    Make the following replacements:

    - INSTANCE_NAME: the name of the Cloud SQL instance that you're importing to.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the import files are stored.
    - FOLDER_NAME: the folder where the import files are stored.
    - THREAD_NUMBER: the number of threads to use for the parallel import.
    - DATABASE_NAME: the name of the database to import into.

    Note: If you want to import multiple files in parallel, then use the parallel parameter.

    If you use the parallel parameter, and you want to drop (clean) database objects before you recreate them, then use the clean parameter. If you use the parallel parameter, and you want to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter, then use the if-exists parameter.

    Otherwise, remove these parameters from the command.

    If the command returns an error like ERROR_RDBMS, then review your permissions; this error is often caused by insufficient permissions.

  6. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them. For an end-to-end sketch of these steps, see the example that follows this list.
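The following is a minimal end-to-end sketch of the preceding steps; the instance, bucket, and database names (my-instance, my-bucket, mydb) are hypothetical:

    # Find the instance's service account and grant it access to the bucket.
    SA=$(gcloud sql instances describe my-instance \
        --format="value(serviceAccountEmailAddress)")
    gcloud storage buckets add-iam-policy-binding gs://my-bucket \
        --member="serviceAccount:${SA}" \
        --role="roles/storage.objectAdmin"

    # Import in parallel with 4 threads, dropping existing objects first.
    gcloud sql import sql my-instance gs://my-bucket/exports/mydb-dump \
        --parallel \
        --threads=4 \
        --database=mydb \
        --clean \
        --if-exists

    # Remove the binding when the import completes.
    gcloud storage buckets remove-iam-policy-binding gs://my-bucket \
        --member="serviceAccount:${SA}" \
        --role="roles/storage.objectAdmin"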
REST v1

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    
    Make the following replacements:

    - BUCKET_NAME: the name of the bucket.
    - PROJECT_NAME: the name of your Google Cloud project.
    - LOCATION_NAME: the location of the bucket.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  4. Import data from multiple files in parallel into Cloud SQL:

    Before using any of the request data, make the following replacements:

    - PROJECT_NAME: the name of your Google Cloud project.
    - INSTANCE_NAME: the name of the Cloud SQL instance that you're importing to.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the import files are stored.
    - FOLDER_NAME: the folder where the import files are stored.
    - DATABASE_NAME: the name of the database to import into.
    - THREAD_NUMBER: the number of threads to use for the parallel import.

    Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel.

    If you use the parallel parameter, then the clean parameter enables you to drop (clean) database objects before you recreate them. If you use the parallel parameter, then the ifExists parameter enables you to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter.

    To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import

    Request JSON body:

    {
      "importContext":
        {
           "fileType": "SQL",
           "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
           "databases": ["DATABASE_NAME"],
           "offload": [TRUE|FALSE],
           "sqlImportOptions": {
             "parallel": [TRUE|FALSE],
             "clean": [TRUE|FALSE],
             "ifExists": [TRUE|FALSE],
             "threads": [THREAD_NUMBER]
            }
        }
     }
    

    To send your request, expand one of these options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"
    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response
    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
    

    To use a different user for the import, specify the importContext.importUser property.

    For the complete list of parameters for the request, see the Cloud SQL Admin API page.
  5. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them.
REST v1beta4

To import data from multiple files in parallel into Cloud SQL, complete the following steps:

  1. Create a Cloud Storage bucket:
    gcloud storage buckets create gs://BUCKET_NAME --project=PROJECT_NAME --location=LOCATION_NAME
    
    Make the following replacements:

    - BUCKET_NAME: the name of the bucket.
    - PROJECT_NAME: the name of your Google Cloud project.
    - LOCATION_NAME: the location of the bucket.
  2. Upload the files to your bucket.

    Note: Make sure that the files that you're uploading are in the pg_dump directory format. For more information, see Export data from multiple files in parallel.

    For help with uploading files to buckets, see Upload objects from files.

  3. Provide your instance with the storage.objectAdmin IAM role for your bucket. For help with setting IAM permissions, see Use IAM permissions.
  4. Import data from multiple files in parallel into Cloud SQL:

    Before using any of the request data, make the following replacements:

    - PROJECT_NAME: the name of your Google Cloud project.
    - INSTANCE_NAME: the name of the Cloud SQL instance that you're importing to.
    - BUCKET_NAME: the name of the Cloud Storage bucket.
    - BUCKET_PATH: the path in the bucket where the import files are stored.
    - FOLDER_NAME: the folder where the import files are stored.
    - DATABASE_NAME: the name of the database to import into.
    - THREAD_NUMBER: the number of threads to use for the parallel import.

    Note: The offload parameter enables you to use serverless imports for up to 2 threads. The parallel parameter enables you to import multiple files in parallel.

    If you use the parallel parameter, then the clean parameter enables you to drop (clean) database objects before you recreate them. If you use the parallel parameter, then the ifExists parameter enables you to include the IF EXISTS SQL statement with each DROP statement that's produced by the clean parameter.

    To use these features, set the values of these parameters to TRUE. Otherwise, set their values to FALSE.

    HTTP method and URL:

    POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import

    Request JSON body:

     {
       "importContext":
         {
            "fileType": "SQL",
            "uri": "gs://BUCKET_NAME/BUCKET_PATH/FOLDER_NAME",
            "databases": ["DATABASE_NAME"],
            "offload": [TRUE|FALSE],
            "sqlImportOptions": {
              "parallel": [TRUE|FALSE],
              "clean": [TRUE|FALSE],
              "ifExists": [TRUE|FALSE],
              "threads": [THREAD_NUMBER]
             }
         }
      }
    

    To send your request, expand one of these options:

    curl (Linux, macOS, or Cloud Shell)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import"
    PowerShell (Windows)

    Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

    Save the request body in a file named request.json, and execute the following command:

    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
    -Method POST `
    -Headers $headers `
    -ContentType: "application/json; charset=utf-8" `
    -InFile request.json `
    -Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/INSTANCE_NAME/import" | Select-Object -Expand Content

    You should receive a JSON response similar to the following:

    Response
    {
      "kind": "sql#operation",
      "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/instances/DESTINATION_INSTANCE_NAME",
      "status": "PENDING",
      "user": "user@example.com",
      "insertTime": "2020-01-21T22:43:37.981Z",
      "operationType": "UPDATE",
      "name": "OPERATION_ID",
      "targetId": "INSTANCE_NAME",
      "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_NAME/operations/OPERATION_ID",
      "targetProject": "PROJECT_NAME"
    }
    

    To use a different user for the import, specify the importContext.importUser property.

    For the complete list of parameters for the request, see the Cloud SQL Admin API page.
  5. If you don't need the IAM permissions that you set in Required roles and permissions for importing to Cloud SQL for PostgreSQL, then use gcloud storage buckets remove-iam-policy-binding to remove them.
