Database Migration Service uses migration jobs to migrate data from your source database instance to the destination Cloud SQL database instance.
Creating a migration job includes defining settings for the job, specifying the source connection profile, configuring the destination Cloud SQL instance, defining the connectivity method, selecting the databases to migrate, and testing and creating the job.
The Database Migration Service wizard helps you create a migration job. The wizard consists of the following panes: Get started, Define a source, Define a destination, Define connectivity method, Configure migration databases, and Test and create migration job. Information on how to populate each pane is provided in the various sections of this page.
You can pause the creation of a migration job by clicking Save and exit. All of the data that was populated until that point is saved in a draft migration job, and you can access this job on the Drafts tab.
To finish creating a migration job, navigate to the Drafts tab, and then click the job. The creation flow resumes from where you left off. The job remains a draft until you click Create or Create and start.
Define settings for the migration job
Provide a name for the migration job.
Choose a friendly name that helps you identify the migration job. Don't include sensitive or personally identifiable information in the job name.
Keep the auto-generated Migration job ID.
Select the source database engine.
Select the destination database engine.
Select the destination region for your migration. This is where the Database Migration Service instance is created, and should be selected based on the location of the services that need your data, such as Compute Engine instances, App Engine apps, and other services. After you choose the destination region, this selection can't be changed.
Important: If you plan to use the Cloud SQL for PostgreSQL Enterprise Plus edition, make sure your region is supported for that edition. See Cloud SQL for PostgreSQL Enterprise Plus edition region support.
Specify the migration job type: One-time (snapshot only) or Continuous (snapshot + ongoing changes).
Review the required prerequisites that are generated automatically to reflect how the environment must be prepared for a migration job. These prerequisites can include how to configure the source database and how to connect it to the destination Cloud SQL database instance. It's best to complete these prerequisites at this step, but you can complete them at any time before you test the migration job or start it. For more information about these prerequisites, see Configure your source.
Click Save and continue.
If you have created a connection profile, then select it from the list of existing connection profiles.
If you haven't created a connection profile, then create one by clicking Create a connection profile at the bottom of the drop-down list, and then perform the same steps as in Create a source connection profile.
The speed of data dump parallelism is related to the amount of load on your source database; choose a parallelism setting that balances migration speed against that load.
If you want to use adjusted data dump parallelism settings, make sure to increase the max_replication_slots, max_wal_senders, and max_worker_processes parameters on your source database. You can verify your configuration by running the migration job test at the end of migration job creation.
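For example, the three parameters can be raised in postgresql.conf; the values below are illustrative placeholders, and you should size them to your workload. All three parameters require a server restart to take effect.

```
# postgresql.conf — illustrative values only; size to your workload
max_replication_slots = 10
max_wal_senders = 10
max_worker_processes = 10
```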
You can also migrate to an existing instance. For more information, see Migration job for an existing instance.
Provide an alphanumeric password for the destination Cloud SQL instance. This will be the password for the postgres administrator account in the instance.
You can either enter the password manually or click Generate to have Database Migration Service create one for you automatically.
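If you prefer to generate a password outside the console, any local generator works; a minimal sketch using openssl, whose 32-character hex output satisfies the alphanumeric requirement:

```shell
# Generate a 32-character alphanumeric (hex) password locally.
DB_PASSWORD="$(openssl rand -hex 16)"
echo "${DB_PASSWORD}"
```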
Cloud SQL for PostgreSQL editions come with different sets of features, available machine types, and pricing. Make sure you consult the Cloud SQL documentation to choose the edition that is appropriate for your needs. For more information, see Introduction to Cloud SQL for PostgreSQL editions.
The instance is created in the region that you selected when you defined settings for the migration job. Select a zone within that region or leave the zone set to Any for Google to select one automatically.
If you are configuring your instance for high availability, select Multiple zones (Highly available). You can select both the primary and the secondary zone. Note that certain conditions apply when the secondary zone is used during instance creation.
If you plan to connect with VPC peering (private IP), you need the servicenetworking.services.addPeering IAM permission, which is granted, for example, by the compute.networkAdmin IAM role.
If you plan to connect with IP allowlisting, select Public IP.
Optionally, click Authorized networks, and either authorize a network or a proxy to connect to the Cloud SQL instance. Cloud SQL instances accept connections only from addresses that are authorized. For more information about configuring public access to the instance, see Configure public IP.
Learn more about PostgreSQL machine types.
Data cache is an optional feature available for Cloud SQL for PostgreSQL Enterprise Plus edition instances that adds a high-speed local solid state drive to your destination database. This feature can introduce additional costs to your Cloud SQL instance. For more information on data cache, see Data cache overview in the Cloud SQL documentation.
Specify whether you want to manage the encryption of the data that's migrated from the source to the destination. By default, your data is encrypted with a key that's managed by Google Cloud. If you want to manage your encryption, then you can use a customer-managed encryption key (CMEK). To do so:
If you don't see your key, then click ENTER KEY RESOURCE NAME to provide the resource name of the key that you want to use. For example, you can enter projects/my-project-name/locations/my-location/keyRings/my-keyring/cryptoKeys/my-key in the Key resource name field, and then click SAVE.
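The key resource name follows a fixed pattern, so it can be assembled from its parts; a sketch using the example names from this page:

```shell
# Assemble a CMEK resource name from its components
# (the project, location, keyring, and key names are the examples above).
PROJECT=my-project-name
LOCATION=my-location
KEYRING=my-keyring
KEY=my-key
KEY_RESOURCE="projects/${PROJECT}/locations/${LOCATION}/keyRings/${KEYRING}/cryptoKeys/${KEY}"
echo "${KEY_RESOURCE}"
```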
As part of creating the migration job, Database Migration Service verifies that the CMEK exists and that Database Migration Service has permissions to use the key.
If Database Migration Service doesn't have these permissions, then a message appears, specifying that the Database Migration Service service account can't use the CMEK. Click GRANT to give Database Migration Service permissions to use the key.
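The GRANT button performs an IAM grant that you can also make yourself; a hedged sketch, assuming the Database Migration Service service agent uses the gcp-sa-datamigration domain (the project number is a hypothetical placeholder, and you should verify the agent's address in your project's IAM page):

```shell
# Compose the Database Migration Service service-agent address
# (assumed format; verify it in your project before granting).
PROJECT_NUMBER=123456789012  # hypothetical project number
DMS_SA="service-${PROJECT_NUMBER}@gcp-sa-datamigration.iam.gserviceaccount.com"
# The grant itself would then be (not executed here):
#   gcloud kms keys add-iam-policy-binding my-key \
#     --keyring=my-keyring --location=my-location \
#     --member="serviceAccount:${DMS_SA}" \
#     --role=roles/cloudkms.cryptoKeyEncrypterDecrypter
echo "${DMS_SA}"
```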
For more information about creating a CMEK, see Using customer-managed encryption keys (CMEK).
Labels help organize your instances. For example, you can organize labels by cost center or environment. Labels are also included in your bill so you can see the distribution of costs across your labels.
Click CREATE & CONTINUE.
Creating read replicas and enabling high availability are available only after you promote the migration.
From the Connectivity method drop-down menu, select a network connectivity method. This method defines how the newly created Cloud SQL instance will connect to the source database. Current network connectivity methods include IP allowlist, reverse SSH tunnel, and VPC peering.
If you select the reverse SSH tunnel network connectivity method, then select the Compute Engine VM instance that will host the tunnel.
After specifying the instance, Google will provide a script that performs the steps to set up the tunnel between the source and destination databases. You'll need to run the script in the Google Cloud CLI.
Run the commands from a machine that has connectivity to both the source database and to Google Cloud.
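The generated script is specific to your environment, but the mechanism it sets up is a standard SSH reverse port forward; a sketch with hypothetical hostnames, composing (not running) such a command:

```shell
# Reverse forward: connections to port 5432 on the tunnel VM are
# relayed to the source database reachable from this machine.
SOURCE_DB_HOST=source-db.internal  # hypothetical source hostname
TUNNEL_VM=tunnel-vm                # the Compute Engine VM selected above
TUNNEL_CMD="ssh -N -R 5432:${SOURCE_DB_HOST}:5432 ${TUNNEL_VM}"
echo "${TUNNEL_CMD}"
```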
After selecting the network connectivity method and providing any additional information for the method, click CONFIGURE & CONTINUE.
Select the databases that you want to migrate. If you want to migrate specific databases, filter the list that appears and select the databases that you want Database Migration Service to migrate into the destination.
If the list doesn't appear and a database discovery error is displayed, click Reload. If database discovery fails, the job migrates all databases. You can continue with creating a migration job and fix connectivity errors later.
In this final step, review the summary of the migration job settings, source, destination, and connectivity method, and then test the validity of the migration job setup. If any issues are encountered, you can modify the migration job's settings. Not all settings are editable.
Click TEST JOB to verify that the migration job is valid and that the source and destination versions are compatible.
If the test fails, then you can address the problem in the appropriate part of the flow, and return to re-test.
For more information about reasons why the test may fail and how to troubleshoot any issues associated with the test failing, see Diagnose issues for PostgreSQL.
The migration job can be created even if the test fails, but after the job is started, it may fail at some point during the run.
Click CREATE & START JOB to create the migration job and start it immediately, or click CREATE JOB to create the migration job without immediately starting it.
If the job isn't started at the time that it's created, then it can be started from the Migration jobs page by clicking START.
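Migration jobs can also be started from the command line; a hedged sketch composing (not running) the gcloud command, with a hypothetical job name and region:

```shell
# Compose the start command for an existing migration job
# (job name and region are hypothetical placeholders).
JOB=my-migration-job
REGION=us-central1
START_CMD="gcloud database-migration migration-jobs start ${JOB} --region=${REGION}"
echo "${START_CMD}"
```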
Regardless of when the migration job starts, your organization is charged for the existence of the destination instance.
Key Point: When you start the migration job, Database Migration Service begins the full dump, briefly locking the source database. If your source is in Amazon RDS or Amazon Aurora, Database Migration Service additionally requires a short write downtime (typically under a minute) at the start of the migration.
For more information, see Known limitations.
The migration job is added to the migration jobs list and can be viewed directly.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-09 UTC.