This page describes how to set up and use a managed import for data when replicating from an external server to Cloud SQL.
You must complete all the steps on this page. When finished, you can administer and monitor the source representation instance the same way as you would any other Cloud SQL instance.
Before you begin
Before you begin, complete these steps:
The replication user on the external server is configured to accept connections from any host (%). Update this user account so that it can be used only with the Cloud SQL replica.
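To see which hosts the replication user currently accepts connections from before you restrict it, you can query the mysql.user table on the external server. This is an optional check, not part of the required steps; the connection parameters and the account name replicationUser are placeholders.
# Optional check: list the Host values currently associated with the replication user.
mysql --host=EXTERNAL_SERVER_IP --user=ADMIN_USER --password \
  -e "SELECT User, Host FROM mysql.user WHERE User = 'replicationUser';"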
There are four combinations of migration and dump types. The privileges required for each migration and dump combination are listed below.
Update privileges
To update privileges, open a terminal on the external server and enter the following commands.
mysql client
For GTID:
UPDATE mysql.user SET Host='NEW_HOST' WHERE Host='OLD_HOST' AND User='USERNAME';
GRANT REPLICATION SLAVE, EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT, RELOAD ON *.* TO 'USERNAME'@'HOST';
FLUSH PRIVILEGES;
For binlog:
UPDATE mysql.user SET Host='NEW_HOST' WHERE Host='OLD_HOST' AND User='USERNAME';
GRANT REPLICATION SLAVE, EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT, RELOAD ON *.* TO 'GCP_USERNAME'@'HOST';
FLUSH PRIVILEGES;
example
UPDATE mysql.user
SET Host='192.0.2.0'
WHERE Host='%'
AND User='replicationUser';
GRANT REPLICATION SLAVE, EXECUTE, SELECT, SHOW VIEW, REPLICATION CLIENT,
RELOAD ON *.* TO 'username'@'host.com';
FLUSH PRIVILEGES;
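After running these statements, you can optionally confirm that the grants were applied as expected. The sketch below reuses the example values above; adjust the account and host to match your own environment.
# Optional check: show the privileges now granted to the replication user.
mysql --host=EXTERNAL_SERVER_IP --user=ADMIN_USER --password \
  -e "SHOW GRANTS FOR 'replicationUser'@'192.0.2.0';"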
NEW_HOST: Specify the outgoing IP of the Cloud SQL replica.
OLD_HOST: The current value assigned to Host that you want to change.
USERNAME: The replication user account on the external server.
GCP_USERNAME: The username for the user account.
HOST: The hostname for the user account.

Verify your replication settings
After your setup is complete, ensure that the Cloud SQL replica can replicate from the external server.
The following external sync settings must be correct.
To verify these settings, open a Cloud Shell terminal and enter the following commands:
curl
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "SYNC_MODE",
"syncParallelLevel": "SYNC_PARALLEL_LEVEL",
"mysqlSyncConfig": {
"initialSyncFlags": "SYNC_FLAGS"
}
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/REPLICA_INSTANCE_ID/verifyExternalSyncSettings
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"syncParallelLevel": "optimal"
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/myproject/instances/myreplica/verifyExternalSyncSettings
example w/ sync flags
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"syncParallelLevel": "optimal"
"mysqlSyncConfig": {
"initialSyncFlags": [{"name": "max-allowed-packet", "value": "1073741824"}, {"name": "hex-blob"}, {"name": "compress"}]
}
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/replica-instance/verifyExternalSyncSettings
These calls return a list of type sql#externalSyncSettingErrorList. If the list is empty, then there are no errors. A response without errors appears like this:
{ "kind": "sql#externalSyncSettingErrorList" }

SYNC_MODE: Ensure that you can keep the Cloud SQL replica and the external server in sync after replication is set up. Sync modes include EXTERNAL_SYNC_MODE_UNSPECIFIED, ONLINE, and OFFLINE.
SYNC_PARALLEL_LEVEL: Verify the setting that controls the speed at which data from tables of a database are transferred. The following values are available:
min: Takes the lowest amount of compute resources on the database. This is the slowest speed for transferring data.
optimal: Provides a balanced performance with an optimal load on the database.
max: Provides the highest speed for transferring data, but this might cause an increased load on the database.
Note: The default value for this parameter is optimal because this setting provides a good speed to transfer the data and it has a reasonable impact on the database. We recommend that you use this value.
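If you want to check the verification result from a script rather than by eye, the following sketch inspects the response with jq. It assumes jq is installed and that any problems are reported under an errors key in the sql#externalSyncSettingErrorList response; treat that field name as an assumption and check the Sql Admin API reference for your version.
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
RESPONSE="$(curl --silent \
  --header "Authorization: Bearer ${ACCESS_TOKEN}" \
  --header 'Content-Type: application/json' \
  --data '{"syncMode": "online", "syncParallelLevel": "optimal"}' \
  -X POST \
  https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/REPLICA_INSTANCE_ID/verifyExternalSyncSettings)"

# Print any reported errors; "errors" is the assumed field name for the error list.
if [ "$(printf '%s' "${RESPONSE}" | jq 'has("errors")')" = "true" ]; then
  echo "Verification reported errors:"
  printf '%s\n' "${RESPONSE}" | jq '.errors'
else
  echo "No external sync setting errors reported."
fi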
If you don't have permission to access the global read lock on the external server, as might be the case with Amazon RDS and Amazon Aurora, pause writes to your server as described in the following steps:
Return to the Logs Explorer. When you see the following log entry, stop writing to the database on your external server. In most cases, this is required only for a few seconds.
DUMP_IMPORT(START): Start importing data, please pause any write to the
external primary database.
When you see the following log entry in Logs Explorer, re-enable writing to the database on your external server.
DUMP_IMPORT(SYNC): Consistent state on primary and replica. Writes to the
external primary may resume.
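If your source is an Amazon RDS or Aurora read replica (see the caution that follows), one way to handle this window is to pause replication on that read replica instead of pausing application writes on the primary. The stored procedures below are the RDS-provided helpers as of this writing; confirm them against the Amazon documentation for your engine version, and treat the connection parameters as placeholders.
# On the source RDS/Aurora read replica, pause replication when the
# DUMP_IMPORT(START) log entry appears...
mysql --host=SOURCE_READ_REPLICA_ENDPOINT --user=ADMIN_USER --password \
  -e "CALL mysql.rds_stop_replication;"

# ...and resume it when the DUMP_IMPORT(SYNC) log entry appears.
mysql --host=SOURCE_READ_REPLICA_ENDPOINT --user=ADMIN_USER --password \
  -e "CALL mysql.rds_start_replication;"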
After verifying that you can replicate from the external server, start the replication. For the initial import process, replication runs at speeds of up to 500 GB per hour. However, this speed can vary based on the machine tier, data disk size, network throughput, and the nature of your database.
Caution: When you initiate an import from a database on a managed service, such as Amazon RDS or Amazon Aurora, Cloud SQL instructs you to stop writes for a few seconds so that the data dump is consistent. To avoid downtime for writes to your managed service primary instance, you can select a read replica as your replication source instead of the primary instance. During the phase when stopping writes is required, you can temporarily stop replication to the source read replica and then resume replication once the stop writes phase is over.
During the initial import process, do not perform any DDL operations on the external server. Doing so could cause inconsistencies during the import. After the import process completes, the replica uses the binary logs on the external server to catch up to the current state of the external server.
curl
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "SYNC_MODE",
"skipVerification": "SKIP_VERIFICATION",
"syncParallelLevel": "SYNC_PARALLEL_LEVEL",
"mysqlSyncConfig": {
"initialSyncFlags": "SYNC_FLAGS"
}
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/REPLICA_INSTANCE_ID/startExternalSync
example
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"syncParallelLevel": "optimal"
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/replica-instance/startExternalSync
example w/ sync flags
gcloud auth login
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{
"syncMode": "online",
"syncParallelLevel": "optimal"
"skipVerification": false,
"mysqlSyncConfig": {
"initialSyncFlags": [{"name": "max-allowed-packet", "value": "1073741824"}, {"name": "hex-blob"}, {"name": "compress"}]
}
}' \
-X POST \
https://sqladmin.googleapis.com/sql/v1beta4/projects/MyProject/instances/replica-instance/startExternalSync
SYNC_MODE: Verify that you can keep the Cloud SQL replica and the external server in sync after replication is set up.
SKIP_VERIFICATION: Whether to skip the built-in verification step before syncing your data. Skipping verification is recommended only if you have already verified your replication settings.
SYNC_PARALLEL_LEVEL: Provide a setting that controls the speed at which data from tables of a database are transferred. The following values are available:
min: Takes the lowest amount of compute resources on the database. This is the slowest speed for transferring data.
optimal: Provides a balanced performance with an optimal load on the database.
max: Provides the highest speed for transferring data, but this might cause an increased load on the database.
Note: The default value for this parameter is optimal because this setting provides a good speed to transfer the data and it has a reasonable impact on the database. We recommend that you use this value.
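Once the sync has started, you can follow the progress of the initial import as an operation on the replica. A minimal sketch using gcloud, with placeholder project and instance names:
# List the most recent operations on the replica to check the status of the import.
gcloud sql operations list \
  --instance=REPLICA_INSTANCE_ID \
  --project=PROJECT_ID \
  --limit=5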
To migrate with custom database flags, you can use the following allowed flags:
For allowed values, see the MySQL public docs.
Monitor the migration
Once you start replication from the external server, you need to monitor replication. To learn more, see Monitoring replication. You can then complete your migration.
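As a quick command-line complement to the monitoring documentation, you can connect to the Cloud SQL replica and check the replication threads directly. This is a hedged sketch; the connection parameters are placeholders, and newer MySQL versions report the same information through SHOW REPLICA STATUS instead.
# Check replication status on the Cloud SQL replica; Seconds_Behind_Master
# indicates how far the replica lags behind the external primary.
mysql --host=REPLICA_IP_ADDRESS --user=ADMIN_USER --password \
  -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"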
Troubleshoot
Consider the following troubleshooting options:
Issue: Read replica didn't start replicating on creation.
Troubleshooting: There's probably a more specific error in the log files. Inspect the logs in Cloud Logging to find the actual error.

Issue: Unable to create read replica - invalidFlagValue error.
Troubleshooting: One of the flags in the request is invalid. It could be a flag you provided explicitly or one that was set to a default value. First, check that the value of the max_connections flag is greater than or equal to the value on the primary. If the max_connections flag is set appropriately, inspect the logs in Cloud Logging to find the actual error. If the error is set Service Networking service account as servicenetworking.serviceAgent role on consumer project, then disable and re-enable the Service Networking API, as sketched below. This action creates the service account necessary to continue with the process.
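A sketch of disabling and re-enabling the Service Networking API from the command line, with a placeholder project ID; you can also do this from the APIs & Services page in the console.
# Disable and re-enable the Service Networking API to recreate its service account.
# If other enabled services depend on it, gcloud may ask you to rerun with --force.
gcloud services disable servicenetworking.googleapis.com --project=PROJECT_ID
gcloud services enable servicenetworking.googleapis.com --project=PROJECT_ID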
Issue: The replica instance is using too much memory.
Troubleshooting: Restart the replica instance to reclaim the temporary memory space.
Issue: Replication stopped.
Troubleshooting: The maximum storage limit was reached and automatic storage increase isn't enabled. Edit the instance to enable automatic storage increase.
Issue: Queries such as DELETE ... WHERE field < 50000000 cause replication lag with row-based replication because a huge number of updates pile up on the replica.
Troubleshooting: Some possible solutions include:
See Tips for working with flags for more information about this flag.
To avoid a long transaction, some possible solutions include:
On the primary instance that's displaying the error message, set the parallel replication flags:
1. Set the binlog_transaction_dependency_tracking and transaction_write_set_extraction flags:
binlog_transaction_dependency_tracking=COMMIT_ORDER
transaction_write_set_extraction=OFF
2. Set the slave_pending_jobs_size_max flag:
slave_pending_jobs_size_max=33554432
3. Set the transaction_write_set_extraction flag:
transaction_write_set_extraction=XXHASH64
4. Set the binlog_transaction_dependency_tracking flag:
binlog_transaction_dependency_tracking=WRITESET
Recreate the replica after stopping all running queries.
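If the primary displaying the error is a self-managed MySQL server, the following sketch applies those settings dynamically in the order listed above. It assumes you have the privileges to set global variables; on a managed service such as Amazon RDS you would change them through a parameter group instead, and you should also persist the values in the server configuration file so they survive a restart.
# Step-by-step flag changes on a self-managed primary (values from the steps above).
mysql --host=PRIMARY_HOST --user=ADMIN_USER --password -e "
  SET GLOBAL binlog_transaction_dependency_tracking = 'COMMIT_ORDER';
  SET GLOBAL transaction_write_set_extraction = 'OFF';
  SET GLOBAL slave_pending_jobs_size_max = 33554432;
  SET GLOBAL transaction_write_set_extraction = 'XXHASH64';
  SET GLOBAL binlog_transaction_dependency_tracking = 'WRITESET';
"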
Additionally, for MySQL, consider the following options:
Issue: Lost connection to MySQL server during query when dumping table.
Troubleshooting: The source may have become unavailable, or the dump contained packets that were too large. Make sure the external primary is available to connect to. You can also modify the values of the net_read_timeout and net_write_timeout flags on the source instance to stop the error, as sketched after this entry. For more information on the allowable values for these flags, see Configure database flags.
To learn more about using mysqldump flags for managed import migration, see Allowed and default initial sync flags.
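The following sketch raises the network timeouts on a self-managed source instance; the values are illustrative, and you should also persist them in the server configuration file. On a managed source such as Amazon RDS, change them through a parameter group instead.
# Raise the network read/write timeouts on the external source (values are examples).
mysql --host=EXTERNAL_SERVER_IP --user=ADMIN_USER --password -e "
  SET GLOBAL net_read_timeout = 120;
  SET GLOBAL net_write_timeout = 120;
"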
Make sure the replication flags such as binlog-do-db, binlog-ignore-db, replicate-do-db, or replicate-ignore-db are not set in a conflicting way. Run the command show master status on the primary instance to see the current settings.
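A minimal sketch of running this check against the external primary (connection parameters are placeholders); the Binlog_Do_DB and Binlog_Ignore_DB columns of the output show any binary log filtering that is in effect.
# Check binary log filtering on the external primary.
mysql --host=EXTERNAL_SERVER_IP --user=ADMIN_USER --password \
  -e "SHOW MASTER STATUS\G"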
Check the mysql.err log files. Run SHOW SLAVE STATUS, and check for the following fields in the output:
Issue: mysqld check failed: data disk is full.
Troubleshooting: The data disk of the replica instance is full. Increase the disk size of the replica instance. You can either manually increase the disk size or enable auto storage increase.
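Either change can be made with gcloud. A sketch with placeholder names and an illustrative size follows; confirm the flags against the gcloud reference for your SDK version.
# Manually increase the replica's disk size (the size value is an example)...
gcloud sql instances patch REPLICA_INSTANCE_ID --project=PROJECT_ID --storage-size=200GB

# ...or let Cloud SQL grow the disk automatically when it runs low on space.
gcloud sql instances patch REPLICA_INSTANCE_ID --project=PROJECT_ID --storage-auto-increase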
Review your replication logs
When you verify your replication settings, logs are produced.
You can view these logs by following these steps:
Go to the Logs Viewer in the Google Cloud console.
Select the replication-setup.log log file.
If the Cloud SQL replica is unable to connect to the external server, confirm the following: