Important: If you're using the Database Migration Service API to create migration jobs, then make sure you use the mysqldump utility to update the dump when you run the REST API call. For more information, see Manage migration jobs using the API.
You can run the mysqldump utility directly against your MySQL database, using whatever options you require. However, if you're exporting to import the data into a Cloud SQL database, then use the mysqldump utility with the following flags:
--databases
Specify an explicit list of databases to export. This list must not contain the system databases (sys, mysql, performance_schema, and information_schema).
--hex-blob
If your database contains any binary fields, then you must use this flag to ensure that your binary fields are imported correctly.
--single-transaction
Starts a transaction before running. Rather than lock the entire database, this lets mysqldump read the database in its current state, making for a consistent data dump.
--routines
Includes stored procedures and functions.
When using mysqldump version 8 or later to export MySQL databases earlier than version 8, also use:
--column-statistics=0
This flag removes the COLUMN_STATISTICS table from the database export to avoid the Unknown table 'COLUMN_STATISTICS' in information_schema (1109) error. For more information, see Diagnose issues.
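For example, the following sketch (the host, user, and database names are hypothetical) first checks the client version, because --column-statistics=0 only applies when the mysqldump client is version 8 or later, and then exports two user databases to a local file with the required flags:
# Check the client version; --column-statistics=0 is only needed (and only exists)
# in mysqldump 8 and later.
mysqldump --version

# Hypothetical export of two user databases to a local file with the required flags.
mysqldump \
    -h 203.0.113.10 -P 3306 -u migration_user -p \
    --databases sales inventory \
    --hex-blob \
    --single-transaction \
    --routines \
    --column-statistics=0 \
    > source-database.sql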
It's also recommended to use the following flags:
--no-autocommit
--default-character-set=utf8mb4
--master-data
From a machine with network connectivity to your MySQL server, run the following command:
mysqldump \
-h [SOURCE_ADDR] -P [SOURCE_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob \
--no-autocommit \
--default-character-set=utf8mb4 \
--master-data=1 \
--single-transaction \
--routines \
| gzip \
| gcloud storage cp - gs://[BUCKET_NAME]/[DUMP_FILENAME].gz
Note: If your source database server doesn't support GTID, then use the --master-data=1 property; otherwise, don't use this property. Also, if your source database server supports GTID, then use the --set-gtid-purged=on property; otherwise, don't use this property.
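To decide between these two properties, you can check whether GTID is enabled on the source. A minimal sketch, assuming hypothetical connection values:
# Hypothetical connection values; prints gtid_mode. ON means the source supports GTID,
# so use --set-gtid-purged=on; OFF means use --master-data=1 instead.
mysql -h 203.0.113.10 -P 3306 -u migration_user -p \
    -e "SHOW VARIABLES LIKE 'gtid_mode';"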
If the source of the migration is a Relational Database Service (RDS) for MySQL:
The master-data property isn't supported.
If your source database server supports GTID, then use the --set-gtid-purged=on property; otherwise, don't use this property.
This command might look like the following example:
mysqldump \
-h [SOURCE_ADDR] -P [SOURCE_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob \
--no-autocommit \
--default-character-set=utf8mb4 \
--set-gtid-purged=on \
--single-transaction \
--routines \
| gzip \
| gcloud storage cp - gs://[BUCKET_NAME]/[DUMP_FILENAME].gz
Also, you should configure RDS instances to retain binary logs for a longer period. The command to do this might look like the following example:
# Sets the retention period to one week.
call mysql.rds_set_configuration('binlog retention hours', 168);
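To confirm that the setting took effect, you can read the current RDS configuration back; a minimal sketch, assuming hypothetical connection values:
# Hypothetical connection values; lists the RDS configuration parameters,
# including the 'binlog retention hours' value set above.
mysql -h my-rds-instance.example.us-east-1.rds.amazonaws.com -P 3306 -u admin -p \
    -e "call mysql.rds_show_configuration;"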
Replace [PROPERTIES_IN_BRACKETS] with the following values:
[SOURCE_ADDR]: The IPv4 address or hostname for the source database server.
[SOURCE_PORT]: The port for the source database server.
[USERNAME]: The MySQL user account.
[DBS]: A space-separated list of the databases on the source database server to include in the dump. Use the SHOW DATABASES MySQL command to list your databases.
[BUCKET_NAME]: The bucket in Cloud Storage that's created by the user and that's used for storing the dump file (for example, replica-bucket).
[DUMP_FILENAME]: The dump's filename, ending with a .gz file extension (for example, source-database.sql.gz).
Note: The gzip command causes the dump file to be compressed. If your database contains data that does not compress well, such as binary incompressible data or JPG images, then remove | gzip from the command.
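For example, a minimal sketch (using the hypothetical bucket and file names from the table above) of creating the destination bucket ahead of time and verifying the upload afterward:
# Hypothetical bucket and file names; create the destination bucket once,
# then confirm the compressed dump was uploaded after the pipeline finishes.
gcloud storage buckets create gs://replica-bucket
gcloud storage ls -l gs://replica-bucket/source-database.sql.gz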