This document is intended for database administrators, cloud architects, and operations professionals interested in deploying a highly available MySQL topology on Google Kubernetes Engine.
Follow this tutorial to learn how to deploy a MySQL InnoDB Cluster and a MySQL InnoDB ClusterSet, in addition to MySQL Router middleware on your GKE cluster, and how to perform upgrades.
Objectives
In this tutorial, you learn how to:
- Deploy a MySQL InnoDB Cluster on a regional GKE cluster.
- Deploy MySQL Router middleware for connection routing.
- Create a MySQL InnoDB ClusterSet for disaster tolerance.
- Upgrade, roll back, and scale the MySQL deployment.

Architecture
The following sections describe the architecture of the solution you build in this tutorial.
MySQL InnoDB Cluster
In your regional GKE cluster, using a StatefulSet, you deploy a MySQL database instance with the necessary naming and configuration to create a MySQL InnoDB Cluster. To provide fault tolerance and high availability, you deploy three database instance Pods. This ensures that a majority of Pods in different zones are available at any given time for a successful primary election using a consensus protocol, and makes your MySQL InnoDB Cluster tolerant of single zonal failures.
Figure 1: Example architecture of a single MySQL InnoDB Cluster

Once deployed, you designate one Pod as the primary instance to serve both read and write operations. The other two Pods are secondary read-only replicas. If the primary instance experiences an infrastructure failure, you can promote one of these two replica Pods to become the primary.
In a separate namespace, you deploy three MySQL Router Pods to provide connection routing for improved resilience. Instead of directly connecting to the database service, your applications connect to MySQL Router Pods. Each Router Pod is aware of the status and purpose of each MySQL InnoDB Cluster Pod, and routes application operations to the respective healthy Pod. The routing state is cached in the Router Pods and updated from the cluster metadata stored on each node of the MySQL InnoDB Cluster. In the case of an instance failure, the Router adjusts the connection routing to a live instance.
MySQL InnoDB ClusterSet
You can create a MySQL InnoDB ClusterSet from an initial MySQL InnoDB Cluster. This lets you increase disaster tolerance if the primary cluster is no longer available.
Figure 2: Example multi-region ClusterSet architecture, which contains one primary cluster and one replica cluster

If the MySQL InnoDB Cluster primary instance is no longer available, you can promote a replica cluster in the ClusterSet to primary. When using MySQL Router middleware, your application does not need to track the health of the primary database instance. Routing is adjusted to send connections to the new primary after the election has occurred. However, it is your responsibility to ensure that applications connecting to your MySQL Router middleware follow best practices for resilience, so that connections are retried if an error occurs during cluster failover.
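For reference, a controlled switchover or an emergency failover of this kind is typically performed with MySQL Shell's AdminAPI against the ClusterSet object. The replica cluster name below is a placeholder, and this is a sketch rather than the exact procedure used in this tutorial:

\js
clusterset = dba.getClusterSet()
// Controlled switchover to a healthy replica cluster (placeholder name):
clusterset.setPrimaryCluster('myreplicacluster')
// Emergency failover when the current primary cluster is unreachable:
// clusterset.forcePrimaryCluster('myreplicacluster')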
Costs
In this document, you use billable components of Google Cloud, including GKE and Compute Engine.
To generate a cost estimate based on your projected usage, use the pricing calculator.
New Google Cloud users might be eligible for a free trial.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
Set up your project
Make sure that you have the following roles on the project: roles/storage.objectViewer, roles/logging.logWriter, roles/artifactregistry.admin, roles/container.clusterAdmin, roles/container.serviceAgent, roles/serviceusage.serviceUsageAdmin, roles/iam.serviceAccountAdmin.
Check for the roles
1. In the Google Cloud console, go to the IAM page.
2. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator.

Grant the roles
1. In the Google Cloud console, go to the IAM page.
2. In the New principals field, enter your user identifier. This is typically the email address for a Google Account.
In this tutorial, you use Cloud Shell to manage resources hosted on Google Cloud. Cloud Shell comes preinstalled with Docker, the kubectl command-line tool, and the gcloud CLI.
To use Cloud Shell to set up your environment:
Set environment variables.
export PROJECT_ID=PROJECT_ID
export CLUSTER_NAME=gkemulti-west
export CONTROL_PLANE_LOCATION=CONTROL_PLANE_LOCATION
Replace the following values:
- PROJECT_ID: your Google Cloud project ID.
- CONTROL_PLANE_LOCATION: the Compute Engine region of your cluster's control plane, such as us-west1. Typically, you want a region that is close to you.

Set the default environment variables.
gcloud config set project PROJECT_ID
gcloud config set compute/region CONTROL_PLANE_LOCATION
Clone the code repository.
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
Change to the working directory.
cd kubernetes-engine-samples/databases/gke-stateful-mysql/kubernetes
In this section, you create a regional GKE cluster. Unlike a zonal cluster, a regional cluster has its control plane replicated across several zones, so an outage in a single zone doesn't make the control plane unavailable.
To create a GKE cluster, follow these steps:
Autopilot
In Cloud Shell, create a GKE Autopilot cluster in the us-west1 region.
gcloud container clusters create-auto $CLUSTER_NAME \
--location=$CONTROL_PLANE_LOCATION
Get the GKE cluster credentials.
gcloud container clusters get-credentials $CLUSTER_NAME \
--location=$CONTROL_PLANE_LOCATION
Deploy a workload across three zones. This tutorial uses a Kubernetes Deployment. A Deployment is a Kubernetes API object that lets you run multiple replicas of Pods that are distributed among the nodes in a cluster.
kubectl apply -f prepare-for-ha.yaml
By default, Autopilot provisions resources in two zones. The Deployment defined in prepare-for-ha.yaml ensures that Autopilot provisions nodes across three zones in your cluster by setting replicas: 3, podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution, and topologyKey: "topology.kubernetes.io/zone".
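The following sketch shows the relevant fields of such a Deployment manifest. The container image and labels are placeholders; the actual prepare-for-ha.yaml in the sample repository may differ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prepare-three-zone-ha
spec:
  replicas: 3
  selector:
    matchLabels:
      app: prepare-three-zone-ha
  template:
    metadata:
      labels:
        app: prepare-three-zone-ha
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: prepare-three-zone-ha
            topologyKey: "topology.kubernetes.io/zone"
      containers:
      - name: prepare
        image: registry.k8s.io/pause:3.9   # placeholder image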
Check the status of the Deployment.
kubectl get deployment prepare-three-zone-ha --watch
When you see three Pods in the ready state, cancel this command with CTRL+C
. The output is similar to the following:
NAME READY UP-TO-DATE AVAILABLE AGE
prepare-three-zone-ha 0/3 3 0 9s
prepare-three-zone-ha 1/3 3 1 116s
prepare-three-zone-ha 2/3 3 2 119s
prepare-three-zone-ha 3/3 3 3 2m16s
Run this script to validate that your Pods have been deployed across three zones.
bash ../scripts/inspect_pod_node.sh default
Each line of the output corresponds to a Pod, and the second column indicates the zone. The output is similar to the following:
gk3-gkemulti-west1-default-pool-eb354e2d-z6mv us-west1-b prepare-three-zone-ha-7885d77d9c-8f7qb
gk3-gkemulti-west1-nap-25b73chq-739a9d40-4csr us-west1-c prepare-three-zone-ha-7885d77d9c-98fpn
gk3-gkemulti-west1-default-pool-160c3578-bmm2 us-west1-a prepare-three-zone-ha-7885d77d9c-phmhj
Standard
In Cloud Shell, create a GKE Standard cluster in the us-west1 region.
gcloud container clusters create $CLUSTER_NAME \
--location=$CONTROL_PLANE_LOCATION \
--machine-type="e2-standard-2" \
--disk-type="pd-standard" \
--num-nodes="5"
Get the GKE cluster credentials.
gcloud container clusters get-credentials $CLUSTER_NAME \
--location=$CONTROL_PLANE_LOCATION
In this section, you deploy one MySQL StatefulSet. A StatefulSet is a Kubernetes controller that maintains a persistent, unique identity for each of its Pods. The StatefulSet consists of three MySQL replicas.
To deploy the MySQL StatefulSet, follow these steps:
Create a namespace for the StatefulSet.
kubectl create namespace mysql1
Create the MySQL secret.
kubectl apply -n mysql1 -f secret.yaml
The password is deployed with each Pod, and is used by management scripts and commands for MySQL InnoDB Cluster and ClusterSet deployment in this tutorial.
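As a rough illustration, such a Secret might look like the following sketch. The Secret name and key names here are assumptions; the values are consumed by the Pods as the MYSQL_ROOT_PASSWORD and MYSQL_ADMIN_PASSWORD environment variables used later in this tutorial.

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret           # assumed name; check secret.yaml for the actual name
type: Opaque
stringData:
  password: "change-me"        # exposed to the Pods as MYSQL_ROOT_PASSWORD
  admin-password: "change-me"  # exposed to the Pods as MYSQL_ADMIN_PASSWORD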
Create the StorageClass.
kubectl apply -n mysql1 -f storageclass.yaml
This storage class uses the pd-balanced Persistent Disk type, which balances performance and cost. The volumeBindingMode field is set to WaitForFirstConsumer, meaning that GKE delays provisioning of a PersistentVolume until the Pod is created. This setting ensures that the disk is provisioned in the same zone where the Pod is scheduled.
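A minimal sketch of such a StorageClass follows. The class name is an assumption; the provisioner is the GKE Persistent Disk CSI driver, and the disk type and binding mode match the description above.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql-balanced         # assumed name; check storageclass.yaml for the actual name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
volumeBindingMode: WaitForFirstConsumer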
Deploy the StatefulSet of MySQL instance Pods.
kubectl apply -n mysql1 -f c1-mysql.yaml
This command deploys the StatefulSet consisting of three replicas. In this tutorial, the primary MySQL cluster is deployed across three zones in us-west1. The output is similar to the following:
service/mysql created
statefulset.apps/dbc1 created
In this tutorial, the resource limits and requests are set to minimal values to save cost. When planning for a production workload, make sure to set these values appropriately for your organization's needs.
Verify that the StatefulSet was created successfully.
kubectl get statefulset -n mysql1 --watch
It can take about 10 minutes for the StatefulSet to be ready.
When all three Pods are in a ready state, exit the command using Ctrl+C. If you see PodUnschedulable errors due to insufficient CPU or memory, wait a few minutes for the control plane to resize to accommodate the workload.
The output is similar to the following:
NAME READY AGE
dbc1 1/3 39s
dbc1 2/3 50s
dbc1 3/3 73s
To inspect the placement of your Pods on the GKE cluster nodes, run this script:
bash ../scripts/inspect_pod_node.sh mysql1 mysql
The output shows the GKE node name, the zone where the node is provisioned, and the Pod name, and looks similar to the following:
gke-gkemulti-west-5-default-pool-4bcaca65-jch0 us-west1-b dbc1-0
gke-gkemulti-west-5-default-pool-1ac6e8b5-ddjx us-west1-c dbc1-1
gke-gkemulti-west-5-default-pool-1f5baa66-bf8t us-west1-a dbc1-2
The columns in the output represent the hostname, cloud zone, and Pod name, respectively.
The topologySpreadConstraints policy in the StatefulSet specification (c1-mysql.yaml) directs the scheduler to place the Pods evenly across the failure domain (topology.kubernetes.io/zone).

The podAntiAffinity policy enforces the constraint that Pods must not be placed on the same GKE cluster node (kubernetes.io/hostname). For the MySQL instance Pods, this policy results in the Pods being deployed evenly across the three zones in the Google Cloud region. This placement enables high availability of the MySQL InnoDB Cluster by placing each database instance in a separate failure domain.
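The scheduling-related portion of the Pod template might look like the following sketch, assuming the Pods carry the app: mysql label used elsewhere in this tutorial; the actual c1-mysql.yaml may differ in its details.

      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: "topology.kubernetes.io/zone"
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: mysql
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: mysql
            topologyKey: "kubernetes.io/hostname"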
To configure a MySQL InnoDB Cluster, follow these steps:
In the Cloud Shell terminal, set the group replication configurations for the MySQL instances to be added to your cluster.
bash ../scripts/c1-clustersetup.sh
The script remotely connects to each of the three MySQL instances to set and persist the following system variables:
- group_replication_ip_allowlist: allows the instance within the cluster to connect to any instance in the group.
- binlog_transaction_dependency_tracking='WRITESET': allows parallelized transactions that don't conflict.

In MySQL versions earlier than 8.0.22, use group_replication_ip_whitelist instead of group_replication_ip_allowlist.
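A minimal sketch of what such a setup loop could do follows; the actual c1-clustersetup.sh in the repository may differ, but the SET PERSIST statements mirror the variables listed above.

for i in 0 1 2; do
  kubectl -n mysql1 exec dbc1-"$i" -- bash -c 'mysqlsh \
    --uri="root:$MYSQL_ROOT_PASSWORD@localhost" --sql --execute \
    "SET PERSIST group_replication_ip_allowlist = \"mysql.mysql1.svc.cluster.local\";
     SET PERSIST binlog_transaction_dependency_tracking = \"WRITESET\";"'
done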
Open a second terminal, so that you do not need to create a shell for each Pod.
Connect to MySQL Shell on the Pod dbc1-0
.
kubectl -n mysql1 exec -it dbc1-0 -- \
/bin/bash \
-c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql.mysql1.svc.cluster.local"'
Verify the MySQL group replication allowlist for connecting to other instances.
\sql SELECT @@group_replication_ip_allowlist;
The output is similar to the following:
+----------------------------------+
| @@group_replication_ip_allowlist |
+----------------------------------+
| mysql.mysql1.svc.cluster.local |
+----------------------------------+
Verify that the server-id is unique on each of the instances.
\sql SELECT @@server_id;
The output is similar to the following:
+-------------+
| @@server_id |
+-------------+
| 21 |
+-------------+
Configure each instance for MySQL InnoDB Cluster usage and create an administrator account on each instance.
\js
dba.configureInstance('root@dbc1-0.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"),clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root@dbc1-1.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"),clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root@dbc1-2.mysql.mysql1.svc.cluster.local', {password: os.getenv("MYSQL_ROOT_PASSWORD"),clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
The administrator account must have the same username and password on every instance for the MySQL InnoDB Cluster to function properly. Each command produces output similar to the following:
...
The instance 'dbc1-2.mysql:3306' is valid to be used in an InnoDB cluster.
Cluster admin user 'icadmin'@'%' created.
The instance 'dbc1-2.mysql.mysql1.svc.cluster.local:3306' is already
ready to be used in an InnoDB cluster.
Successfully enabled parallel appliers.
Verify that the instance is ready to be used in a MySQL InnoDB Cluster.
dba.checkInstanceConfiguration()
The output is similar to the following:
...
The instance 'dbc1-0.mysql.mysql1.svc.cluster.local:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}
Optionally, you can connect to each MySQL instance and repeat this command. For example, run this command to check the status on the dbc1-1 instance:
kubectl -n mysql1 exec -it dbc1-0 -- \
/bin/bash \
-c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-1.mysql.mysql1.svc.cluster.local" \
--js --execute "dba.checkInstanceConfiguration()"'
Next, create the MySQL InnoDB Cluster using the MySQL Shell AdminAPI createCluster command. Start with the dbc1-0 instance, which will be the primary instance for the cluster, then add two additional replicas to the cluster.
To initialize the MySQL InnoDB Cluster, follow these steps:
Create the MySQL InnoDB Cluster.
var cluster=dba.createCluster('mycluster');
Running the createCluster command initializes a MySQL InnoDB Cluster with the host dbc1-0 as the primary and stores the cluster reference in the cluster variable.
The output looks similar to the following:
A new InnoDB cluster will be created on instance 'dbc1-0.mysql:3306'.
Validating instance configuration at dbc1-0.mysql:3306...
This instance reports its own address as dbc1-0.mysql.mysql1.svc.cluster.local:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other instances using
'dbc1-0.mysql:33061'. Use the localAddress
option to override.
Creating InnoDB cluster 'mycluster' on
'dbc1-0.mysql.mysql1.svc.cluster.local:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL
instances.
At least 3 instances are needed for the cluster to be able to withstand
up to one server failure.
Add the second instance to the cluster.
cluster.addInstance('icadmin@dbc1-1.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});
Add the remaining instance to the cluster.
cluster.addInstance('icadmin@dbc1-2.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});
The output is similar to the following:
...
The instance 'dbc1-2.mysql:3306' was successfully added to the cluster.
Verify the cluster's status.
cluster.status()
This command shows the status of the cluster. The topology consists of three hosts: one primary and two secondary instances.
The output is similar to the following:
{
"clusterName": "mysql1",
"defaultReplicaSet": {
"name": "default",
"primary": "dbc1-0.mysql:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"dbc1-0.mysql:3306": {
"address": "dbc1-0.mysql:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
},
"dbc1-1.mysql:3306": {
"address": "dbc1-1.mysql:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
},
"dbc1-2.mysql:3306": {
"address": "dbc1-2.mysql:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.28"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "dbc1-0.mysql:3306"
}
Optionally, you can call cluster.status({extended:1}) to obtain additional status details.
To create a sample database, follow these steps:
Create a database and load data into the database.
\sql
create database loanapplication;
use loanapplication
CREATE TABLE loan (loan_id INT unsigned AUTO_INCREMENT PRIMARY KEY, firstname VARCHAR(30) NOT NULL, lastname VARCHAR(30) NOT NULL , status VARCHAR(30) );
Insert sample data into the database. To insert data, you must be connected to the primary instance of the cluster.
INSERT INTO loan (firstname, lastname, status) VALUES ( 'Fred','Flintstone','pending');
INSERT INTO loan (firstname, lastname, status) VALUES ( 'Betty','Rubble','approved');
Verify that the table contains the two rows inserted in the previous step.
SELECT * FROM loan;
The output is similar to the following:
+---------+-----------+------------+----------+
| loan_id | firstname | lastname | status |
+---------+-----------+------------+----------+
| 1 | Fred | Flintstone | pending |
| 2 | Betty | Rubble | approved |
+---------+-----------+------------+----------+
2 rows in set (0.0010 sec)
You can create a MySQL InnoDB ClusterSet to manage replication from your primary cluster to replica clusters, using a dedicated ClusterSet replication channel.
A MySQL InnoDB ClusterSet provides disaster tolerance for MySQL InnoDB Cluster deployments by linking a primary MySQL InnoDB Cluster with one or more replicas of itself in alternate locations, such as multiple zones and multiple regions.
If you closed MySQL Shell, create a new shell by running this command in a new Cloud Shell terminal:
kubectl -n mysql1 exec -it dbc1-0 -- \
/bin/bash -c 'mysqlsh \
--uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql.mysql1.svc.cluster.local"'
To create a MySQL InnoDB ClusterSet, follow these steps:
In your MySQL Shell terminal, obtain a cluster object.
\js
cluster=dba.getCluster()
The output is similar to the following:
<Cluster:mycluster>
Initialize a MySQL InnoDB ClusterSet with the existing MySQL InnoDB Cluster stored in the cluster object as the primary.
clusterset=cluster.createClusterSet('clusterset')
The output is similar to the following:
A new ClusterSet will be created based on the Cluster 'mycluster'.
* Validating Cluster 'mycluster' for ClusterSet compliance.
* Creating InnoDB ClusterSet 'clusterset' on 'mycluster'...
* Updating metadata...
ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
<ClusterSet:clusterset>
Check the status of your MySQL InnoDB ClusterSet.
clusterset.status()
The output is similar to the following:
{
"clusters": {
"mycluster": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "dbc1-0.mysql:3306"
}
},
"domainName": "clusterset",
"globalPrimaryInstance": "dbc1-0.mysql:3306",
"primaryCluster": "mycluster",
"status": "HEALTHY",
"statusText": "All Clusters available."
}
Optionally, you can call clusterset.status({extended:1}) to obtain additional status details, including information about the cluster.
Exit MySQL Shell.
\q
You can deploy a MySQL Router to direct client application traffic to the proper clusters. Routing is based on the connection port of the application issuing a database operation: write operations are routed to the primary instance through the Router's read-write port (6446), and read operations are routed to a read-only instance through the read-only port (6447).
When you start a MySQL Router, it is bootstrapped against the MySQL InnoDB ClusterSet deployment. The MySQL Router instances connected with the MySQL InnoDB ClusterSet are aware of any controlled switchovers or emergency failovers and direct traffic to the new primary cluster.
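For example, once the Router is deployed (in the steps that follow), a client inside the cluster could reach the primary through the read-write port and a replica through the read-only port. This sketch assumes the mysql-router Service exposes the Router's default classic-protocol ports 6446 and 6447:

# Read-write connection: port 6446 routes to the current primary
kubectl -n mysql1 exec -it dbc1-0 -- /bin/bash -c \
  'mysqlsh --sql --uri="root:$MYSQL_ROOT_PASSWORD@mysql-router.mysql1.svc.cluster.local:6446"'

# Read-only connection: port 6447 routes to a read-only instance
kubectl -n mysql1 exec -it dbc1-0 -- /bin/bash -c \
  'mysqlsh --sql --uri="root:$MYSQL_ROOT_PASSWORD@mysql-router.mysql1.svc.cluster.local:6447"'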
To deploy a MySQL Router, follow these steps:
In the Cloud Shell terminal, deploy the MySQL Router.
kubectl apply -n mysql1 -f c1-router.yaml
The output is similar to the following:
configmap/mysql-router-config created
service/mysql-router created
deployment.apps/mysql-router created
Check the readiness of the MySQL Router deployment.
kubectl -n mysql1 get deployment mysql-router --watch
When all three Pods are ready, the output is similar to the following:
NAME READY UP-TO-DATE AVAILABLE AGE
mysql-router 3/3 3 0 3m36s
If you see a PodUnschedulable error in the console, wait a minute or two while GKE provisions more nodes. Refresh, and you should see 3/3 OK.
Start MySQL Shell on any member of the existing cluster.
kubectl -n mysql1 exec -it dbc1-0 -- \
/bin/bash -c 'mysqlsh --uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql"'
This command connects to the dbc1-0 Pod, then starts a shell connected to the dbc1-0 MySQL instance.
Verify the router configuration.
clusterset=dba.getClusterSet()
clusterset.listRouters()
The output is similar to the following:
{
"domainName": "clusterset",
"routers": {
"mysql-router-7cd8585fbc-74pkm::": {
"hostname": "mysql-router-7cd8585fbc-74pkm",
"lastCheckIn": "2022-09-22 23:26:26",
"roPort": 6447,
"roXPort": 6449,
"rwPort": 6446,
"rwXPort": 6448,
"targetCluster": null,
"version": "8.0.27"
},
"mysql-router-7cd8585fbc-824d4::": {
...
},
"mysql-router-7cd8585fbc-v2qxz::": {
...
}
}
}
Exit MySQL Shell.
\q
Run this script to inspect the placement of the MySQL Router Pods.
bash ../scripts/inspect_pod_node.sh mysql1 | sort
The script shows the node and Cloud zone placement of all of the Pods in the mysql1 namespace. The output is similar to the following:
gke-gkemulti-west-5-default-pool-1ac6e8b5-0h9v us-west1-c mysql-router-6654f985f5-df97q
gke-gkemulti-west-5-default-pool-1ac6e8b5-ddjx us-west1-c dbc1-1
gke-gkemulti-west-5-default-pool-1f5baa66-bf8t us-west1-a dbc1-2
gke-gkemulti-west-5-default-pool-1f5baa66-kt03 us-west1-a mysql-router-6654f985f5-qlfj9
gke-gkemulti-west-5-default-pool-4bcaca65-2l6s us-west1-b mysql-router-6654f985f5-5967d
gke-gkemulti-west-5-default-pool-4bcaca65-jch0 us-west1-b dbc1-0
Notice that the MySQL Router Pods are distributed evenly across the zones, and that no Router Pod is placed on the same node as a MySQL Pod or as another MySQL Router Pod.
Updates for both MySQL and Kubernetes are released on a regular schedule. Follow operational best practices to update your software environment regularly. By default, GKE manages cluster and node pool upgrades for you. Kubernetes and GKE also provide additional features to facilitate MySQL software upgrades.
Plan for GKE upgrades
You can take proactive steps and set configurations to mitigate risk and facilitate a smoother cluster upgrade when you are running stateful services, including:
- Standard clusters: Follow GKE best practices for upgrading clusters. Choose an appropriate upgrade strategy (such as surge upgrades or blue-green upgrades) to ensure that the upgrades happen during the maintenance window. To learn more, see Upgrade a cluster running a stateful workload. Autopilot clusters are automatically upgraded, based on the release channel you selected.
- Use maintenance windows to ensure upgrades happen when you intend them. Before the maintenance window, ensure your database backups are successful.
- Before allowing traffic to the upgraded MySQL nodes, use Readiness Probes and Liveness Probes to ensure they are ready for traffic.
- Create probes that assess whether replication is in sync before accepting traffic. This can be done through custom scripts, depending on the complexity and scale of your database (see the sketch after this list).
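Such a readiness probe might run a small SQL check against the local instance, for example verifying that the Group Replication member is ONLINE. This is a sketch only; the probe command, timings, and credential handling are assumptions rather than part of the tutorial's manifests, and the snippet belongs inside the MySQL container spec.

        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - |
              mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -N -e \
                "SELECT MEMBER_STATE FROM performance_schema.replication_group_members \
                 WHERE member_id = @@server_uuid;" | grep -q ONLINE
          initialDelaySeconds: 30
          periodSeconds: 10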
When a MySQL InnoDB Cluster is running on GKE, there must be a sufficient number of instances running at any time to meet the quorum requirement.
In this tutorial, given a MySQL cluster of three instances, two instances must be available to form a quorum. A PodDisruptionBudget policy allows you to limit the number of Pods that can be terminated at any given time. This is useful for both steady-state operations of your stateful services and for cluster upgrades.
To ensure that a limited number of Pods are concurrently disrupted, you set the PDB for your workload to maxUnavailable: 1. This ensures that at any point in the service operation, no more than one Pod is not running.

Alternatively, you can set the minAvailable value to ensure that a minimum number of Pods are running. However, if you use minAvailable alone to guarantee cluster availability, make sure that you increase the value if the size of the cluster increases. In contrast, the maxUnavailable value provides quorum protection for the cluster without any changes; the tradeoff is that only one instance can be disrupted for upgrade at a time.
The PodDisruptionBudget policy manifest used in this tutorial (mysql-pdb-maxunavailable.yaml) sets the maximum number of unavailable Pods to one for your MySQL application.
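A sketch of what such a manifest might contain follows; the selector label app: mysql matches the StatefulSet Pods used throughout this tutorial and the PDB name matches the one queried in the next steps, but the file in the repository may differ slightly.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mysql-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: mysql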
To apply the PDB policy to your cluster, follow these steps:
Apply the PDB policy using kubectl.
kubectl apply -n mysql1 -f mysql-pdb-maxunavailable.yaml
View the status of the PDB.
kubectl get poddisruptionbudgets -n mysql1 mysql-pdb -o yaml
In the status section of the output, see the currentHealthy and desiredHealthy Pod counts. The output is similar to the following:
status:
...
currentHealthy: 3
desiredHealthy: 2
disruptionsAllowed: 1
expectedPods: 3
...
Kubernetes and GKE provide features to facilitate upgrades of the MySQL binary. However, you need to perform some operations to prepare for the upgrades, and in a production environment you should follow your organization's best practices for building, testing, and maintaining container images. To keep this tutorial simple, you don't build and persist a container image; instead, you use the publicly available MySQL images.
Deploy the upgraded MySQL binary
To perform the MySQL binary upgrade, you issue a declarative command that modifies the image version of the StatefulSet resource. GKE performs the necessary steps to stop the current Pod, deploy a new Pod with the upgraded binary, and attach the persistent disk to the new Pod.
Verify that the PDB was created.
kubectl get poddisruptionbudgets -n mysql1
Get the list of stateful sets.
kubectl get statefulsets -n mysql1
Get the list of running Pods using the app label.
kubectl get pods --selector=app=mysql -n mysql1
Update the MySQL image in the stateful set.
kubectl -n mysql1 \
set image statefulset/dbc1 \
mysql=mysql/mysql-server:8.0.30
The output is similar to the following:
statefulset.apps/dbc1 image updated
Check the status of the terminating Pods and new Pods.
kubectl get pods --selector=app=mysql -n mysql1
During the upgrade, you can verify the status of the rollout, the new Pods, and the existing Service.
Confirm the upgrade by running the rollout status command.
kubectl rollout status statefulset/dbc1 -n mysql1
The output is similar to the following:
partitioned roll out complete: 3 new pods have been updated...
Confirm the image version by inspecting the stateful set.
kubectl get statefulsets -o wide -n mysql1
The output is similar to the following:
NAME READY AGE CONTAINERS IMAGES
dbc1 3/3 37m mysql mysql/mysql-server:8.0.30
Check the status of the cluster.
kubectl -n mysql1 \
exec -it dbc1-0 -- \
/bin/bash \
-c 'mysqlsh \
--uri="root:$MYSQL_ROOT_PASSWORD@dbc1-1.mysql.mysql1.svc.cluster.local" \
--js \
--execute "print(dba.getClusterSet().status({extended:1})); print(\"\\n\")"'
For each cluster instance, look for the status and version values in the output. The output is similar to the following:
...
"status": "ONLINE",
"version": "8.0.30"
...
When you revert the deployment of an upgraded binary version, the rollout process is reversed and a new set of Pods is deployed with the previous image version.
To revert the deployment to the previous working version, use the rollout undo command:
kubectl rollout undo statefulset/dbc1 -n mysql1
The output is similar to the following:
statefulset.apps/dbc1 rolled back
Scale your database cluster horizontally
To scale your MySQL InnoDB Cluster horizontally, you add additional nodes to the GKE cluster node pool (only required if you are using Standard), deploy additional MySQL instances, then add each instance to the existing MySQL InnoDB Cluster.
Add nodes to your Standard cluster
This operation is not needed if you are using an Autopilot cluster.
To add nodes to your Standard cluster, follow the instructions below for Cloud Shell or the Google Cloud console. For detailed steps, see Resize a node pool.

gcloud
In Cloud Shell, resize the default node pool to eight instances in each managed instance group.
gcloud container clusters resize ${CLUSTER_NAME} \
--node-pool default-pool \
--num-nodes=8
Console
To add nodes to your Standard cluster:
1. In the Google Cloud console, go to the gkemulti-west1 Cluster page.
2. In the node pool details, change the Number of nodes value from 5 to 8 nodes.

To deploy additional MySQL Pods to scale your cluster horizontally, follow these steps:
In Cloud Shell, update the number of replicas in the MySQL deployment from three replicas to five replicas.
kubectl scale -n mysql1 --replicas=5 -f c1-mysql.yaml
Verify the progress of the deployment.
kubectl -n mysql1 get pods --selector=app=mysql -o wide
To determine whether the Pods are ready, use the --watch flag to watch the deployment. If you are using Autopilot clusters and see PodUnschedulable errors, this might indicate GKE is provisioning nodes to accommodate the additional Pods.
Configure the group replication settings for the new MySQL instances before adding them to the cluster.
bash ../scripts/c1-clustersetup.sh 3 4
The script submits the commands to the instances running on the Pods with ordinals 3 and 4.
Open MySQL Shell.
kubectl -n mysql1 \
exec -it dbc1-0 -- \
/bin/bash \
-c 'mysqlsh \
--uri="root:$MYSQL_ROOT_PASSWORD@dbc1-0.mysql"'
Configure the two new MySQL instances.
dba.configureInstance('root@dbc1-3.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
dba.configureInstance('root@dbc1-4.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), clusterAdmin: 'icadmin', clusterAdminPassword: os.getenv("MYSQL_ADMIN_PASSWORD")});
The commands check if the instance is configured properly for MySQL InnoDB Cluster usage and perform the necessary configuration changes.
Add one of the new instances to the primary cluster.
cluster = dba.getCluster()
cluster.addInstance('icadmin@dbc1-3.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});
Add a second new instance to the primary cluster.
cluster.addInstance('icadmin@dbc1-4.mysql', {password: os.getenv("MYSQL_ROOT_PASSWORD"), recoveryMethod: 'clone'});
Obtain the ClusterSet status, which also includes the Cluster status.
clusterset = dba.getClusterSet()
clusterset.status({extended: 1})
The output is similar to the following:
"domainName": "clusterset",
"globalPrimaryInstance": "dbc1-0.mysql:3306",
"metadataServer": "dbc1-0.mysql:3306",
"primaryCluster": "mycluster",
"status": "HEALTHY",
"statusText": "All Clusters available."
Exit MySQL Shell.
\q
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project
The easiest way to avoid billing is to delete the project you created for the tutorial.
Caution: Deleting a project deletes all of its resources. If you want to keep the project, or to retain resources such as the project's appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
Last updated 2025-08-12 UTC.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-12 UTC."],[],[]]
RetroSearch is an open source project built by @garambo | Open a GitHub Issue
Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo
HTML:
3.2
| Encoding:
UTF-8
| Version:
0.7.4