This page shows you how to route traffic across multiple Google Kubernetes Engine (GKE) clusters in different regions using Multi Cluster Ingress, with an example using two clusters.
For a detailed comparison between Multi Cluster Ingress (MCI), Multi-cluster Gateway (MCG), and load balancer with Standalone Network Endpoint Groups (LB and Standalone NEGs), see Choose your multi-cluster load balancing API for GKE.
To learn more about deploying Multi Cluster Ingress, see Deploying Ingress across clusters.
These steps require elevated permissions and should be performed by a GKE administrator.
Before you begin
Before you start, make sure that you have performed the following tasks:
- If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
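For example, to set a default region (us-central1 is used here only as an illustration; choose the location you work in most often), you can run:
gcloud config set compute/region us-central1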
Multi Cluster Ingress has the following requirements:
If you use Standard mode clusters, ensure that you meet the following requirements. Autopilot clusters already meet these requirements:
- Clusters must be VPC-native.
- Clusters must have the HttpLoadBalancing add-on enabled. This add-on is enabled by default; you must not disable it.
Multi Cluster Ingress has the following limitations:
- Don't create load balancer resources with the prefix mci- that are not managed by Multi Cluster Ingress or they will be deleted. Google Cloud uses the prefix mci-[6 char hash] to manage the Compute Engine resources that Multi Cluster Ingress deploys.
In this exercise, you perform the following steps:
- Create two GKE clusters, gke-us and gke-eu, in the us-central1 and europe-west1 regions.
- Register the clusters to your project's fleet.
- Enable Multi Cluster Ingress and specify gke-us as the config cluster.
The following diagram shows what your environment will look like after you complete the exercise:
In the diagram, there are two GKE clusters named gke-us and gke-eu in the us-central1 and europe-west1 regions, respectively. The clusters are registered to a fleet so that the Multi Cluster Ingress controller can recognize them. A fleet lets you logically group and normalize your GKE clusters, making administration of infrastructure easier and enabling the use of multi-cluster features such as Multi Cluster Ingress. You can learn more about the benefits of fleets and how to create them in the fleet management documentation.
If Multi Cluster Ingress is the only GKE Enterprise capability that you are using, then we recommend that you use standalone pricing. If your project is using other GKE Enterprise on Google Cloud components or capabilities, you should enable the entire GKE Enterprise platform. This lets you use all GKE Enterprise features for a single per-vCPU charge.
The APIs that you must enable depend on the Multi Cluster Ingress pricing that you use.
If the GKE Enterprise API (anthos.googleapis.com) is enabled, then your project is billed according to the number of cluster vCPUs and GKE Enterprise pricing.
You can change the Multi Cluster Ingress billing model from standalone to GKE Enterprise, or from GKE Enterprise to standalone, at any time without impacting Multi Cluster Ingress resources or traffic.
Warning: Don't disable the GKE Enterprise API if there are other active GKE Enterprise components in use in your project, or your active components might experience an outage.
Standalone pricing
To enable standalone pricing, perform the following steps:
Confirm that the GKE Enterprise API is disabled in your project:
gcloud services list --project=PROJECT_ID | grep anthos.googleapis.com
Replace PROJECT_ID with the project ID where your GKE clusters are running.
If the output is an empty response, the GKE Enterprise API is disabled in your project and any Multi Cluster Ingress resources are billed using standalone pricing.
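If the output instead includes anthos.googleapis.com, and you are certain that no other GKE Enterprise components are in use in your project (see the warning earlier on this page), one way to switch to standalone billing is to disable the API. A minimal sketch; review any dependent services before disabling:
gcloud services disable anthos.googleapis.com --project=PROJECT_ID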
Enable the required APIs in your project:
gcloud services enable \
multiclusteringress.googleapis.com \
gkehub.googleapis.com \
container.googleapis.com \
multiclusterservicediscovery.googleapis.com \
--project=PROJECT_ID
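To double-check that the standalone APIs are now enabled, you can filter the enabled services list. This is an optional sanity check, assuming a POSIX shell with grep available:
gcloud services list --enabled --project=PROJECT_ID \
    | grep -E 'multiclusteringress|multiclusterservicediscovery|gkehub'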
GKE Enterprise pricing
To enable GKE Enterprise pricing, enable the required APIs in your project:
gcloud services enable \
anthos.googleapis.com \
multiclusteringress.googleapis.com \
gkehub.googleapis.com \
container.googleapis.com \
multiclusterservicediscovery.googleapis.com \
--project=PROJECT_ID
After anthos.googleapis.com is enabled in your project, any clusters registered to Connect are billed according to GKE Enterprise pricing.
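To confirm which billing model is in effect, you can re-run the same check used in the standalone pricing steps; if anthos.googleapis.com appears in the output, GKE Enterprise pricing applies:
gcloud services list --enabled --project=PROJECT_ID | grep anthos.googleapis.com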
Create two GKE clusters named gke-us and gke-eu in the us-central1 and europe-west1 regions, respectively. You can create the clusters in either Autopilot or Standard mode; follow one of the two sets of instructions below.
Autopilot
Create the gke-us cluster in the us-central1 region:
gcloud container clusters create-auto gke-us \
--location=us-central1 \
--release-channel=stable \
--project=PROJECT_ID
Replace PROJECT_ID with your Google Cloud project ID.
Create the gke-eu cluster in the europe-west1 region:
gcloud container clusters create-auto gke-eu \
--location=europe-west1 \
--release-channel=stable \
--project=PROJECT_ID
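Cluster creation can take several minutes. To confirm that both clusters exist and have reached the RUNNING state, you can list them. This is an optional check, not a required step:
gcloud container clusters list --project=PROJECT_ID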
Standard
Create the two clusters with Workload Identity Federation for GKE enabled.
Create the gke-us cluster in the us-central1 region:
gcloud container clusters create gke-us \
--location=us-central1 \
--enable-ip-alias \
--workload-pool=PROJECT_ID.svc.id.goog \
--release-channel=stable \
--project=PROJECT_ID
Replace PROJECT_ID with your Google Cloud project ID.
Create the gke-eu cluster in the europe-west1 region:
gcloud container clusters create gke-eu \
--location=europe-west1 \
--enable-ip-alias \
--workload-pool=PROJECT_ID.svc.id.goog \
--release-channel=stable \
--project=PROJECT_ID
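To verify that Workload Identity Federation for GKE is enabled on a Standard cluster, you can read back the workload pool. A sketch for gke-us; the expected value is PROJECT_ID.svc.id.goog:
gcloud container clusters describe gke-us \
    --location=us-central1 \
    --project=PROJECT_ID \
    --format="value(workloadIdentityConfig.workloadPool)"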
Configure credentials for your clusters and rename the cluster contexts to make it easier to switch between clusters when deploying resources.
Retrieve the credentials for your clusters:
gcloud container clusters get-credentials gke-us \
--location=us-central1 \
--project=PROJECT_ID
gcloud container clusters get-credentials gke-eu \
--location=europe-west1 \
--project=PROJECT_ID
The credentials are stored locally so that you can use your kubectl client to access the cluster API servers. By default, an auto-generated context name is created for each cluster.
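To see the auto-generated context names (they follow the pattern gke_PROJECT_ID_LOCATION_CLUSTER_NAME, as used in the rename commands below), you can list your kubectl contexts:
kubectl config get-contexts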
Rename the cluster contexts:
kubectl config rename-context gke_PROJECT_ID_us-central1_gke-us gke-us
kubectl config rename-context gke_PROJECT_ID_europe-west1_gke-eu gke-eu
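To confirm that the renamed contexts work, you can run a simple read-only command against each cluster, for example:
kubectl --context gke-us get nodes
kubectl --context gke-eu get nodes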
Register your clusters to your project's fleet as follows.
Note: If you have chosen to enable the entire GKE Enterprise platform, you can also register clusters from the GKE Enterprise pages in the Google Cloud console. Learn more in Register a GKE cluster to your fleet.
Register your clusters:
gcloud container fleet memberships register gke-us \
--gke-cluster us-central1/gke-us \
--enable-workload-identity \
--project=PROJECT_ID
gcloud container fleet memberships register gke-eu \
--gke-cluster europe-west1/gke-eu \
--enable-workload-identity \
--project=PROJECT_ID
Confirm that your clusters have successfully been registered to the fleet:
gcloud container fleet memberships list --project=PROJECT_ID
The output is similar to the following:
NAME EXTERNAL_ID
gke-us 0375c958-38af-11ea-abe9-42010a800191
gke-eu d3278b78-38ad-11ea-a846-42010a840114
After you register your clusters, GKE deploys the gke-mcs-importer Pod to your cluster.
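If you want to confirm that the importer is running, you can search for it across namespaces in each cluster. The exact Pod name and namespace are managed by GKE and can vary, so this is only a quick check:
kubectl --context gke-us get pods --all-namespaces | grep gke-mcs-importer
kubectl --context gke-eu get pods --all-namespaces | grep gke-mcs-importer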
You can learn more about registering clusters in Register a GKE cluster to your fleet.
Specify a config cluster
The config cluster is a GKE cluster you choose to be the central point of control for Ingress across the member clusters. This cluster must already be registered to the fleet. For more information, see Config cluster design.
Note: Even if you have zonal GKE clusters, the Multi Cluster Ingress controller is only available in a region. To enable Multi Cluster Ingress, you must specify a region instead of a zone in the --location parameter.
Enable Multi Cluster Ingress and select gke-us as the config cluster:
gcloud container fleet ingress enable \
--config-membership=gke-us \
--location=us-central1 \
--project=PROJECT_ID
The config cluster takes up to 15 minutes to register. Successful output is similar to the following:
Waiting for Feature to be created...done.
Waiting for controller to start...done.
Unsuccessful output is similar to the following:
Waiting for controller to start...failed.
ERROR: (gcloud.container.fleet.ingress.enable) Controller did not start in 2 minutes. Please use the `describe` command to check Feature state for debugging information.
If a failure occurred in the previous step, then check the feature state:
gcloud container fleet ingress describe \
--project=PROJECT_ID
The output is similar to the following:
createTime: '2021-02-04T14:10:25.102919191Z'
membershipStates:
  projects/PROJECT_ID/locations/global/memberships/CLUSTER_NAME:
    state:
      code: ERROR
      description: '...is not a VPC-native GKE Cluster.'
      updateTime: '2021-08-10T13:58:50.298191306Z'
  projects/PROJECT_ID/locations/global/memberships/CLUSTER_NAME:
    state:
      code: OK
      updateTime: '2021-08-10T13:58:08.499505813Z'
To learn more about troubleshooting errors with Multi Cluster Ingress, see Troubleshooting and operations.
Impact on live clusters
You can safely enable Multi Cluster Ingress using gcloud container fleet ingress enable on a live cluster, as it does not result in any downtime or impact to traffic on the cluster.
Shared VPC deployment
You can deploy a MultiClusterIngress resource for clusters in a Shared VPC network, but all the participating backend GKE clusters must be in the same project. Having GKE clusters in different projects using the same Cloud Load Balancing VIP is not supported.
In non-Shared VPC networks, the Multi Cluster Ingress controller manages firewall rules to allow health checks to pass from the load balancer to container workloads.
In a Shared VPC network, a host project administrator must manually create the firewall rules for load balancer traffic on behalf of the Multi Cluster Ingress controller.
The following command shows the firewall rule that you must create if your clusters are on a Shared VPC network. The source ranges are the ranges that the load balancer uses to send traffic to backends. This rule must exist for the operational lifetime of a MultiClusterIngress resource.
If your clusters are on a Shared VPC network, create the firewall rule:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
--project=HOST_PROJECT \
--network=SHARED_VPC \
--direction=INGRESS \
--allow=tcp:0-65535 \
--source-ranges=130.211.0.0/22,35.191.0.0/16
Replace the following:
- FIREWALL_RULE_NAME: the name of the new firewall rule that you choose.
- HOST_PROJECT: the ID of the Shared VPC host project.
- SHARED_VPC: the name of the Shared VPC network.
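To confirm that the rule exists with the expected source ranges and allowed ports, you can describe it:
gcloud compute firewall-rules describe FIREWALL_RULE_NAME \
    --project=HOST_PROJECT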
Known issues
This section describes known issues for Multi Cluster Ingress.
InvalidValueError for field config_membership
A known issue prevents the Google Cloud CLI from interacting with Multi Cluster Ingress. This issue was introduced in version 346.0.0 and was fixed in version 348.0.0. We don't recommend using gcloud CLI versions 346.0.0 and 347.0.0 with Multi Cluster Ingress.
Invalid value for field 'resource'
Google Cloud Armor cannot communicate with Multi Cluster Ingress config clusters running on certain GKE versions earlier than 1.21.
When you configure a Google Cloud Armor security policy, the following message appears:
Invalid value for field 'resource': '{"securityPolicy": "global/securityPolicies/"}': The given policy does not exist
To avoid this issue, upgrade your config cluster to version 1.21 or later, or use the following command to update the BackendConfig CustomResourceDefinition:
kubectl patch crd backendconfigs.cloud.google.com --type='json' -p='[{"op": "replace", "path": "/spec/versions/1/schema/openAPIV3Schema/properties/spec/properties/securityPolicy", "value":{"properties": {"name": {"type": "string"}}, "required": ["name" ],"type": "object"}}]'
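To check that the patch was applied, you can read back the securityPolicy schema from the CRD. This assumes the patched schema sits at index 1 of spec.versions, matching the path used in the patch command above:
kubectl get crd backendconfigs.cloud.google.com \
    -o jsonpath='{.spec.versions[1].schema.openAPIV3Schema.properties.spec.properties.securityPolicy}'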
What's next