
Configuring multi-cluster Services | GKE networking

This page shows you how to enable and use multi-cluster Services (MCS). To learn more about how MCS works and its benefits, see Multi-cluster Services.

The Google Kubernetes Engine (GKE) MCS feature extends the reach of the Kubernetes Service beyond the cluster boundary and lets you discover and invoke Services across multiple GKE clusters. You can export a subset of existing Services or new Services.

When you export a Service with MCS, that Service is then available across all of the clusters in your fleet.

Note: For MCS to function correctly, GKE deploys Pods to your nodes that have elevated RBAC permissions, such as the ability to patch all deployments, services, and endpoints. These permissions are required for GKE to propagate endpoints between multiple GKE clusters.

This page shows you how to configure MCS within a single project. For a multi-project Shared VPC setup, see Setting up multi-cluster Services with Shared VPC.

Google Cloud resources managed by MCS

MCS manages the following components of Google Cloud:

Requirements

MCS has the following requirements:

Pricing

Multi-cluster Services is included as part of the GKE cluster management fee and incurs no additional usage cost. You must enable the Traffic Director API, but MCS does not incur any Cloud Service Mesh endpoint charges. GKE Enterprise licensing is not required to use MCS.

Before you begin

Before you start, make sure you have performed the following tasks:

  1. Install the Google Cloud SDK.

  2. Enable the Google Kubernetes Engine API:

    Enable Google Kubernetes Engine API

  3. Connect your VPC networks with VPC Network Peering, Cloud Interconnect, or Cloud VPN.

  4. Enable the MCS, fleet (hub), Resource Manager, Cloud Service Mesh, and Cloud DNS APIs:

    gcloud services enable \
        multiclusterservicediscovery.googleapis.com \
        gkehub.googleapis.com \
        cloudresourcemanager.googleapis.com \
        trafficdirector.googleapis.com \
        dns.googleapis.com \
        --project=PROJECT_ID
    

    Replace PROJECT_ID with the ID of the project where you plan to register your clusters to a fleet.

    Note: Non-project owners must be granted the serviceusage.services.enable permission before they can enable APIs.

Enabling MCS in your project

MCS requires that participating GKE clusters be registered to the same fleet. After the MCS feature is enabled for a fleet, any cluster in the fleet can export Services to, and import Services from, the other clusters in the fleet.

While MCS does require registration to a fleet, it does not require you to enable the GKE Enterprise platform.

GKE Enterprise

If the GKE Enterprise API is enabled in your fleet host project as a prerequisite for using other GKE Enterprise components, then any clusters registered to the project's fleet are charged according to GKE Enterprise pricing. This pricing model lets you use all GKE Enterprise features on registered clusters for a single per-vCPU charge. You can confirm if the GKE Enterprise API is enabled using the following command:

gcloud services list --project=PROJECT_ID | grep anthos.googleapis.com

If the output is similar to the following, the full GKE Enterprise platform is enabled and any clusters registered to the fleet will incur GKE Enterprise charges:

anthos.googleapis.com                        Anthos API

If this is not expected, contact your project administrator.

An empty output indicates that GKE Enterprise is not enabled.

Warning: Do not disable the GKE Enterprise API if there are other active GKE Enterprise components in use in your project; otherwise, those components might experience an outage.

Enabling MCS on your GKE cluster
  1. Enable the MCS feature for your project's fleet:

    gcloud container fleet multi-cluster-services enable \
        --project PROJECT_ID
    

    Replace PROJECT_ID with the ID of the project where you plan to register your clusters to a fleet. This is your fleet host project.

    Enabling multi-cluster Services in the fleet host project ensures that the service-PROJECT_NUMBER@gcp-sa-mcsd.iam.gserviceaccount.com service account exists, creating it if necessary.

  2. Register your GKE clusters to the fleet. We strongly recommend that you register your cluster with Workload Identity Federation for GKE enabled. If you do not enable Workload Identity Federation for GKE, you need to register the cluster with a Google Cloud service account for authentication, and complete the additional steps in Authenticating service accounts.

    To register your cluster with Workload Identity Federation for GKE, run the following command:

    gcloud container fleet memberships register MEMBERSHIP_NAME \
       --gke-cluster CLUSTER_LOCATION/CLUSTER_NAME \
       --enable-workload-identity \
       --project PROJECT_ID
    

    Replace the following:

    - MEMBERSHIP_NAME: a name for the cluster's membership in the fleet.
    - CLUSTER_LOCATION: the zone or region of your cluster.
    - CLUSTER_NAME: the name of your cluster.
    - PROJECT_ID: your fleet host project ID.

  3. Grant the required Identity and Access Management (IAM) permissions for MCS Importer:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/gke-mcs/sa/gke-mcs-importer" \
        --role "roles/compute.networkViewer"
    

    Replace the following:

    - PROJECT_ID: your fleet host project ID.
    - PROJECT_NUMBER: the project number of your fleet host project.

  4. Ensure that each cluster in the fleet has a namespace to share Services in. If needed, create a namespace by using the following command:

    kubectl create ns NAMESPACE
    

    Replace NAMESPACE with a name for the namespace.

    Note: Because Services are not imported to clusters where their exporting namespace is not present, create the Service's namespace in any cluster from which the Service will be consumed. If a namespace is missing, you can add it at any time, but it might take up to five minutes for a previously exported Service to be imported into the newly created namespace.
  5. To verify that MCS is enabled, run the following command:

    gcloud container fleet multi-cluster-services describe \
        --project PROJECT_ID
    

    The output is similar to the following:

    createTime: '2021-08-10T13:05:23.512044937Z'
    membershipStates:
      projects/PROJECT_ID/locations/global/memberships/MCS_NAME:
        state:
          code: OK
          description: Firewall successfully updated
          updateTime: '2021-08-10T13:14:45.173444870Z'
    name: projects/PROJECT_NAME/locations/global/features/multiclusterservicediscovery
    resourceState:
      state: ACTIVE
    spec: {}
    

    If the value of state is not ACTIVE, see the troubleshooting section.

Authenticating service accounts

Important: You do not need to do any steps in this section if you registered a cluster to a fleet with Workload Identity Federation for GKE enabled.

If you registered your GKE clusters to a fleet using a service account, you need to take additional steps to authenticate the service account. MCS deploys a component called gke-mcs-importer. This component receives endpoint updates from Cloud Service Mesh, so as part of enabling MCS you need to grant your service account permission to read information from Cloud Service Mesh.

When you use a service account, you can use the Compute Engine default service account or your own node service account:
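For example, if you use the Compute Engine default service account, the grant might look like the following sketch. The member format assumes the default service account's conventional name; substitute your own node service account if you use one.

```shell
# Grant the node service account read access to Cloud Service Mesh data.
# PROJECT_ID and PROJECT_NUMBER are placeholders; the member shown assumes
# the Compute Engine default service account.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role "roles/compute.networkViewer"
```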

Using MCS

The following sections show you how to use MCS. MCS uses the Kubernetes multi-cluster services API.

Registering a Service for export

To register a Service for export to other clusters within your fleet, complete the following steps:

  1. Create a ServiceExport object named export.yaml:

    # export.yaml
    kind: ServiceExport
    apiVersion: net.gke.io/v1
    metadata:
     namespace: NAMESPACE
     name: SERVICE_EXPORT_NAME
    

    Replace the following:

    - NAMESPACE: the namespace of the Service that you want to export.
    - SERVICE_EXPORT_NAME: the name of an existing Service in the cluster to export. The ServiceExport name must match the Service name.

  2. Create the ServiceExport resource by running the following command:

    kubectl apply -f export.yaml
    

The initial export of your Service takes approximately five minutes to sync to clusters registered in your fleet. After a Service is exported, subsequent endpoint syncs happen immediately.

You can export the same Service from multiple clusters to create a single highly available multi-cluster service endpoint with traffic distribution across clusters. Before you export Services that have the same name and namespace, ensure that you want them to be grouped in this manner. We recommend against exporting services in the default and kube-system namespaces because of the high probability of unintended name conflicts and the resulting unintended grouping. If you are exporting more than five services with the same name and namespace, traffic distribution on imported services might be limited to five exported services.
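As a sketch, assuming a ClusterIP Service named hello already exists in a namespace named demo (both names are illustrative), the matching ServiceExport uses the same name and namespace:

```yaml
# Hypothetical Service in the "demo" namespace; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: demo
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
---
# The ServiceExport's name and namespace must match the Service's exactly.
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: demo
  name: hello
```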

Consuming cross-cluster Services

MCS only supports ClusterSetIP and headless Services. Only DNS "A" records are available.

After you create a ServiceExport object, the following domain name resolves to your exported Service from any Pod in any fleet cluster:

 SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local

The domain name includes the following values:

- SERVICE_EXPORT_NAME: the name of the exported Service.
- NAMESPACE: the namespace of the exported Service.

For ClusterSetIP Services, the domain resolves to the ClusterSetIP. You can find this value by locating the ServiceImport object in a cluster in the namespace that the ServiceExport object was created in. The ServiceImport object is automatically created.

For example:

kind: ServiceImport
apiVersion: net.gke.io/v1
metadata:
 namespace: EXPORTED-SERVICE-NAMESPACE
 name: SERVICE-EXPORT-TARGET
status:
 ports:
 - name: https
   port: 443
   protocol: TCP
   targetPort: 443
 ips: CLUSTER_SET_IP

MCS creates an Endpoints object as part of importing a Service into a cluster. By investigating this object you can monitor the progress of a Service import. To find the name of the Endpoints object, look up the value of the annotation net.gke.io/derived-service on a ServiceImport object corresponding to your imported Service. For example:

kind: ServiceImport
apiVersion: net.gke.io/v1
metadata:
 namespace: EXPORTED-SERVICE-NAMESPACE
 name: SERVICE-EXPORT-TARGET
 annotations:
  net.gke.io/derived-service: DERIVED_SERVICE_NAME

Next, look up the Endpoints object to check if MCS has already propagated the endpoints to the importing cluster. The Endpoints object is created in the same namespace as the ServiceImport object, under the name stored in the net.gke.io/derived-service annotation. For example:

kubectl get endpoints DERIVED_SERVICE_NAME -n NAMESPACE

Replace the following:

- DERIVED_SERVICE_NAME: the value of the net.gke.io/derived-service annotation on the ServiceImport object.
- NAMESPACE: the namespace of the ServiceImport object.

You can view the health status of the endpoints by using the Cloud Service Mesh dashboard in the Google Cloud console.

For headless Services, the domain resolves to the list of IP addresses of the endpoints in the exporting clusters. Each backend Pod with a hostname is also independently addressable with a domain name of the following form:

 HOSTNAME.MEMBERSHIP_NAME.LOCATION.SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local

The domain name includes the following values:

- HOSTNAME: the hostname of the backend Pod.
- MEMBERSHIP_NAME: the fleet membership name of the exporting cluster.
- LOCATION: the location of the membership, such as a zone or region.

You can also address a backend Pod with a hostname exported from a cluster registered with a global Membership, using a domain name of the following format:

HOSTNAME.MEMBERSHIP_NAME.SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local
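As an illustration, you could check resolution of such a per-Pod name from inside a fleet cluster. The Pod hostname, membership, location, Service, and namespace below are all hypothetical:

```shell
# Run a temporary Pod and resolve one backend Pod of a headless exported
# Service. "pod-0", "cluster-1", "us-central1", "hello", and "demo" are
# placeholders for your own names.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
    nslookup pod-0.cluster-1.us-central1.hello.demo.svc.clusterset.local
```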

Disabling MCS

To disable MCS, complete the following steps:

  1. For each cluster in your fleet, delete each ServiceExport object that you created:

    kubectl delete serviceexport SERVICE_EXPORT_NAME \
        -n NAMESPACE
    

    Replace the following:

    - SERVICE_EXPORT_NAME: the name of the ServiceExport object.
    - NAMESPACE: the namespace of the ServiceExport object.

  2. Verify that the ServiceExport objects no longer appear in the list; removal can take up to 30 minutes.

  3. Unregister your clusters from the fleet if they don't need to be registered for another purpose.

  4. Disable the multiclusterservicediscovery feature:

    gcloud container fleet multi-cluster-services disable \
        --project PROJECT_ID
    

    Replace PROJECT_ID with the ID of the project where you registered your clusters.

  5. Disable the API for MCS:

    gcloud services disable multiclusterservicediscovery.googleapis.com \
        --project PROJECT_ID
    

    Replace PROJECT_ID with the ID of the project where you registered your clusters.

Limitations

Number of clusters, Pods, and Service ports

The following limits are not enforced; in some cases you can exceed them, depending on the load in your clusters or project and the rate of endpoint churn. You might experience performance issues when these limits are exceeded.

In Kubernetes, a Service is uniquely identified by its name and the namespace it belongs to. This name and the namespace pair is called a namespaced name.

Service types

MCS supports only ClusterSetIP and headless Services. NodePort and LoadBalancer Services are not supported and might lead to unexpected behavior.

Using IPmasq Agent with MCS

MCS operates as expected when you use the default Pod IP range or another non-masqueraded Pod IP range.

If you use a custom Pod IP range or a custom IPmasq agent ConfigMap, MCS traffic can be masqueraded. This prevents MCS from working because the firewall rules only allow traffic from Pod IPs.

To avoid this issue, you should either use the default Pod IP range or specify all Pod IP ranges in the nonMasqueradeCIDRs field of the IPmasq agent ConfigMap. If you use Autopilot or you must use a non-default Pod IP range and cannot specify all Pod IP ranges in the ConfigMap, you should use Egress NAT Policy to configure IP masquerade.
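For example, an ip-masq-agent ConfigMap that lists the Pod ranges might look like the following sketch. The CIDRs are placeholders; list every Pod IP range used by clusters in your fleet.

```yaml
# The ip-masq-agent reads its configuration from this ConfigMap in kube-system.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # List every Pod IP range in the fleet so MCS traffic keeps Pod source IPs.
    # 10.0.0.0/8 and 192.168.0.0/16 are example ranges; substitute your own.
    nonMasqueradeCIDRs:
      - 10.0.0.0/8
      - 192.168.0.0/16
    resyncInterval: 60s
```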

Reusing port numbers within an MCS Service

You can't reuse the same port number within one MCS Service even if the protocols are different.

This applies both within one Kubernetes Service and across all Kubernetes Services for one MCS Service.
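For example, a Service like the following would fall under this limitation if exported, because port 53 appears twice even though the protocols differ. The names are hypothetical:

```yaml
# Sketch of an unsupported port layout; "dns" and "demo" are placeholder names.
apiVersion: v1
kind: Service
metadata:
  name: dns
  namespace: demo
spec:
  selector:
    app: dns
  ports:
  # Unsupported for MCS: port 53 is reused, even with different protocols.
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: dns-udp
    port: 53
    protocol: UDP
```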

MCS with clusters in multiple projects

You cannot export a Service if a cluster in a different project in the fleet already exports a Service with the same name and namespace. Clusters in other projects in the fleet can access that Service, but they cannot export the same Service in the same namespace.

Troubleshooting

The following sections provide you with troubleshooting tips for MCS.

Viewing MCS Feature status

Viewing the status of the Feature resource can help you confirm whether MCS was configured successfully. You can view the status with the following command:

gcloud container fleet multi-cluster-services describe

The output is similar to the following:

createTime: '2021-08-10T13:05:23.512044937Z'
membershipStates:
 projects/PROJECT_ID/locations/global/memberships/MCS_NAME:
   state:
     code: OK
     description: Firewall successfully updated
     updateTime: '2021-08-10T13:14:45.173444870Z'
name: projects/PROJECT_NAME/locations/global/features/multiclusterservicediscovery
resourceState:
 state: ACTIVE
spec: {}

Among other information, the output includes the overall Feature state under resourceState and the status of individual memberships under membershipStates.

If you enabled the MCS feature according to the Enabling MCS on your GKE cluster instructions but the value of resourceState.state is not ACTIVE, contact support.

The status of each membership consists of its path and the state field. The state field contains a code and a description, which are helpful for troubleshooting.

Codes in the membership states

A code, represented by the state.code field, indicates the member's general state in relation to MCS. There are four possible values:

Descriptions in the membership states

To see information about a membership's state in MCS, check the state.description field. This field provides information about the project and hub configuration and about the health status of the fleet and memberships. To view information about individual Services and their configuration, consult the Status.Conditions field in the ServiceExport object; see the Examining ServiceExport section.

The state.description field contains the following information:

Examining ServiceExport

To view the status of an individual Service and potential errors, check the Status.Conditions field in the ServiceExport resource for that Service:

kubectl describe serviceexports SERVICE_NAME -n NAMESPACE

The output is similar to the following:

Name:         SERVICE_NAME
Namespace:    NAMESPACE
Labels:       <none>
Annotations:  <none>
API Version:  net.gke.io/v1
Kind:         ServiceExport
Metadata:
  Creation Timestamp:  2024-09-06T15:57:40Z
  Finalizers:
    serviceexport.net.gke.io
  Generation:        2
  Resource Version:  856599
  UID:               8ac44d88-4c08-4b3d-8524-976efc455e4e
Status:
  Conditions:
    Last Transition Time:  2024-09-06T16:01:53Z
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2024-09-06T16:02:48Z
    Status:                True
    Type:                  Exported
Events:                    <none>

When the MCS controller notices a ServiceExport resource, the controller adds the following conditions to the Status.Conditions field:

Each condition contains mandatory Type, Status, and LastTransitionTime fields. As the MCS controller reconciles and validates the Service, the Status field for the corresponding condition changes from False to True.

Errors

If an error occurs with the validation, the controller sets the Status field of the Exported condition to False and adds a Reason field and a Message field with more information about the error. The Reason field can have one of the following values:

The Message field provides additional context for the error.

Common permission issues

Known issues

MCS Services with multiple ports

There is a known issue with multi-cluster Services with multiple (TCP/UDP) ports on GKE Dataplane V2 where some endpoints are not programmed in the dataplane. This issue impacts GKE versions earlier than 1.26.3-gke.400.

As a workaround, when using GKE Dataplane V2, use multiple single-port MCS Services instead of one multi-port MCS Service.
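For example, instead of exporting one Service with both a TCP and a UDP port, you might export two single-port Services, each with its own ServiceExport. The names and ports below are illustrative:

```yaml
# Two single-port Services, each with its own ServiceExport, instead of one
# multi-port Service. "app-tcp", "app-udp", and "demo" are placeholder names.
apiVersion: v1
kind: Service
metadata:
  name: app-tcp
  namespace: demo
spec:
  selector:
    app: app
  ports:
  - port: 8080
    protocol: TCP
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: app-tcp
  namespace: demo
---
apiVersion: v1
kind: Service
metadata:
  name: app-udp
  namespace: demo
spec:
  selector:
    app: app
  ports:
  - port: 5353
    protocol: UDP
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: app-udp
  namespace: demo
```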

Port number reused within one MCS Service

You can't reuse the same port number within an MCS Service even if the protocols are different.

This applies both within one Kubernetes Service and across all Kubernetes Services for one MCS Service.

Multiple fleets in the same Shared VPC

With the current implementation of MCS, if you deploy more than one fleet in the same Shared VPC, metadata is shared between fleets. When a Service is created in one fleet, its metadata is exported to, and imported by, all other fleets in the same Shared VPC and is visible to users there.

Health check uses default port instead of containerPort

When you deploy a Service with a targetPort field referencing a named port in a Deployment, MCS configures the default port for the health check instead of the specified containerPort.

To avoid this issue, use numerical values in the Service field ports.targetPort and the Deployment field readinessProbe.httpGet.port instead of named values.
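As a sketch, the Service and Deployment might use matching numeric ports rather than a named port. The names below are hypothetical:

```yaml
# Service and Deployment with numeric ports; "web" and "demo" are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web
  ports:
  - port: 80
    # Numeric targetPort, not a named port, so MCS health checks use it.
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            # Numeric port here as well, matching the Service's targetPort.
            port: 8080
```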

Invalid readiness probe for a single Service breaks other Services

There is a known potential readinessProbe configuration error, described in Examining ServiceExport. With the current implementation of MCS, this error, if introduced for a single exported Service, can prevent some or all other Services in the fleet from getting synchronized.

It is important to keep the configuration of every Service healthy. If an MCS Service is not getting updated, make sure that no ServiceExport in any cluster or namespace reports ReadinessProbeError as the reason for the Exported status condition being False. See Examining ServiceExport to learn how to check the conditions.

What's next
