Configure Gateway resources using Policies

This page shows you how to configure the load balancer that Google Kubernetes Engine (GKE) creates when you deploy a Gateway in a GKE cluster.

When you deploy a Gateway, the GatewayClass configuration determines which load balancer GKE creates. This managed load balancer is pre-configured with default settings that you can modify using a Policy.

You can customize Gateway resources to fit your infrastructure or application requirements by attaching Policies to Gateways, Services, or ServiceImports. After you apply or modify a Policy, you don't need to delete or recreate your Gateway, Route, or Service resources: the Gateway controller processes the Policy and reconfigures the underlying load balancer resource according to the new Policy.

Before you begin

Before you start, make sure that you have performed the standard GKE setup tasks and that you meet the GKE Gateway controller requirements.

Restrictions and Limitations

In addition to the GKE Gateway controller restrictions and limitations, the following limitation applies specifically to Policies attached to Gateway resources: attaching more than one GCPBackendPolicy to the same Service or ServiceImport is not supported (see the Troubleshooting section).

Configure global access for your regional internal Gateway

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

To enable global access with your internal Gateway, attach a policy to the Gateway resource.

The following GCPGatewayPolicy manifest enables global access for a regional internal Gateway:

apiVersion: networking.gke.io/v1
kind: GCPGatewayPolicy
metadata:
  name: my-gateway-policy
  namespace: default
spec:
  default:
    allowGlobalAccess: true
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-gateway
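
To apply the policy and confirm that the Gateway controller accepted it, you can run commands like the following (a sketch; it assumes the manifest is saved as gateway-policy.yaml):

kubectl apply -f gateway-policy.yaml
kubectl describe gcpgatewaypolicy my-gateway-policy -n default

The describe output should include status conditions that indicate whether the policy was attached, similar to the GCPBackendPolicy output shown later on this page.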
Note: Upgrading an existing internal Gateway by adding global access recreates the forwarding rule of your regional internal load balancer. This upgrade deletes and then re-creates the Google Cloud load balancer, which results in up to 15 minutes of unavailability. We recommend that you perform this operation during a maintenance window to avoid application downtime.

Configure the region for your multi-cluster Gateway

This section describes a functionality that is available on GKE clusters running version 1.30.3-gke.1225000 or later.

If your fleet has clusters in multiple regions, you might need to deploy regional Gateways in different regions for use cases such as cross-region redundancy, low latency, and data sovereignty. In the config cluster of your multi-cluster Gateway, you can specify the region in which to deploy the regional Gateways. If you don't specify a region, the default is the config cluster's region.

To configure a region for your multi-cluster Gateway, use the region field in the GCPGatewayPolicy. In the following example, the Gateway is configured in the us-central1 region:

apiVersion: networking.gke.io/v1
kind: GCPGatewayPolicy
metadata:
  name: my-gateway-policy
  namespace: default
spec:
  default:
    region: us-central1
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-regional-gateway
Note: If you update the region of an existing regional multi-cluster Gateway, the underlying load balancer is deleted in the old region and recreated in the new region.

Configure SSL Policies to secure client-to-load-balancer traffic

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

To secure your client-to-load-balancer traffic, configure the SSL policy by adding the name of your policy to the GCPGatewayPolicy. By default, the Gateway does not have any SSL Policy defined and attached.

Make sure that you create an SSL policy prior to referencing the policy in your GCPGatewayPolicy.

Note: If you use a regional Gateway, make sure you create a regional SSL policy.
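
For example, you can create the SSL policy with the gcloud CLI before referencing it (a sketch; the profile and minimum TLS version are illustrative choices):

gcloud compute ssl-policies create gke-gateway-ssl-policy \
    --profile MODERN \
    --min-tls-version 1.2

For a regional Gateway, add the --region flag so that the SSL policy is created in the same region as the Gateway.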

The following GCPGatewayPolicy manifest specifies an SSL policy named gke-gateway-ssl-policy:

apiVersion: networking.gke.io/v1
kind: GCPGatewayPolicy
metadata:
  name: my-gateway-policy
  namespace: team1
spec:
  default:
    sslPolicy: gke-gateway-ssl-policy
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: my-gateway
Configure health checks

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

By default, for backend services that use the HTTP or kubernetes.io/h2c application protocols, the HealthCheck is of the HTTP type. For the HTTPS protocol, the default HealthCheck is of the HTTPS type. For the HTTP2 protocol, the default HealthCheck is of the HTTP2 type.

You can use a HealthCheckPolicy to control the load balancer health check settings. Each type of health check (http, https, grpc, and http2) has parameters that you can define. Google Cloud creates a unique health check for each backend service for each GKE Service.

For your load balancer to function normally, you might need to configure a custom HealthCheckPolicy if your health check path isn't the standard "/", if the path requires special headers, or if you need to adjust the health check parameters. For example, if the default request path is "/" but your Service can't be accessed at that path and instead reports its health at "/health", then you must set requestPath accordingly in your HealthCheckPolicy.

Note: To enable the Gateway controller to process health checks and update parameters, the Service that you define in the health check policy must also be referenced in the HTTPRoute manifest.
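
For example, for the "/health" case described earlier, a minimal HealthCheckPolicy might look like the following (a sketch; the Service name and namespace are illustrative):

apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: lb-healthcheck
  namespace: lb-service-namespace
spec:
  default:
    config:
      type: HTTP
      httpHealthCheck:
        requestPath: /health
  targetRef:
    group: ""
    kind: Service
    name: lb-service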

The following HealthCheckPolicy manifest shows all the fields available when configuring a health check policy:

Service
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: lb-healthcheck
  namespace: lb-service-namespace
spec:
  default:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    logConfig:
      enabled: ENABLED
    config:
      type: PROTOCOL
      httpHealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
      httpsHealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
      grpcHealthCheck:
        grpcServiceName: GRPC_SERVICE_NAME
        portSpecification: PORT_SPECIFICATION
        port: PORT
      http2HealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: lb-healthcheck
  namespace: lb-service-namespace
spec:
  default:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    logConfig:
      enabled: ENABLED
    config:
      type: PROTOCOL
      httpHealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
      httpsHealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
      grpcHealthCheck:
        grpcServiceName: GRPC_SERVICE_NAME
        portSpecification: PORT_SPECIFICATION
        port: PORT
      http2HealthCheck:
        portSpecification: PORT_SPECIFICATION
        port: PORT
        host: HOST
        requestPath: REQUEST_PATH
        response: RESPONSE
        proxyHeader: PROXY_HEADER
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service

Replace the placeholder values (such as INTERVAL, TIMEOUT, HEALTHY_THRESHOLD, and UNHEALTHY_THRESHOLD) with values appropriate for your workload.

For more information about HealthCheckPolicy fields, see the healthChecks reference.

Configure Cloud Armor backend security policy to secure your backend services

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Note: Google Cloud has replaced the LbPolicy policy with the GCPBackendPolicy policy. LbPolicy is still supported, but it won't gain additional attributes.

Configure the Cloud Armor backend security policy by adding the name of your security policy to the GCPBackendPolicy to secure your backend services. By default, the Gateway does not have any Cloud Armor backend security policy defined and attached.

Make sure that you create a Cloud Armor backend security policy prior to referencing the policy in your GCPBackendPolicy. If you are enabling a regional Gateway, then you must create a regional Cloud Armor backend security policy.
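
For example, you can create the security policy with the gcloud CLI before referencing it (a sketch; rules are omitted and the description is illustrative):

gcloud compute security-policies create example-security-policy \
    --description "security policy for Gateway backends"

For a regional Gateway, add the --region flag so that the policy is created in the same region as the Gateway.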

The following GCPBackendPolicy manifest specifies a backend security policy named example-security-policy:

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    securityPolicy: example-security-policy
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    securityPolicy: example-security-policy
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service
Configure IAP

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Identity-Aware Proxy (IAP) enforces access control policies on backend services associated with an HTTPRoute. With this enforcement, only authenticated users or applications with the correct Identity and Access Management (IAM) role assigned can access these backend services.

By default, no IAP is applied to your backend services; you must explicitly configure IAP in a GCPBackendPolicy.

To configure IAP with Gateway, do the following:

  1. Enable IAP for GKE. Do not configure the backend (Configuring BackendConfig), because BackendConfig is valid only for Ingress deployments.

  2. Create a secret for your IAP:

    1. In the Google Cloud console, go to the Credentials page:

      Go to Credentials

    2. Click the name of the client and download the OAuth client file.

    3. From the OAuth client file, copy the OAuth secret to the clipboard.

    4. Write the OAuth secret to a file called iap-secret.txt and create a Kubernetes Secret from it by using the following commands:

      echo -n CLIENT_SECRET > iap-secret.txt
      kubectl create secret generic SECRET_NAME --from-file=key=iap-secret.txt
      
  3. To specify an IAP policy that references the secret:

    1. Create the following GCPBackendPolicy manifest, replacing SECRET_NAME and CLIENT_ID with your values, and save it as backend-policy.yaml:

      Service
      apiVersion: networking.gke.io/v1
      kind: GCPBackendPolicy
      metadata:
        name: backend-policy
      spec:
        default:
          iap:
            enabled: true
            oauth2ClientSecret:
              name: SECRET_NAME
            clientID: CLIENT_ID
        targetRef:
          group: ""
          kind: Service
          name: lb-service
      
      Multi-cluster Service
      apiVersion: networking.gke.io/v1
      kind: GCPBackendPolicy
      metadata:
        name: backend-policy
      spec:
        default:
          iap:
            enabled: true
            oauth2ClientSecret:
              name: SECRET_NAME
            clientID: CLIENT_ID
        targetRef:
          group: net.gke.io
          kind: ServiceImport
          name: lb-service
      
    2. Apply the backend-policy.yaml manifest:

      kubectl apply -f backend-policy.yaml
      
  4. Verify your configuration:

    1. Confirm that the policy was applied after creating your GCPBackendPolicy with IAP:

      kubectl get gcpbackendpolicy
      

      The output is similar to the following:

      NAME             AGE
      backend-policy   45m
      
    2. To get more details, use the describe command:

      kubectl describe gcpbackendpolicy
      

      The output is similar to the following:

      Name:         backend-policy
      Namespace:    default
      Labels:       <none>
      Annotations:  <none>
      API Version:  networking.gke.io/v1
      Kind:         GCPBackendPolicy
      Metadata:
        Creation Timestamp:  2023-05-27T06:45:32Z
        Generation:          2
        Resource Version:    19780077
        UID:                 f4f60a3b-4bb2-4e12-8748-d3b310d9c8e5
      Spec:
        Default:
          Iap:
            Client ID:  441323991697-luotsrnpboij65ebfr13hlcpm5a4heke.apps.googleusercontent.com
            Enabled:    true
            oauth2ClientSecret:
              Name:  my-iap-secret
        Target Ref:
          Group:
          Kind:   Service
          Name:   lb-service
      Status:
        Conditions:
          Last Transition Time:  2023-05-27T06:48:25Z
          Message:
          Reason:                Attached
          Status:                True
          Type:                  Attached
      Events:
        Type     Reason  Age                 From                   Message
        ----     ------  ----                ----                   -------
        Normal   ADD     46m                 sc-gateway-controller  default/backend-policy
        Normal   SYNC    44s (x15 over 43m)  sc-gateway-controller  Application of GCPBackendPolicy "default/backend-policy" was a success
      
Configure backend service timeout

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Note: Google Cloud has replaced the LbPolicy policy with the GCPBackendPolicy policy. LbPolicy is still supported, but it won't gain additional attributes.

The following GCPBackendPolicy manifest specifies a backend service timeout period of 40 seconds. The timeoutSec field defaults to 30 seconds.

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    timeoutSec: 40
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    timeoutSec: 40
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service
Configure backend selection using GCPBackendPolicy

The CUSTOM_METRICS balancing mode within the GCPBackendPolicy lets you configure specific custom metrics that influence how backend services of load balancers distribute traffic. This balancing mode enables load balancing based on custom metrics that you define, and that are reported by the application backends.

Note: The CUSTOM_METRICS balancing mode is available only on GKE version 1.33 and later.

For more information, see Traffic management with custom metrics-based load balancing.

The customMetrics[] array, in the backends[] field, contains the name, maxUtilization, and dryRun fields, shown in the following example.

Example

The following example shows a GCPBackendPolicy manifest that configures custom metrics for backend selection and endpoint-level routing.

  1. Save the following manifest as my-backend-policy.yaml:

    kind: GCPBackendPolicy
    apiVersion: networking.gke.io/v1
    metadata:
      name: my-backend-policy
      namespace: team-awesome
    spec:
      targetRef:
        kind: Service
        name: super-service
      default:
        backends:
        - location: "*"
          balancingMode: RATE
          maxRatePerEndpoint: 9000
        - location: us-central1-a  # This block applies only to the given zone or region.
          # maxRatePerEndpoint: 9000 is inherited from the "*" entry.
          balancingMode: CUSTOM_METRICS
          customMetrics:
          - name: gpu-load
            maxUtilization: 100  # Value ranges from 0 to 100 and maps to the floating-point range [0.0, 1.0].
            dryRun: false
    
  2. Apply the manifest to your cluster:

    kubectl apply -f my-backend-policy.yaml
    

The load balancer distributes traffic based on the RATE balancing mode and the custom gpu-load metric.

Configure endpoint level routing with GCPTrafficDistributionPolicy

The GCPTrafficDistributionPolicy configures the load balancing algorithm for endpoint picking within a backend. When you select WEIGHTED_ROUND_ROBIN, the load balancer uses weights derived from reported metrics (including custom metrics) to distribute traffic to individual instances or endpoints.

The localityLbAlgorithm field of the GCPTrafficDistributionPolicy resource (which configures the underlying backend service's localityLbPolicy setting) specifies the load balancing algorithm for selecting individual instances or endpoints within a backend. When you set it to WEIGHTED_ROUND_ROBIN, the policy uses custom metrics to compute weights for load assignment.

The customMetrics[] array within the GCPTrafficDistributionPolicy configuration contains the name and dryRun fields, shown in the following example.

Note: Custom metrics defined within GCPTrafficDistributionPolicy apply only to the WEIGHTED_ROUND_ROBIN algorithm and are not inherited from backends[].customMetrics[] in GCPBackendPolicy.

For more information, see Traffic management with custom metrics-based load balancing.

Example

The following example shows a GCPTrafficDistributionPolicy manifest that configures endpoint-level routing by using both the WEIGHTED_ROUND_ROBIN load balancing algorithm and custom metrics.

  1. Save the following sample manifest as GCPTrafficDistributionPolicy.yaml:

    apiVersion: networking.gke.io/v1
    kind: GCPTrafficDistributionPolicy
    metadata:
      name: echoserver-v2
      namespace: team1
    spec:
      targetRefs:
      - kind: Service
        group: ""
        name: echoserver-v2
      default:
        localityLbAlgorithm: WEIGHTED_ROUND_ROBIN
        customMetrics:
        - name: orca.named_metrics.bescm11
          dryRun: false
        - name: orca.named_metrics.bescm12
          dryRun: true
    
  2. Apply the manifest to your cluster:

    kubectl apply -f GCPTrafficDistributionPolicy.yaml
    

The load balancer distributes traffic to endpoints based on the WEIGHTED_ROUND_ROBIN algorithm, and by using the provided custom metrics.

Configure session affinity

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Note: Google Cloud has replaced the LbPolicy policy with the GCPBackendPolicy policy. LbPolicy is still supported, but it won't gain additional attributes.

You can configure session affinity based on the client IP address or based on a generated cookie, as shown in the following examples.

When you configure session affinity for your Service, the Gateway's localityLbPolicy setting is set to MAGLEV.

When you remove a session affinity configuration from the GCPBackendPolicy, the Gateway reverts the localityLbPolicy setting to the default value, ROUND_ROBIN.

Warning: This value is silently set on the GKE-managed backend service (attached to the load balancer) and is not reflected in the output of a gcloud CLI command, in the UI, or with Terraform.

The following GCPBackendPolicy manifest specifies a session affinity based on the client IP address:

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    sessionAffinity:
      type: CLIENT_IP
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    sessionAffinity:
      type: CLIENT_IP
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service

The following GCPBackendPolicy manifest specifies session affinity based on a generated cookie and sets the cookie's TTL to 50 seconds:

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    sessionAffinity:
      type: GENERATED_COOKIE
      cookieTtlSec: 50
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    sessionAffinity:
      type: GENERATED_COOKIE
      cookieTtlSec: 50
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service

You can set the sessionAffinity.type field to values such as CLIENT_IP and GENERATED_COOKIE, shown in the preceding examples; for the full list of supported values, see the backend service sessionAffinity reference.

Configure connection draining timeout

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Note: Google Cloud has replaced the LbPolicy policy with the GCPBackendPolicy policy. LbPolicy is still supported, but it won't gain additional attributes.

You can configure the connection draining timeout by using GCPBackendPolicy. The connection draining timeout is the time, in seconds, to wait for connections to drain; the duration can be from 0 to 3600 seconds. The default value is 0, which disables connection draining.

The following GCPBackendPolicy manifest specifies a connection draining timeout of 60 seconds:

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    connectionDraining:
      drainingTimeoutSec: 60
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    connectionDraining:
      drainingTimeoutSec: 60
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service

For the specified duration of the timeout, GKE waits for existing requests to the removed backend to complete. The load balancer does not send new requests to the removed backend. After the timeout duration is reached, GKE closes all remaining connections to the backend.

HTTP access logging

This section describes a functionality that is available on GKE clusters running version 1.24 or later.

Note: Google Cloud has replaced the LbPolicy policy with the GCPBackendPolicy policy. LbPolicy is still supported, but it won't gain additional attributes.

By default, access logging is enabled on the load balancer that the Gateway creates.

You can disable access logging on your Gateway by using a GCPBackendPolicy. You can also configure the access logging sampling rate, and enable or disable logging of a list of optional fields, for example "tls.cipher" or "orca_load_report".

The following GCPBackendPolicy manifest modifies access logging's default sample rate and sets it to 50% of the HTTP requests. The manifest also enables logging of two optional fields for a given Service resource:

Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    logging:
      enabled: true
      sampleRate: 500000
      optionalMode: CUSTOM
      optionalFields:
      - tls.cipher
      - orca_load_report.cpu_utilization
  targetRef:
    group: ""
    kind: Service
    name: lb-service
Multi-cluster Service
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    logging:
      enabled: true
      sampleRate: 500000
      optionalMode: CUSTOM
      optionalFields:
      - tls.cipher
      - orca_load_report.cpu_utilization
  targetRef:
    group: net.gke.io
    kind: ServiceImport
    name: lb-service

In this manifest, enabled turns access logging on, sampleRate is expressed in parts per million (500000 corresponds to 50% of HTTP requests), and optionalFields lists the additional fields to log when optionalMode is CUSTOM.
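
To inspect the resulting access logs, you can query Cloud Logging (a sketch; it assumes a global external Application Load Balancer, whose log entries use the http_load_balancer resource type):

gcloud logging read 'resource.type="http_load_balancer"' --limit 10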

Configure traffic-based autoscaling for your single-cluster Gateway

Ensure that your GKE cluster is running version 1.31.1-gke.2008000 or later.

To enable traffic-based autoscaling and capacity-based load balancing in a single-cluster Gateway, you can configure Service capacity. Service capacity is the ability to specify the amount of traffic capacity that a Service can receive before Pods are autoscaled or traffic overflows to other available clusters.

To configure Service capacity, create a Service and an associated GCPBackendPolicy. The GCPBackendPolicy manifest uses the maxRatePerEndpoint field, which defines a maximum requests per second (RPS) value per Pod in a Service. The following GCPBackendPolicy manifest defines a maximum of 10 RPS:

apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: store
spec:
  default:
    maxRatePerEndpoint: 10
  targetRef:
    group: ""
    kind: Service
    name: store
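
The policy targets a Service named store; for completeness, a matching Service might look like the following (a sketch; the selector and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: store
spec:
  selector:
    app: store
  ports:
  - port: 8080
    targetPort: 8080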

To learn more about traffic-based autoscaling, see Autoscaling based on load balancer traffic.

Troubleshooting

Multiple GCPBackendPolicy attached to the same Service

Symptom:

The following status condition might occur when you attach a GCPBackendPolicy to a Service or a ServiceImport:

status:
  conditions:
    - lastTransitionTime: "2023-09-26T20:18:03Z"
      message: conflicted with GCPBackendPolicy "[POLICY_NAME]" of higher precedence, hence not applied
      reason: Conflicted
      status: "False"
      type: Attached

Reason:

This status condition indicates that you are trying to apply a second GCPBackendPolicy to a Service or ServiceImport that already has a GCPBackendPolicy attached.

Attaching multiple GCPBackendPolicy resources to the same Service or ServiceImport is not supported by GKE Gateway. For more details, see Restrictions and Limitations.

Workaround:

Configure a single GCPBackendPolicy that includes all custom configurations and attach it to your Service or ServiceImport.
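
For example, a single GCPBackendPolicy that combines the backend timeout, session affinity, and connection draining settings shown earlier might look like the following (a sketch):

apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: my-backend-policy
  namespace: lb-service-namespace
spec:
  default:
    timeoutSec: 40
    sessionAffinity:
      type: CLIENT_IP
    connectionDraining:
      drainingTimeoutSec: 60
  targetRef:
    group: ""
    kind: Service
    name: lb-service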

Cloud Armor security policy not found

Symptom:

The following error message might appear when you enable Cloud Armor on your regional Gateway:

Invalid value for field 'resource': '{
"securityPolicy":"projects/123456789/regions/us-central1/securityPolicies/<policy_name>"}'.
The given security policy does not exist.

Reason:

The error message indicates that the specified regional Cloud Armor security policy does not exist in your Google Cloud project.

Workaround:

Create a regional Cloud Armor security policy in your project and reference this policy in your GCPBackendPolicy.
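
For example, you can create the regional policy with the gcloud CLI (a sketch; replace the policy name and region with your own values):

gcloud compute security-policies create POLICY_NAME \
    --region us-central1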
