Create a backend service-based external load balancer | GKE networking

This page shows you how to deploy an external LoadBalancer Service that creates a backend service-based external passthrough Network Load Balancer. Before reading this page, ensure that you're familiar with GKE LoadBalancer Services and with external passthrough Network Load Balancers.

To learn more about external passthrough Network Load Balancers in general, see Backend service-based external passthrough Network Load Balancer.

Before you begin

Before you start, make sure that you have performed the following tasks:

  * Enable the Google Kubernetes Engine API.
  * If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Requirements

Choose a cluster

You can create a new cluster or choose an existing cluster that meets the requirements.

Create a new cluster

Autopilot

To create a new Autopilot cluster:

gcloud container clusters create-auto CLUSTER_NAME \
    --release-channel=RELEASE_CHANNEL \
    --cluster-version=VERSION \
    --location=COMPUTE_LOCATION

Replace the following:

  * CLUSTER_NAME: the name of your new cluster.
  * RELEASE_CHANNEL: the release channel for the cluster.
  * VERSION: the GKE version for the cluster.
  * COMPUTE_LOCATION: the Compute Engine region for the new cluster.

To disable automatic VPC firewall rule creation for LoadBalancer Services, include the --disable-l4-lb-firewall-reconciliation flag. For more information, see User-managed firewall rules for GKE LoadBalancer Services.
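
For example (a sketch that reuses the placeholders above), the flag is appended to the same creation command:

gcloud container clusters create-auto CLUSTER_NAME \
    --release-channel=RELEASE_CHANNEL \
    --cluster-version=VERSION \
    --location=COMPUTE_LOCATION \
    --disable-l4-lb-firewall-reconciliation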

Standard

To create a new Standard cluster:

gcloud container clusters create CLUSTER_NAME \
    --release-channel=RELEASE_CHANNEL \
    --cluster-version=VERSION \
    --location=COMPUTE_LOCATION

Replace the following:

  * CLUSTER_NAME: the name of your new cluster.
  * RELEASE_CHANNEL: the release channel for the cluster.
  * VERSION: the GKE version for the cluster.
  * COMPUTE_LOCATION: the Compute Engine location (region or zone) for the new cluster.

To disable automatic VPC firewall rule creation for LoadBalancer Services, include the --disable-l4-lb-firewall-reconciliation flag. For more information, see User-managed firewall rules for GKE LoadBalancer Services.

Upgrade an existing cluster

Use the gcloud CLI to upgrade the control plane of an existing cluster:

gcloud container clusters upgrade CLUSTER_NAME \
    --cluster-version=VERSION \
    --master \
    --location=COMPUTE_LOCATION

Replace the following:

  * CLUSTER_NAME: the name of the existing cluster.
  * VERSION: the GKE version to upgrade the cluster to.
  * COMPUTE_LOCATION: the Compute Engine location of the cluster.

To disable automatic VPC firewall rule creation for LoadBalancer Services, include the --disable-l4-lb-firewall-reconciliation flag. For more information, see User-managed firewall rules for GKE LoadBalancer Services.

Deploy a sample workload

Deploy the following sample workload, which provides the serving Pods for the external LoadBalancer Service.

  1. Save the following sample Deployment as store-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: store
    spec:
      replicas: 20
      selector:
        matchLabels:
          app: store
      template:
        metadata:
          labels:
            app: store
        spec:
          containers:
          - image: gcr.io/google_containers/echoserver:1.10
            imagePullPolicy: Always
            name: echoserver
            ports:
              - name: http
                containerPort: 8080
            readinessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
    
  2. Apply the manifest to the cluster:

    kubectl apply -f store-deployment.yaml
    
  3. Verify that there are 20 serving Pods for the Deployment:

    kubectl get pods
    

    The output is similar to the following:

    NAME                     READY   STATUS    RESTARTS   AGE
    store-cdb9bb4d6-s25vw      1/1     Running   0          10s
    store-cdb9bb4d6-vck6s      1/1     Running   0          10s
    ....
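
    As an additional check (a general kubectl sketch using the Deployment name from the manifest above), you can wait for the rollout to complete:

    kubectl rollout status deployment/store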
    
Create the external LoadBalancer Service
  1. Expose the sample workload by creating an external LoadBalancer Service.

    1. Save the following Service manifest as store-v1-lb-svc.yaml:

      apiVersion: v1
      kind: Service
      metadata:
        name: store-v1-lb-svc
        annotations:
          cloud.google.com/l4-rbs: "enabled"
      spec:
        type: LoadBalancer
        selector:
          app: store
        ports:
        - name: tcp-port
          protocol: TCP
          port: 8080
          targetPort: 8080
      
    2. Apply the manifest to the cluster:

      kubectl apply -f store-v1-lb-svc.yaml
      

    Note the following points about this sample manifest:

    * The cloud.google.com/l4-rbs: "enabled" annotation instructs GKE to create a backend service-based external passthrough Network Load Balancer for the Service.
    * The Service listens on TCP port 8080 and forwards traffic to port 8080 of the Pods selected by app: store.
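
    To quickly confirm that the annotation is present on the created Service (a general kubectl sketch), print its annotations:

    kubectl get svc store-v1-lb-svc -o jsonpath='{.metadata.annotations}'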

Enable weighted load balancing

To distribute new connections proportionally to nodes based on how many serving, ready, and non-terminating Pods are present on each node, enable weighted load balancing by adding the networking.gke.io/weighted-load-balancing: "pods-per-node" annotation to the Service manifest.

  1. Add the networking.gke.io/weighted-load-balancing: "pods-per-node" annotation to the store-v1-lb-svc.yaml Service manifest, and ensure that you also set externalTrafficPolicy: Local, so that the manifest looks like the following:

    apiVersion: v1
    kind: Service
    metadata:
      name: store-v1-lb-svc
      annotations:
        cloud.google.com/l4-rbs: "enabled"
        networking.gke.io/weighted-load-balancing: "pods-per-node"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      selector:
        app: store
      ports:
      - name: tcp-port
        protocol: TCP
        port: 8080
        targetPort: 8080
    
  2. Apply the manifest to the cluster:

    kubectl apply -f store-v1-lb-svc.yaml
    

Note the following about this example on weighted load balancing:

  * The networking.gke.io/weighted-load-balancing: "pods-per-node" annotation enables weighted load balancing for the Service.
  * externalTrafficPolicy must be set to Local; with externalTrafficPolicy: Cluster, weighted load balancing by pods-per-node has no effect.

You can also enable weighted load balancing on an existing external LoadBalancer Service by using kubectl edit svc SERVICE_NAME. The kubectl edit command opens the existing load balancer's Service manifest in your configured text editor, where you can modify the manifest and save your changes. When you edit an existing external LoadBalancer Service, note that enabling weighted load balancing clears the load balancer's connection tracking table, so existing connections might be routed to different nodes and reset.
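
As a non-interactive alternative to kubectl edit (a general kubectl sketch, not a command from this page), a merge patch can set both the annotation and the traffic policy on the sample Service:

kubectl patch svc store-v1-lb-svc --type merge -p \
    '{"metadata":{"annotations":{"networking.gke.io/weighted-load-balancing":"pods-per-node"}},"spec":{"externalTrafficPolicy":"Local"}}'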

Disable weighted load balancing

To distribute new connections to nodes regardless of how many serving Pods are present on each node, disable weighted load balancing by removing the networking.gke.io/weighted-load-balancing: "pods-per-node" annotation from the Service manifest.
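
If you prefer not to edit and reapply the manifest, you can also remove the annotation in place (a general kubectl sketch; the trailing hyphen removes the annotation):

kubectl annotate svc store-v1-lb-svc networking.gke.io/weighted-load-balancing-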

Note: Enabling or disabling weighted load balancing clears the load balancer's connection tracking table. Existing connections might be routed to different nodes, resulting in those nodes sending TCP reset packets. As a best practice, enable or disable weighted load balancing when a brief period of connection interruptions can be tolerated.

Verify the external LoadBalancer Service and its components
  1. Verify that your Service is running:

    kubectl get svc store-v1-lb-svc
    

    The output is similar to the following:

    NAME               TYPE           CLUSTER-IP        EXTERNAL-IP     PORT(S)          AGE
    store-v1-lb-svc   LoadBalancer   10.44.196.160     35.193.28.231   8080:32466/TCP   11m
    

    GKE assigned an external IP address (shown in the EXTERNAL-IP column) to the external passthrough Network Load Balancer.

    Note: It might take a few minutes for GKE to allocate an external IP address before the load balancer is ready to serve your application.
  2. Test connecting to the load balancer:

    curl EXTERNAL_IP:PORT
    

    Replace the following:

    * EXTERNAL_IP: the external IP address of the load balancer from the previous step.
    * PORT: the Service port, which is 8080 in this example.

    The output is similar to the following:

    Hostname: store-v1-lb-svc-cdb9bb4d6-hflxd
    
    Pod Information:
      -no pod information available-
    
    Server values:
      server_version=nginx: 1.13.3 - lua: 10008
    
    Request Information:
      client_address=10.128.0.50
      method=GET
      real path=/
      query=
      request_version=1.1
      request_scheme=http
      request_uri=EXTERNAL_IP
    
    Request Headers:
      accept=*/*
      host=EXTERNAL_IP
      user-agent=curl/7.81.0
    
    Request Body:
      -no body in request-
    
    
  3. Check your LoadBalancer Service and its set of annotations describing its Google Cloud resources:

    kubectl describe svc store-v1-lb-svc
    

    The output is similar to the following:

    Name:                     my-service-external
    Namespace:                default
    Labels:                   <none>
    Annotations:              cloud.google.com/l4-rbs: enabled
                              cloud.google.com/neg-status: {"network_endpoint_groups":{"0":"k8s2-qvveq1d8-default-my-service-ext-5s55db85"},"zones":["us-central1-c"]} #This annotation appears in the output only if the service uses NEG backends.
                              networking.gke.io/weighted-load-balancing: pods-per-node #This annotation appears in the output only if weighted load balancing is enabled.
                              service.kubernetes.io/backend-service: k8s2-qvveq1d8-default-my-service-ext-5s55db85
                              service.kubernetes.io/firewall-rule: k8s2-qvveq1d8-default-my-service-ext-5s55db85
                              service.kubernetes.io/firewall-rule-for-hc: k8s2-qvveq1d8-default-my-service-ext-5s55db85-fw
                              service.kubernetes.io/healthcheck: k8s2-qvveq1d8-default-my-service-ext-5s55db85
                              service.kubernetes.io/tcp-forwarding-rule: a808124abf8ce406ca51ab3d4e7d0b7d
    Selector:                 app=my-app
    Type:                     LoadBalancer
    IP Family Policy:         SingleStack
    IP Families:              IPv4
    IP:                       10.18.102.23
    IPs:                      10.18.102.23
    LoadBalancer Ingress:     35.184.160.229
    Port:                     tcp-port  8080/TCP
    TargetPort:               8080/TCP
    NodePort:                 tcp-port  31864/TCP
    Endpoints:                10.20.1.28:8080,10.20.1.29:8080
    Session Affinity:         None
    External Traffic Policy:  Local
    HealthCheck NodePort:     30394
    
    Events:
      Type    Reason                Age                    From                     Message
      ----    ------                ----                   ----                     -------
      Normal  ADD                   4m55s                  loadbalancer-controller  default/my-service-ext
    

    There are several fields that indicate that a backend service-based external passthrough Network Load Balancer and its Google Cloud resources were successfully created:

    * The cloud.google.com/l4-rbs: enabled annotation confirms that the Service uses a backend service-based external passthrough Network Load Balancer.
    * The service.kubernetes.io/backend-service, service.kubernetes.io/healthcheck, service.kubernetes.io/tcp-forwarding-rule, service.kubernetes.io/firewall-rule, and service.kubernetes.io/firewall-rule-for-hc annotations name the Google Cloud resources that GKE created for the load balancer.
    * The LoadBalancer Ingress field shows the external IP address of the load balancer.

  4. Verify that load balancer resources and firewall rules have been created for the external LoadBalancer Service:
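
    For example, you can list the Compute Engine resources whose names appear in the Service annotations shown earlier (a general gcloud sketch; CLUSTER_REGION is your cluster's Compute Engine region):

    # Backend service and health check created for the load balancer.
    gcloud compute backend-services list --regions=CLUSTER_REGION
    gcloud compute health-checks list

    # Forwarding rule that holds the load balancer's external IP address.
    gcloud compute forwarding-rules list --regions=CLUSTER_REGION

    # VPC firewall rules that allow client traffic and health checks to reach the nodes.
    gcloud compute firewall-rules list --filter="name~k8s2"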

Delete the external LoadBalancer Service

To delete the sample store-v1-lb-svc external LoadBalancer Service, use the following command:

kubectl delete service store-v1-lb-svc

GKE automatically removes all load balancer resources that it created for the external LoadBalancer Service.
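
To confirm the cleanup (a general sketch), check that the Service no longer exists and that the backend service named in the service.kubernetes.io/backend-service annotation no longer appears in the region:

kubectl get svc store-v1-lb-svc

gcloud compute backend-services list --regions=CLUSTER_REGION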

Migrate to GCE_VM_IP NEG backends

External LoadBalancer Services with the cloud.google.com/l4-rbs: "enabled" annotation create backend service-based external passthrough Network Load Balancers that use either GCE_VM_IP network endpoint group (NEG) backends or instance group backends, depending on the GKE version of the cluster.

For more information, see Node grouping in About LoadBalancer Services.
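
To check which backend type an existing Service uses (a general kubectl sketch), print its annotations; as shown in the describe output earlier, the cloud.google.com/neg-status annotation appears only when the Service uses NEG backends:

kubectl get svc SERVICE_NAME -n SERVICE_NAMESPACE -o jsonpath='{.metadata.annotations}'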

You can create a new external LoadBalancer Service powered by a backend service-based external passthrough Network Load Balancer that uses GCE_VM_IP NEG backends if your existing Service uses a load balancer that doesn't use GCE_VM_IP NEG backends, such as a target pool-based external passthrough Network Load Balancer or a backend service-based load balancer with instance group backends.

Note: Switching to a backend service-based external passthrough Network Load Balancer using GCE_VM_IP NEG backends requires a few minutes of downtime to replace the Service.

To switch to a backend service-based external passthrough Network Load Balancer using GCE_VM_IP NEG backends:

  1. If you haven't already, upgrade your cluster to GKE version 1.32.2-gke.1652000 or later.

  2. Identify the external LoadBalancer Service that you want to switch to a backend service-based external passthrough Network Load Balancer using GCE_VM_IP NEG backends. Describe the Service using the following command:

    kubectl describe svc SERVICE_NAME -n SERVICE_NAMESPACE
    

    Replace the following:

    * SERVICE_NAME: the name of the existing LoadBalancer Service.
    * SERVICE_NAMESPACE: the namespace of the Service.

    In the command output, note the external IPv4 address used by the existing load balancer, shown in the LoadBalancer Ingress field.

  3. Retrieve the Service manifest for the existing LoadBalancer Service and save it to a local file, for example with kubectl get svc SERVICE_NAME -n SERVICE_NAMESPACE -o yaml.

    Note the local path containing the Service manifest file. The rest of this procedure refers to the path as MANIFEST_FILE_PATH.

  4. Configure a static external IPv4 address resource to hold the external IPv4 address used by the existing load balancer:

    gcloud compute addresses create IP_ADDRESS_NAME --region=CLUSTER_REGION --addresses LB_EXTERNAL_IP
    

    Replace the following:

    * IP_ADDRESS_NAME: a name for the new static external IP address resource.
    * CLUSTER_REGION: the Compute Engine region of your cluster.
    * LB_EXTERNAL_IP: the external IPv4 address currently used by the load balancer, which you noted earlier.

  5. Verify that the static external IPv4 address resource was created:

    gcloud compute addresses describe IP_ADDRESS_NAME --region=CLUSTER_REGION
    

    Replace the variables as indicated in the previous step.

  6. Delete the existing Service:

    kubectl delete svc SERVICE_NAME -n SERVICE_NAMESPACE
    
  7. Add the following annotation to the MANIFEST_FILE_PATH Service manifest file:

    networking.gke.io/load-balancer-ip-addresses: IP_ADDRESS_NAME
    

    For more information about this annotation, see Static IP addresses in LoadBalancer Service parameters.

  8. Apply the updated Service manifest to the cluster:

    kubectl apply -f MANIFEST_FILE_PATH
    
  9. (Optional) Release the static IPv4 address resource.

    gcloud compute addresses delete IP_ADDRESS_NAME --region=CLUSTER_REGION
    
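
For reference, after step 7 the metadata section of the saved manifest looks similar to the following (a minimal sketch showing only the relevant fields; the cloud.google.com/l4-rbs annotation is assumed from the Service example earlier on this page):

apiVersion: v1
kind: Service
metadata:
  name: SERVICE_NAME
  namespace: SERVICE_NAMESPACE
  annotations:
    cloud.google.com/l4-rbs: "enabled"
    networking.gke.io/load-balancer-ip-addresses: IP_ADDRESS_NAME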
Troubleshoot

This section describes an issue that you might encounter when you configure weighted load balancing.

Wrong external traffic policy for weighted load balancing

If you don't set externalTrafficPolicy: Local when you enable weighted load balancing, you might get a warning event when you describe the Service using the following command:

kubectl describe svc store-v1-lb-svc

The output is similar to the following:

Events:
  Type     Reason                   Age      From                     Message
  ----     ------                   ----     ----                     -------
  Warning  UnsupportedConfiguration 4m55s    loadbalancer-controller  Weighted load balancing by pods-per-node has no effect with External Traffic Policy: Cluster.

To effectively enable weighted load balancing, you must set externalTrafficPolicy: Local.
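
For example, one way to correct the policy on the running sample Service (a general kubectl sketch) is a merge patch:

kubectl patch svc store-v1-lb-svc --type merge -p '{"spec":{"externalTrafficPolicy":"Local"}}'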

What's next
