Source: http://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

Set up an external Application Load Balancer with Ingress | GKE networking

Deploying a web application

The following manifest describes a Deployment that runs the sample web application container image on an HTTP server on port 8080:
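The manifest itself is not reproduced on this page. As a sketch, `web-deployment.yaml` in this tutorial typically looks like the following (the image path is Google's public hello-app sample; treat it as an assumption and substitute your own image if needed):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      # Sample image assumed from Google's hello-app samples
      - name: web
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
          protocol: TCP
```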

Apply the resource to the cluster:

kubectl apply -f web-deployment.yaml
Exposing your Deployment inside your cluster

The following manifest describes a Service that makes the web deployment accessible within your container cluster:
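The Service manifest is not shown here either. A minimal sketch of `web-service.yaml`, assuming the Deployment labels above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # NodePort is required so the Ingress load balancer can reach the nodes
  type: NodePort
  selector:
    run: web
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
```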

  1. Apply the resource to the cluster:

    kubectl apply -f web-service.yaml
    

    When you create a Service of type NodePort with this command, GKE makes your Service available on a randomly selected high port number (e.g. 32640) on all the nodes in your cluster.

  2. Verify that the Service is created and a node port is allocated:

    kubectl get service web
    
    Output:
    NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    web       NodePort   10.35.245.219   <none>        8080:32640/TCP   5m
    

    In the sample output, the node port for the web Service is 32640. Also, note that there is no external IP allocated for this Service. Since the GKE nodes are not externally accessible by default, creating this Service does not make your application accessible from the internet.

To make your HTTP(S) web server application publicly accessible, you must create an Ingress resource.

Creating an Ingress resource

Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.

On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application.

The following manifest describes an Ingress resource that directs traffic to your web Service:
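A minimal sketch of `basic-ingress.yaml`, which sends all traffic to the `web` Service as its default backend:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  # All requests are routed to the web NodePort Service
  defaultBackend:
    service:
      name: web
      port:
        number: 8080
```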

Apply the resource to the cluster:

kubectl apply -f basic-ingress.yaml

After you deploy this manifest, Kubernetes creates an Ingress resource on your cluster. The GKE Ingress controller creates and configures an HTTP(S) Load Balancer according to the information in the Ingress, routing all external HTTP traffic (on port 80) to the web NodePort Service you exposed.

Note: To use Ingress, you must have the external Application Load Balancer add-on enabled. GKE clusters have external Application Load Balancers enabled by default; do not disable this add-on.

Visiting your application

Find out the external IP address of the load balancer serving your application by running:

kubectl get ingress basic-ingress
Output:
NAME            HOSTS     ADDRESS         PORTS     AGE
basic-ingress   *         203.0.113.12    80        2m
Note: It might take a few minutes for GKE to allocate an external IP address and set up forwarding rules before the load balancer is ready to serve your application. You might get errors such as HTTP 404 or HTTP 500 until the load balancer configuration is propagated across the globe.

Open the external IP address of your application in a browser and see a plain text HTTP response like the following:

Hello, world!
Version: 1.0.0
Hostname: web-6498765b79-fq5q5

You can visit Load Balancing on the Google Cloud console and inspect the networking resources created by the GKE Ingress controller.

(Optional) Configuring a static IP address

When you expose a web server on a domain name, you need the external IP address of an application to be a static IP that does not change.

By default, GKE allocates ephemeral external IP addresses for HTTP applications exposed through an Ingress. Ephemeral addresses are subject to change. If you are planning to run your application for a long time, you must use a static external IP address.

Note that after you configure a static IP for the Ingress resource, deleting the Ingress does not delete the static IP address associated with it. Make sure to clean up the static IP addresses you configured when you no longer need them.

Warning: The following instructions create a static IP address and then reference the IP address in an Ingress manifest. If you modify an existing Ingress to use a static IP address instead of an ephemeral IP address, GKE might change the IP address of the load balancer when GKE re-creates the forwarding rule of the load balancer.

To configure a static IP address, complete the following steps:

  1. Reserve a static external IP address named web-static-ip:

    Using gcloud:

    gcloud compute addresses create web-static-ip --global
    
    Using Config Connector:

    Note: This step requires Config Connector. Follow the installation instructions to install Config Connector on your cluster.

    To deploy this manifest, download it to your machine as compute-address.yaml, and run:
    kubectl apply -f compute-address.yaml
  2. The basic-ingress-static.yaml manifest adds an annotation on Ingress to use the static IP resource named web-static-ip:

    View the manifest:

    cat basic-ingress-static.yaml
    
  3. Apply the resource to the cluster:

    kubectl apply -f basic-ingress-static.yaml
    
  4. Check the external IP address:

    kubectl get ingress basic-ingress
    

    Wait until the IP address of your application changes to use the reserved IP address of the web-static-ip resource.

    It might take a few minutes to update the existing Ingress resource, re-configure the load balancer, and propagate the load balancing rules across the globe. After this operation completes, GKE releases the ephemeral IP address previously allocated to your application.
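The annotation mentioned in step 2 is `kubernetes.io/ingress.global-static-ip-name`. As a sketch, `basic-ingress-static.yaml` typically differs from `basic-ingress.yaml` only by that annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    # References the reserved global address by name
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 8080
```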

Note: Unused static external IP addresses are billed according to the regular IP address billing.

(Optional) Serving multiple applications on a load balancer

You can run multiple services on a single load balancer and public IP by configuring routing rules on the Ingress. By hosting multiple services on the same Ingress, you can avoid creating additional load balancers (which are billable resources) for every Service that you expose to the internet.

The following manifest describes a Deployment with version 2.0 of the same web application:
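A sketch of `web-deployment-v2.yaml`, mirroring the first Deployment but with distinct labels and the 2.0 sample image (image path assumed, as before):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
spec:
  selector:
    matchLabels:
      run: web2
  template:
    metadata:
      labels:
        run: web2
    spec:
      containers:
      # Version 2.0 of the assumed hello-app sample image
      - name: web2
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
        ports:
        - containerPort: 8080
          protocol: TCP
```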

Apply the resource to the cluster:

kubectl apply -f web-deployment-v2.yaml

The following manifest describes a NodePort Service named web2 that exposes the web2 Deployment within the cluster:
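A minimal sketch of `web-service-v2.yaml`, assuming the `run: web2` labels used above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  type: NodePort
  selector:
    run: web2
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
```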

Apply the resource to the cluster:

kubectl apply -f web-service-v2.yaml

The following manifest describes an Ingress resource that routes requests by URL path, sending traffic under /v2/ to the web2 Service and all other traffic to the web Service:
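A sketch of `fanout-ingress.yaml` with path-based rules; GKE Ingress uses `ImplementationSpecific` path matching with `/*`-style wildcards:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      # Requests under /v2/ go to the web2 Service
      - path: /v2/*
        pathType: ImplementationSpecific
        backend:
          service:
            name: web2
            port:
              number: 8080
      # Everything else goes to the original web Service
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: web
            port:
              number: 8080
```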

Apply the resource to the cluster:

kubectl create -f fanout-ingress.yaml

After the Ingress is deployed, run kubectl get ingress fanout-ingress to find out the public IP address of the cluster.

Note: It might take a few minutes for GKE to allocate an external IP address and prepare the load balancer. You might get errors like HTTP 404 and HTTP 500 until the load balancer is ready to serve the traffic.

Then visit the IP address to see that both applications are reachable on the same load balancer.

The only supported wildcard character for the path field of an Ingress is the * character. The * character must follow a forward slash (/) and must be the last character in the pattern. For example, /*, /foo/*, and /foo/bar/* are valid patterns, but *, /foo/bar*, and /foo/*/bar are not.

A more specific pattern takes precedence over a less specific pattern. If you have both /foo/* and /foo/bar/*, then /foo/bar/bat matches /foo/bar/*.

For more information about path limitations and pattern matching, see the URL Maps documentation.

(Optional) Monitoring the availability and latency of your service

Google Cloud Uptime checks perform blackbox monitoring of applications from the viewpoint of the user, determining latency and availability from multiple external IPs to the IP address of the load balancer. In comparison, Google Cloud health checks perform an internal check against the Pod IPs, determining availability at the instance level. The checks are complementary and provide a holistic picture of application health.

You can create an uptime check by using the Google Cloud console, the Cloud Monitoring API, or by using the Cloud Monitoring client libraries. For information, see Managing uptime checks. If you want to create an uptime check by using the Google Cloud console, do the following:

  1. Go to the Services & Ingress page in the Google Cloud console.


  2. Click the name of the Service you want to create an uptime check for.

  3. Click Create Uptime Check.

  4. In the Create Uptime Check pane, enter a title for the uptime check and then click Next to advance to the Target settings.

    The Target fields of the uptime check are automatically filled in using the information from the Service load balancer.

    For complete documentation on all the fields in an uptime check, see Creating an uptime check.

  5. Click Next to advance to the Response Validation settings.

  6. Click Next to advance to the Alert and Notification section.

    To monitor an uptime check, you can create an alerting policy or view the uptime check dashboard. An alerting policy can notify you by email or through a different channel if your uptime check fails. For general information about alerting policies, see Introduction to alerting.

Note: You can create an alerting policy for an uptime check as part of the process of creating the uptime check. Creating an alerting policy is optional but recommended. For information on how to create an alerting policy as an independent action, see Alerting on uptime checks.
  7. Click Create.

By default, Ingress performs a periodic health check by making a GET request on the / path to determine the health of the application, and expects an HTTP 200 response. If you want to check a different path or expect a different response code, you can use a custom health check path.
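One common way to customize the health check on GKE is a BackendConfig resource attached to the Service; the resource name and request path below are illustrative:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  # Hypothetical name; reference it from the Service annotation below
  name: web-backendconfig
spec:
  healthCheck:
    type: HTTP
    # Check /healthz instead of the default / path
    requestPath: /healthz
```

The Service then references it with an annotation, for example `cloud.google.com/backend-config: '{"default": "web-backendconfig"}'` in its `metadata.annotations`.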

Ingress supports more advanced use cases, such as content-based routing, serving HTTPS traffic, and static IP addresses.

Note: Always modify the properties of the load balancer through the Ingress object. Changes made directly to the load balancing resources might be lost or overridden by the GKE Ingress controller.

When an Ingress is deleted, the GKE Ingress controller cleans up the associated resources (except reserved static IP addresses) automatically.

