Set up load balancing with NGINX

You can use Nomad's template stanza to configure NGINX so that it can dynamically update its load balancer configuration to scale along with your services.

The main use case for NGINX in this scenario is to distribute incoming HTTP(S) and TCP requests from the Internet to front-end services that can handle these requests. This tutorial shows you one such example using a demo web application.

Prerequisites

To perform the tasks described in this tutorial, you need a Nomad environment with Consul installed. You can use the Terraform configuration that accompanies this tutorial to provision a sandbox environment. This tutorial uses a cluster with one server node and three client nodes.

Note

This tutorial is for demo purposes and only uses a single server node. Please consult the reference architecture for production configuration.

Create a job for a demo web application and name the file webapp.nomad.hcl:

job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3
    network {
      port "http" {
        # -1 maps the container port to the same dynamically assigned host port
        to = -1
      }
    }

    service {
      name = "demo-webapp"
      port = "http"

      check {
        type     = "http"
        path     = "/"
        interval = "2s"
        timeout  = "2s"
      }
    }

    task "server" {
      env {
        # Pass the dynamically assigned port and host IP to the app via
        # Nomad's runtime environment variables
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
        ports = ["http"]
      }
    }
  }
}

This job specification creates three instances of the demo web application for you to target in your NGINX configuration.

Now, deploy the demo web application.

$ nomad run webapp.nomad.hcl
==> Monitoring evaluation "ea1e8528"
    Evaluation triggered by job "demo-webapp"
    Allocation "9b4bac9f" created: node "e4637e03", group "demo"
    Allocation "c386de2d" created: node "983a64df", group "demo"
    Allocation "082653f0" created: node "f5fdf017", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "ea1e8528" finished with status "complete"
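
Once the allocations are running, the demo-webapp service should be registered in Consul. As an optional sanity check, you can confirm the job status and query Consul's DNS interface for the service (assuming Consul's DNS is on its default port 8600 on the node):

$ nomad job status demo-webapp
$ dig @127.0.0.1 -p 8600 demo-webapp.service.consul SRV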

Create a job for NGINX and name it nginx.nomad.hcl. This NGINX instance balances requests across the deployed instances of the web application.

job "nginx" {
  datacenters = ["dc1"]

  group "nginx" {
    count = 1

    network {
      port "http" {
        # Reserve a fixed host port so NGINX is reachable at a known address
        static = 8080
      }
    }

    service {
      name = "nginx"
      port = "http"
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx"

        ports = ["http"]

        # Mount the task's local/ directory, where the template below
        # renders its output, as NGINX's conf.d
        volumes = [
          "local:/etc/nginx/conf.d",
        ]
      }

      template {
        data = <<EOF
upstream backend {
{{ range service "demo-webapp" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
   listen 8080;

   location / {
      proxy_pass http://backend;
   }
}
EOF

        destination   = "local/load-balancer.conf"
        # On re-render, signal NGINX with SIGHUP so it reloads the
        # configuration without restarting
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }
    }
  }
}
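
The template body uses Consul Template syntax: the range block iterates over the healthy instances of the demo-webapp service, and the else branch emits an unreachable backend so NGINX returns a 502 instead of failing to start when no instances are up. If you want to experiment with the snippet outside of Nomad, you can render it with the standalone consul-template binary (the file names here are hypothetical):

$ consul-template -template "load-balancer.conf.tpl:load-balancer.conf" -once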

Now, run the NGINX job.

$ nomad run nginx.nomad.hcl
==> Monitoring evaluation "45da5a89"
    Evaluation triggered by job "nginx"
    Allocation "c7f8af51" created: node "983a64df", group "nginx"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "45da5a89" finished with status "complete"

Consul Template supports blocking queries. This means that your NGINX deployment (which uses the template stanza) is notified immediately when the health of one of the service endpoints changes, and re-renders a load balancer configuration file that includes only the healthy service instances.
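
Behind the scenes, a blocking query works like this: the Consul health endpoint returns an X-Consul-Index header, and passing that value back via the index parameter makes the next request hang until the service's health changes (or the wait time elapses). A minimal sketch with curl, assuming Consul's HTTP API on its default port 8500 and an illustrative index of 512:

$ curl -i "http://127.0.0.1:8500/v1/health/service/demo-webapp?passing"
$ curl "http://127.0.0.1:8500/v1/health/service/demo-webapp?passing&index=512&wait=5m"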

You can use the nomad alloc fs command on your NGINX allocation to read the rendered load balancer configuration file.

First, obtain the allocation ID of your NGINX deployment (the output below is abbreviated). Keep in mind that allocation IDs are environment-specific, so yours will differ:

$ nomad status nginx
ID            = nginx
Name          = nginx
...
Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
nginx       0       0         1        0       0         0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created     Modified
76692834  f5fdf017  nginx       0        run      running  17m40s ago  17m25s ago

Next, use the alloc fs command to read the load balancer configuration:

$ nomad alloc fs 766 nginx/local/load-balancer.conf
upstream backend {

  server 172.31.48.118:21354;

  server 172.31.52.52:25958;

  server 172.31.52.7:29728;

}

server {
   listen 8080;

   location / {
      proxy_pass http://backend;
   }
}

At this point, you can change the count of your demo-webapp job and repeat the previous command to verify that the load balancer configuration updates dynamically.
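
For example, you can scale the group with the nomad job scale command (available in Nomad 0.11 and later) and then re-read the rendered configuration, reusing the allocation ID prefix from above:

$ nomad job scale demo-webapp demo 5
$ nomad alloc fs 766 nginx/local/load-balancer.conf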

If you query the NGINX load balancer from a node inside your cluster, you should see a response similar to the one below:

$ curl nginx.service.consul:8080
Welcome! You are on node 172.31.48.118:21354

Note that your request was forwarded to one of the deployed instances of the demo web application, which is spread across three Nomad clients. The output shows the IP address and port of the instance that served the request. If you repeat the request, the address changes depending on which backend instance receives it.
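
A quick way to observe the load balancing is to issue several requests in a row:

$ for i in $(seq 1 5); do curl -s nginx.service.consul:8080; done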

Note

If you would like to access NGINX from outside your cluster, you can set up an external load balancer in your environment that forwards to port 8080 on your clients (or whichever port you configured NGINX to listen on), and then send requests directly to that load balancer.

