Set up a Nomad cluster on Azure

This tutorial will guide you through deploying a Nomad cluster with access control lists (ACLs) enabled on Azure. Consider checking out the cluster setup overview first as it covers the contents of the code repository used in this tutorial.

For this tutorial, you need:

- Terraform installed locally
- Packer installed locally
- the Nomad CLI installed locally
- the Azure CLI (az) installed and authenticated
- an Azure account with an active subscription

Note

This tutorial creates Azure resources that may not qualify for the Azure free tier. Be sure to follow the cleanup process at the end of this tutorial so you don't incur unnecessary charges.

The cluster setup code repository contains configuration files for creating a Nomad cluster on Azure. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.

Clone the code repository.

$ git clone https://github.com/hashicorp-education/learn-nomad-cluster-setup

Navigate to the cloned repository folder.

$ cd learn-nomad-cluster-setup

Navigate to the azure folder.

$ cd azure

There are two main steps to creating the cluster: building a virtual machine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require that you configure variables before you run commands. The variables.hcl.example file contains the configuration you need for this tutorial.
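
The commands below summarize that workflow. Each one is explained in the following sections, so treat this as a preview rather than something to run all at once here.

$ terraform init                                                              # download the Terraform provider plugins
$ terraform apply -target='azurerm_resource_group.hashistack' -var-file=variables.hcl   # create the resource group Packer needs
$ packer init image.pkr.hcl                                                   # download the Packer plugins
$ packer build -var-file=variables.hcl image.pkr.hcl                          # build the VM image
$ terraform apply -var-file=variables.hcl                                     # provision the cluster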

Update the variables file for Packer

Rename variables.hcl.example to variables.hcl and open it in your text editor.

$ mv variables.hcl.example variables.hcl

Update the location variable with your preferred Azure datacenter location. In this example, the location is eastus. The remaining variables are for Terraform, and you update them after building the VM image.

azure/variables.hcl

# Packer variables (all are required)
location = "eastus"

## ...
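
Before running Terraform and Packer, make sure your Azure credentials are in place. Authenticating the Azure CLI is one common approach; the exact authentication method the repository's Terraform and Packer configuration expects (CLI session, service principal environment variables, and so on) is not shown here, so treat this as an optional sanity check rather than a required step.

$ az login           # opens a browser window to authenticate by default
$ az account show    # confirms which subscription is active
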
Create an Azure resource group

Packer needs an existing resource group to build the VM image. Initialize Terraform to download the required plugins and set up the workspace.

$ terraform init
Initializing the backend...
# ...
Initializing provider plugins...
# ...
Terraform has been successfully initialized!
# ...

Next, run Terraform in target mode so that it only deploys the resource group for Packer to use. Enter yes to confirm the run.

$ terraform apply -target='azurerm_resource_group.hashistack' -var-file=variables.hcl

## ...

Terraform will perform the following actions:

  # azurerm_resource_group.hashistack will be created
  + resource "azurerm_resource_group" "hashistack" {
      + id       = (known after apply)
      + location = "eastus"
      + name     = "hashistack"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

│ Warning: Resource targeting is in effect


## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

azurerm_resource_group.hashistack: Creating...
azurerm_resource_group.hashistack: Creation complete after 1s [id=/subscriptions/dc876065-5a56-4f4e-b8c8-eff71e1071e1/resourceGroups/hashistack]

│ Warning: Applied changes may be incomplete


## ...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
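
If you want to confirm the resource group exists before building the image, you can look it up with the Azure CLI. This optional check assumes the hashistack resource group name from the example output above.

$ az group show --name hashistack
# prints the resource group's id, location, and provisioning state as JSON
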
Build the VM image

Now that there is an existing resource group, Packer is ready to build the VM image. First, initialize Packer to download the required plugins.

Tip

packer init returns no output when it finishes successfully.

$ packer init image.pkr.hcl

Then, build the image and provide the variables file with the -var-file flag.

Tip

Packer prints a Warning: Undefined variable message notifying you that some variables were set in variables.hcl but not used. This is only a warning, and the build will still complete successfully.

$ packer build -var-file=variables.hcl image.pkr.hcl

# ...

Build 'azure-arm.hashistack' finished after 7 minutes 46 seconds.

==> Wait completed after 7 minutes 46 seconds

==> Builds finished. The artifacts of successful builds are:
--> azure-arm.hashistack: Azure.ResourceManagement.VMImage:

OSType: Linux
ManagedImageResourceGroupName: hashistack
ManagedImageName: hashistack.20221202190723
ManagedImageId: /subscriptions/dc876065-5a56-4f4e-b8c8-eff71e1071e1/resourceGroups/hashistack/providers/Microsoft.Compute/images/hashistack.20221202190723
ManagedImageLocation: eastus

Packer outputs the specific VM image name once it finishes building the image. In this example, the value is hashistack.20221202190723.
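
If you lose track of the image name, you can also list the managed images with the Azure CLI. This optional command assumes the hashistack resource group from this example.

$ az image list --resource-group hashistack --query '[].name' --output tsv
hashistack.20221202190723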

Update the variables file for Terraform

Open variables.hcl in your text editor and update the image_name variable with the value output from the Packer build contained in ManagedImageName. In this example, the value is hashistack.20221202190723.

azure/variables.hcl

# ...

image_name = "hashistack.20221202190723"

# These variables will default to the values shown
# and do not need to be updated unless you want to
# change them
# allowlist_ip                    = "0.0.0.0/0"
# name                            = "nomad"
# server_instance_type            = "t2.micro"
# server_count                    = "3"
# client_instance_type            = "t2.micro"
# client_count                    = "3"

The remaining variables in variables.hcl are optional.

Deploy the Nomad cluster

Run the Terraform deployment and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. The Consul and Nomad web interfaces are available upon completion.

$ terraform apply -var-file=variables.hcl

# ...

Plan: 34 to add, 0 to change, 0 to destroy.

# ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

# ...


Apply complete! Resources: 28 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT

Client public IPs: 52.91.50.99, 18.212.78.29, 3.93.189.88

Server public IPs: 107.21.138.240, 54.224.82.187, 3.87.112.200

The Consul UI can be accessed at http://107.21.138.240:8500/ui
with the bootstrap token: dbd4d67b-4629-975c-e9a8-ff1a38ed1520

EOT
consul_bootstrap_token_secret = "dbd4d67b-4629-975c-e9a8-ff1a38ed1520"
lb_address_consul_nomad = "http://107.21.138.240"

Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the URL in the Terraform output.

Click on the Log in button and use the bootstrap token secret consul_bootstrap_token_secret from the Terraform output to log in.

Click on the Nodes page from the sidebar navigation. There are six healthy nodes: the three Consul servers and three Consul clients created with Terraform.
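
As an optional alternative to the UI, you can query the Consul HTTP API directly. This sketch uses the lb_address_consul_nomad and consul_bootstrap_token_secret values from the Terraform output above and assumes Consul's default HTTP port of 8500.

$ export CONSUL_HTTP_ADDR=$(terraform output -raw lb_address_consul_nomad):8500
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_bootstrap_token_secret)
$ curl --silent --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" "$CONSUL_HTTP_ADDR/v1/catalog/nodes"
# returns a JSON list of the registered nodes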

Run the post-setup.sh script.

Note

It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store. If the post-setup.sh script doesn't work the first time, wait a couple of minutes and try again.

$ ./post-setup.sh
The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.

Set the following environment variables to access your Nomad cluster with the user token created during setup:

export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)


The Nomad UI can be accessed at http://107.21.138.240:4646/ui
with the bootstrap token: 22444f72-c222-bd26-6c2c-584fb9e5b698

Apply the export commands from the output.

$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && \
  export NOMAD_TOKEN=$(cat nomad.token)

Finally, verify connectivity to the cluster with the nomad node status command.

$ nomad node status
ID        Node Pool  DC   Name                 Class   Drain  Eligibility  Status
06320436  default    dc1  hashistack-client-1  <none>  false  eligible     ready
6f5076b1  default    dc1  hashistack-client-2  <none>  false  eligible     ready
5fc1e22c  default    dc1  hashistack-client-0  <none>  false  eligible     ready
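
You can also confirm that the three servers have joined and elected a leader. nomad server members is a standard Nomad CLI command and uses the same NOMAD_ADDR and NOMAD_TOKEN environment variables you just exported.

$ nomad server members
# lists each server's name, address, status, and whether it is the leader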

Navigate to the Nomad UI in your web browser with the URL in the post-setup.sh script output. Click on Sign In in the top right corner and log in with the bootstrap token saved in the NOMAD_TOKEN environment variable. Set the Secret ID to the token's value and click Sign in with secret.

Click on the Clients page from the sidebar navigation and feel free to explore the UI.
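
If the UI rejects the token, you can verify it from the CLI first. nomad acl token self is a standard command that describes the token currently set in NOMAD_TOKEN.

$ nomad acl token self
# prints the token's accessor ID, name, type, and attached policies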

Destroy infrastructure

Use terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.

$ terraform destroy -var-file=variables.hcl

# ...

azurerm_virtual_network.hashistack-vn: Destruction complete after 20s
azurerm_resource_group.hashistack: Destroying... [id=/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/hashistack]
azurerm_resource_group.hashistack: Still destroying... [id=/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/hashistack, 10s elapsed]
azurerm_resource_group.hashistack: Destruction complete after 16s

Destroy complete! Resources: 35 destroyed.

Delete the VM image

Your Azure account still has the virtual machine image, which you may be charged for. Delete the image by running the az image delete command. In this example, the VM image name is hashistack.20221202190723.

$ az image delete --name hashistack.20221202190723 --resource-group hashistack

In this tutorial, you created a Nomad cluster on Azure with Consul and ACLs enabled.

