Deploy Consul on VMs

Consul is a service networking solution that helps you manage secure network connectivity between services, and that works across on-premise and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. Check out the What is Consul? page to learn more.

In this tutorial, you will configure, deploy, and bootstrap a Consul server on
a virtual machine (VM). After deploying Consul, you will interact with Consul using the UI, CLI, and API.

You will use this server in the other tutorials of the Get Started on VMs tutorial collection. In those tutorials you will deploy a demo application, configure it to use Consul service discovery, secure it with service mesh, allow external traffic into the service mesh, and enhance observability into your service mesh. During the process, you will learn how to leverage Consul to securely connect your services running on any environment.

In this tutorial, you will:

  - Generate and validate the Consul server configuration
  - Start the Consul server agent
  - Bootstrap the ACL system and create ACL tokens for the server agent
  - Interact with Consul using the UI, CLI, API, and DNS

Note

Because this tutorial is part of the Get Started on VMs tutorial collection, the following workflow was designed for education and demonstration. It uses scripts to generate agent configurations and requires you to execute commands manually on different nodes. If you are setting up a production environment you should codify and automate the installation and deployment process according to your infrastructure and networking needs. Refer to the VM production patterns tutorial collection for Consul production deployment considerations and best practices.

Tutorial scenario

This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.

At the beginning of this tutorial, there are six VMs: four VMs running the HashiCups application, one empty VM where you will deploy the Consul server, and one bastion host that you will use to interact with the other VMs.

At the end of this tutorial, you will have deployed a Consul server agent running on one of the machines.

For this tutorial, you will need:

  - An AWS account with credentials configured for use with Terraform
  - The Terraform CLI installed locally
  - The git CLI installed locally

Clone GitHub repository

Clone the GitHub repository containing the configuration files and resources.

$ git clone https://github.com/hashicorp-education/learn-consul-get-started-vms

Alternatively, clone the repository over SSH.

$ git clone git@github.com:hashicorp-education/learn-consul-get-started-vms

Change into the directory that contains the complete configuration files for this tutorial.

$ cd learn-consul-get-started-vms/self-managed/infrastructure/aws
Create infrastructure

With these Terraform configuration files, you are ready to deploy your infrastructure.

Issue the terraform init command from your working directory to download the necessary providers and initialize the backend.

$ terraform init
Initializing the backend...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...

Then, deploy the resources. Enter yes to confirm the run.

$ terraform apply --var-file=../../ops/conf/GS_00_base_scenario.tfvars
## ...
Do you want to perform these actions?
 Terraform will perform the actions described above.
 Only 'yes' will be accepted to approve.
 Enter a value: yes
## ...
Apply complete! Resources: 49 added, 0 changed, 0 destroyed.

The Terraform deployment could take up to 15 minutes to complete. Feel free to explore the other sections of this tutorial while you wait for the environment to finish initialization.

After the deployment completes, Terraform returns a list of outputs you can use to interact with the newly created environment.

Outputs:

connection_string = "ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`"
ip_bastion = "54.185.247.184"
retry_join = <sensitive>
ui_consul = "https://18.236.102.73:8443"
ui_grafana = "http://54.185.247.184:3000/d/hashicups/hashicups"
ui_hashicups = "http://34.220.205.132"
ui_hashicups_API_GW = "https://35.91.171.47:8443"

The Terraform output provides useful information, including the bastion host IP address. The following is a brief description of the Terraform outputs:

  - connection_string: the SSH command you use to log in to the bastion host
  - ip_bastion: the public IP address of the bastion host
  - retry_join: the Consul retry_join configuration string (marked as sensitive)
  - ui_consul: the Consul UI address
  - ui_grafana: the Grafana UI address
  - ui_hashicups: the HashiCups UI address
  - ui_hashicups_API_GW: the HashiCups address exposed through the Consul API gateway
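
You can also print a single output value with terraform output. For example, to retrieve the HashiCups UI address on its own:

$ terraform output -raw ui_hashicups
http://34.220.205.132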

Log in to the bastion host VM

Log in to the bastion host using ssh.

$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
Verify Consul binary

You will use the bastion host to generate the Consul server configuration and to interact with Consul, so make sure the Consul binary is installed.

$ consul version 
Consul v1.20.2
Revision 33e5727a
Build Date 2025-01-03T14:38:40Z
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)

The interactive lab environment setup installs scripts and supporting files from the learn-consul-get-started-vms repository on GitHub. Verify that the scripts are present on the bastion host.

$ tree ~/ops/scenarios/00_base_scenario_files/supporting_scripts/ 
~/ops/scenarios/00_base_scenario_files/supporting_scripts/
|-- download_consul_esm.sh
|-- download_consul_template.sh
|-- generate_consul_client_config.sh
|-- generate_consul_monitoring_config.sh
|-- generate_consul_server_config.sh
|-- generate_consul_server_tokens.sh
|-- generate_global_config_hashicups.sh
`-- generate_hashicups_service_config.sh

0 directories, 8 files

The scripts use environment variables to generate the configuration files. Source the env-scenario.env file to set the variables in the terminal session.

$ source assets/scenario/env-scenario.env

Verify that the variables are correctly set in the environment.

$ env | grep CONSUL_
CONSUL_SERVER_NUMBER=1
CONSUL_RETRY_JOIN=<retry join string>
CONSUL_DATACENTER=dc1
CONSUL_DOMAIN=consul
CONSUL_DATA_DIR=/opt/consul/
CONSUL_CONFIG_DIR=/etc/consul.d/

The scripts also require a destination folder for the files they create. Export the path where you want to create the configuration files for the scenario.

$ export OUTPUT_FOLDER=/home/admin/assets/scenario/conf/

Note

When following this tutorial, we suggest you use the default variables to help you avoid typos and focus on the process. If you decide to use custom values, verify that export commands always use the correct custom values.
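
If you do use custom values, you can override any variable after sourcing the file and before running the generation scripts. For example, to set a hypothetical custom datacenter name:

$ export CONSUL_DATACENTER=dc2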

Make sure the folder exists.

$ mkdir -p ${OUTPUT_FOLDER}

Generate all necessary files to configure and run the Consul server agent.

$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_server_config.sh 
 [generate_consul_server_config.sh] - Generate Consul servers configuration

+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_SERVER_NUMBER = 1
[WARN] CONSUL_RETRY_JOIN = consul-server-0
[WARN] CONSUL_CONFIG_DIR = /etc/consul.d/
[WARN] CONSUL_DATA_DIR = /opt/consul
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------

+ --------------------
| Prepare folder
+ --------------------
 - Cleaning folder from pre-existing files
[WARN] Removing pre-existing configuration in ~/assets/scenario/conf/
 - Generate scenario config folders.

+ --------------------
| Generate secrets
+ --------------------
Generating Gossip Encryption Key.
Generate CA for *.dc1.consul
==> Saved consul-agent-ca.pem
==> Saved consul-agent-ca-key.pem
Generate Server Certificates
==> WARNING: Server Certificates grants authority to become a
    server and access all state in the cluster including root keys
    and all ACL tokens. Do not distribute them to production hosts
    that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-server-consul-0.pem
==> Saved dc1-server-consul-0-key.pem

+ --------------------
| Generate Consul server agent configuration
+ --------------------
Generating Configuration for consul-server-0
 - Copy certificate files
 - Generate consul.hcl - requirement for systemd service
 - Generate agent-server-specific.hcl - server specific configuration
 - Generate agent-server-specific-ui.hcl - server specific UI configuration
 - Generate agent-server-networking.hcl - server networking configuration
 - Generate agent-server-tls.hcl - server TLS configuration
 - Generate agent-server-acl.hcl - server ACL configuration
 - Generate agent-server-telemetry.hcl - server telemetry configuration
 - Validate configuration for consul-server-0
+ --------------------

When the script completes, list the generated files.

$ tree ${OUTPUT_FOLDER} 
~/assets/scenario/conf/
|-- consul-server-0
|   |-- agent-gossip-encryption.hcl
|   |-- agent-server-acl.hcl
|   |-- agent-server-networking.hcl
|   |-- agent-server-specific-ui.hcl
|   |-- agent-server-specific.hcl
|   |-- agent-server-telemetry.hcl
|   |-- agent-server-tls.hcl
|   |-- consul-agent-ca.pem
|   |-- consul-agent-key.pem
|   |-- consul-agent.pem
|   `-- consul.hcl
`-- secrets
    |-- agent-gossip-encryption.hcl
    |-- consul-agent-ca-key.pem
    |-- consul-agent-ca.pem
    |-- dc1-server-consul-0-key.pem
    `-- dc1-server-consul-0.pem

2 directories, 16 files
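
The generated files split the server configuration across several HCL files. As a rough, illustrative sketch based on this scenario's defaults (not the exact generated content), the combined server configuration expresses settings like the following:

# Illustrative sketch of the combined server configuration
datacenter = "dc1"
domain = "consul"
node_name = "consul-server-0"
data_dir = "/opt/consul"
server = true
bootstrap_expect = 1
retry_join = ["consul-server-0"]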
Test configuration

Verify that the configuration generated for consul-server-0 is valid. Despite the warnings about skipped certificate files and bootstrap mode, the Consul configuration files are valid.

$ consul validate ${OUTPUT_FOLDER}/consul-server-0 
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent-ca.pem, extension must be .hcl or .json, or config format must be set
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent-key.pem, extension must be .hcl or .json, or config format must be set
skipping file ~/assets/scenario/conf/consul-server-0/consul-agent.pem, extension must be .hcl or .json, or config format must be set
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
Configuration is valid!
Copy configuration to the Consul server node

Copy the configuration files to the consul-server-0 VM.

First, define an environment variable with the Consul configuration directory on the remote server.

$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/

Then, remove existing configuration from the server.

$ ssh -i certs/id_rsa consul-server-0 "sudo rm -rf ${CONSUL_REMOTE_CONFIG_DIR}*"

Finally, use rsync to copy the configuration into the server node.

$ rsync -av --no-g --no-t --no-p \
    -e "ssh -i ~/certs/id_rsa" \
    ${OUTPUT_FOLDER}consul-server-0/ \
    consul-server-0:${CONSUL_REMOTE_CONFIG_DIR}

Output is similar to the following:

sending incremental file list
./
agent-gossip-encryption.hcl
agent-server-acl.hcl
agent-server-networking.hcl
agent-server-specific-ui.hcl
agent-server-specific.hcl
agent-server-telemetry.hcl
agent-server-tls.hcl
consul-agent-ca.pem
consul-agent-key.pem
consul-agent.pem
consul.hcl

sent 6,877 bytes  received 228 bytes  14,210.00 bytes/sec
total size is 6,024  speedup is 0.85

Log in to consul-server-0 from the bastion host.

$ ssh -i certs/id_rsa consul-server-0
##..
admin@consul-server-0:~

Make sure your user has write permissions in the Consul data directory.

$ sudo chmod g+w /opt/consul/

Finally, start the Consul server process.

$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-server.log 2>&1 &

The command starts the Consul server in the background so it does not lock the terminal. You can access the Consul server log through the /tmp/consul-server.log file.

$ cat /tmp/consul-server.log 
==> Starting Consul agent...
               Version: '1.20.2'
            Build Date: '2025-01-03 14:38:40 +0000 UTC'
               Node ID: 'bc1c1796-57f6-22c4-808c-4b8af45034da'
             Node name: 'consul-server-0'
            Datacenter: 'dc1' (Segment: '<all>')
                Server: true (Bootstrap: true)
           Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: 8443, gRPC: -1, gRPC-TLS: 8503, DNS: 53)
          Cluster Addr: 172.18.0.3 (LAN: 8301, WAN: 8302)
     Gossip Encryption: true
      Auto-Encrypt-TLS: true
           ACL Enabled: true
     Reporting Enabled: false
    ACL Default Policy: deny
             HTTPS TLS: Verify Incoming: false, Verify Outgoing: true, Min Version: TLSv1_2
              gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
      Internal RPC TLS: Verify Incoming: true, Verify Outgoing: true (Verify Hostname: true), Min Version: TLSv1_2

==> Log data will now stream in as it occurs:

[WARN]  agent: skipping file /etc/consul.d/consul-agent-ca.pem, extension must be .hcl or .json, or config format must be set
[WARN]  agent: skipping file /etc/consul.d/consul-agent-key.pem, extension must be .hcl or .json, or config format must be set
[WARN]  agent: skipping file /etc/consul.d/consul-agent.pem, extension must be .hcl or .json, or config format must be set
[WARN]  agent: BootstrapExpect is set to 1; this is the same as Bootstrap mode.
[WARN]  agent: bootstrap = true: do not enable unless necessary
[INFO]  agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc1c1796-57f6-22c4-808c-4b8af45034da Address:172.18.0.3:8300}]"
[INFO]  agent.server.raft: entering follower state: follower="Node at 172.18.0.3:8300 [Follower]" leader-address= leader-id=
[INFO]  agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc1 172.18.0.3
[INFO]  agent.server.serf.lan: serf: EventMemberJoin: consul-server-0 172.18.0.3
[INFO]  agent.router: Initializing LAN area manager
[DEBUG] agent.grpc.balancer: switching server: target=consul://dc1.bc1c1796-57f6-22c4-808c-4b8af45034da/server.dc1 from=<none> to=dc1-172.18.0.3:8300
[INFO]  agent.server.autopilot: reconciliation now disabled
[INFO]  agent.server: Adding LAN server: server="consul-server-0 (Addr: tcp/172.18.0.3:8300) (DC: dc1)"
[INFO]  agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc1 area=wan
[INFO]  agent.server.cert-manager: initialized server certificate management
##...
[INFO]  agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
[INFO]  agent: Starting server: address=[::]:8443 network=tcp protocol=https
[INFO]  agent: Started gRPC listeners: port_name=grpc_tls address=127.0.0.1:8503 network=tcp
[INFO]  agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce hcp k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
[INFO]  agent: Joining cluster...: cluster=LAN
[INFO]  agent: (LAN) joining: lan_addresses=["consul-server-0"]
[INFO]  agent: started state syncer
[INFO]  agent: Consul agent running!
[INFO]  agent: (LAN) joined: number_of_nodes=1
[INFO]  agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=1
##...
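
If you prefer to follow the log in real time while the server runs, use tail and press Ctrl+C to stop following:

$ tail -f /tmp/consul-server.log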

Exit the SSH session to return to the bastion host.

$ exit
logout
Connection to consul-server-0 closed.
admin@bastion:~$

To interact with the Consul server, you need to set up your terminal.

Make sure the scenario environment variables are still defined.

$ export CONSUL_DOMAIN=consul \
  export CONSUL_DATACENTER=dc1 \
  export OUTPUT_FOLDER=/home/admin/assets/scenario/conf/

Export the following environment variables to configure the Consul CLI to interact with the Consul server.

$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443" \
  export CONSUL_HTTP_SSL=true \
  export CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem" \
  export CONSUL_TLS_SERVER_NAME="server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}"

Execute the consul info command to verify that the Consul CLI can reach your Consul server.

The output informs you that, while the Consul CLI can reach your Consul server, Consul's ACLs are blocking the request.

$ consul info 
Error querying agent: Unexpected response code: 403 (Permission denied: anonymous token lacks permission 'agent:read' on "consul-server-0". The anonymous token is used implicitly when a request does not specify a token.)

Bootstrap the Consul ACL system and save the output in a file named acl-token-bootstrap.json.

$ consul acl bootstrap --format json | tee ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json
{
    "CreateIndex": 21,
    "ModifyIndex": 21,
    "AccessorID": "c779a34c-f978-93f8-30e3-733f480821a0",
    "SecretID": "a3301c84-e67a-dd2b-39e9-d746e21b6766",
    "Description": "Bootstrap Token (Global Management)",
    "Policies": [
        {
            "ID": "00000000-0000-0000-0000-000000000001",
            "Name": "global-management"
        }
    ],
    "Local": false,
    "CreateTime": "2025-01-21T11:31:32.671810716Z",
    "Hash": "X2AgaFhnQGRhSSF/h0m6qpX1wj/HJWbyXcxkEM/5GrY="
}

The command generates a global management token with full permissions over your datacenter. The management token is the value associated with the SecretID key.

Extract the management token from the file and set it to the CONSUL_HTTP_TOKEN environment variable.

$ export CONSUL_HTTP_TOKEN=`cat ${OUTPUT_FOLDER}secrets/acl-token-bootstrap.json | jq -r ".SecretID"`
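
To confirm that the CLI now authenticates with the bootstrap token, you can read the token's own details back. The command prints the token's AccessorID, description, and attached policies:

$ consul acl token read -self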

Now that the ACL system is bootstrapped, execute the consul info command again to interact with the Consul server.

$ consul info 
agent:
    check_monitors = 0
    check_ttls = 0
    checks = 0
    services = 0
build:
    prerelease = 
    revision = 33e5727a
    version = 1.20.2
    version_metadata = 
consul:
    acl = enabled
    bootstrap = true
    known_datacenters = 1
    leader = true
    leader_addr = 172.18.0.3:8300
    server = true
raft:
    ## ...
runtime:
    ## ...
serf_lan:
    ## ...
    encrypted = true
    ## ...
    members = 1
    ## ...
serf_wan:
    ## ...
    encrypted = true
    ## ...
    members = 1
    ## ...

The Consul datacenter's ACL system is now fully bootstrapped, and the server agent is ready to receive requests. To complete the Consul server's configuration, create ACL tokens for the server agent to use.

In this section, the generate_consul_server_tokens.sh script automates the process of creating policies and tokens for your Consul server. This script generates two ACL tokens with different policies, one for the Consul DNS service and one for the server agent, and then applies them to the Consul server.

In the terminal with your bastion host, run the generate_consul_server_tokens.sh script to create the ACL policies and tokens for your Consul server.

$ ~/ops/scenarios/00_base_scenario_files/supporting_scripts/generate_consul_server_tokens.sh 
[generate_consul_server_tokens.sh] - - Generate Consul server tokens

+ --------------------
| Parameter Check
+ --------------------
[WARN] Script is running with the following values
[WARN] ----------
[WARN] CONSUL_DATACENTER = dc1
[WARN] CONSUL_DOMAIN = consul
[WARN] CONSUL_SERVER_NUMBER = 1
[WARN] ----------
[WARN] Generated configuration will be placed under:
[WARN] OUTPUT_FOLDER = ~/assets/scenario/conf/
[WARN] ----------

+ --------------------
| Prepare folder
+ --------------------

+ --------------------
| Create Consul ACL policies and tokens
+ --------------------
 - Define policies
   - [ acl-policy-dns.hcl ]
   - [ acl-policy-server-node ]
 - Configure CLI to communicate with Consul
 - Create Consul ACL policies
 - Create Consul ACL tokens

   - Set tokens for consul-server-0
ACL token "agent" set successfully
ACL token "default" set successfully

After you create the server tokens, your Consul logs show the updated ACL tokens.

Alternatively, you can manually create the ACL tokens for the Consul DNS service and your server agent. Perform the following steps for each token:

  1. Define the ACL rules in a policy configuration file.
  2. Create the Consul policy.
  3. Create the ACL token for the policy.
  4. Assign the token to the server agent.
Define the ACL rules in policy configuration files

First, define the DNS policy file. The following policy grants the Consul DNS service read access to all nodes, services, and prepared queries. This access is sufficient for Consul DNS requests and does not grant unneeded permissions.

$ tee ./acl-policy-dns.hcl > /dev/null << EOF
## acl-policy-dns.hcl
node_prefix "" {
  policy = "read"
}
service_prefix "" {
  policy = "read"
}
# Required if you use prepared queries
query_prefix "" {
  policy = "read"
}
EOF

This command produces no output.

Then, define the server policy in a file named acl-policy-server-node.hcl. The following policy grants write access to nodes with names that begin with consul. In these tutorials, Consul server agents run on nodes that follow the naming pattern consul-server-0.

$ tee ./acl-policy-server-node.hcl > /dev/null << EOF
## acl-policy-server-node.hcl
node_prefix "consul" {
  policy = "write"
}
EOF

This command produces no output.

Create the Consul policies

First, create the DNS policy with the rules you configured.

$ consul acl policy create \
  -name "acl-policy-dns" \
  -description "Policy for DNS endpoints" \
  -rules @./acl-policy-dns.hcl

The output is similar to the following:

ID:           4a2c390e-6a02-e106-0577-b40fbed7c827
Name:         acl-policy-dns
Description:  Policy for DNS endpoints
Datacenters:  
Rules:
## dns-request-policy.hcl
node_prefix "" {
  policy = "read"
}
service_prefix "" {
  policy = "read"
}
# Required if you use prepared queries
query_prefix "" {
  policy = "read"
}

Then, create the server policy with the rules you configured.

$ consul acl policy create \
  -name "acl-policy-server-node" \
  -description "Policy for Server nodes" \
  -rules @./acl-policy-server-node.hcl

The output is similar to the following:

ID:           88a758b2-9f1c-d995-9f0a-62ef779b23c7
Name:         acl-policy-server-node
Description:  Policy for Server nodes
Datacenters:  
Rules:
## consul-server-one-policy.hcl
node_prefix "consul" {
  policy = "write"
}
Create ACL tokens for each policy

First, create the DNS token and save the output in a file named acl-token-dns.json.

$ consul acl token create \
  -description "DNS - Default token" \
  -policy-name acl-policy-dns \
  --format json | tee ./acl-token-dns.json

The output is similar to the following:

{
    "CreateIndex": 30,
    "ModifyIndex": 30,
    "AccessorID": "6579cea9-3e05-98f1-270a-e573fe5d9c86",
    "SecretID": "24147a64-0a8a-08fa-9e33-83172eea0419",
    "Description": "DNS - Default token",
    "Policies": [
        {
            "ID": "4a2c390e-6a02-e106-0577-b40fbed7c827",
            "Name": "acl-policy-dns"
        }
    ],
    "Local": false,
    "CreateTime": "2025-01-21T11:31:33.191074175Z",
    "Hash": "WScAsU8KzzJfQqf8Vw4OnG9wU7p2K4HV3IIVTsiaxJc="
}

Then, create the server node token and save the output in a file named server-acl-token.json.

$ consul acl token create \
  -description "server agent token" \
  -policy-name acl-policy-server-node  \
  --format json | tee ./server-acl-token.json

The output is similar to the following:

{
    "CreateIndex": 31,
    "ModifyIndex": 31,
    "AccessorID": "2fc4c019-db5c-3d3c-174b-d9cc530ee9e3",
    "SecretID": "86b4d0cd-e292-d8a6-f7d2-c2390b38cfd2",
    "Description": "server agent token",
    "Policies": [
        {
            "ID": "88a758b2-9f1c-d995-9f0a-62ef779b23c7",
            "Name": "acl-policy-server-node"
        }
    ],
    "Local": false,
    "CreateTime": "2025-01-21T11:31:33.217366633Z",
    "Hash": "13p32TvAqcNQuHa9+qzMCfeLC5VR/ZeG7euC/Aqlr2Q="
}
Assign tokens to the server agent

First, define two environment variables containing the SecretID of each token.

$ export DNS_TOKEN=`cat ./acl-token-dns.json | jq -r ".SecretID"`; \
  export SERVER_TOKEN=`cat ./server-acl-token.json | jq -r ".SecretID"`

Next, assign the DNS token to the server as the default token.

$ consul acl set-agent-token default ${DNS_TOKEN} 
ACL token "default" set successfully

Finally, assign the server token to the server as the agent token.

$ consul acl set-agent-token agent ${SERVER_TOKEN} 
ACL token "agent" set successfully

Your Consul logs show the updated ACL tokens.

## ...
[INFO]  agent: Updated agent's ACL token: token=agent
## ...
[INFO]  agent: Updated agent's ACL token: token=default
## ...

Use the CLI, API, or UI to retrieve and review information about the Consul datacenter.

Use the Consul CLI to retrieve members in your Consul datacenter.

$ consul members 
Node             Address          Status  Type    Build   Protocol  DC   Partition  Segment
consul-server-0  172.18.0.3:8301  alive   server  1.20.2  2         dc1  default    <all>
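
Other read operations work the same way now that the token is set. For example, list the nodes registered in the Consul catalog:

$ consul catalog nodes
Node             ID        Address     DC
consul-server-0  bc1c1796  172.18.0.3  dc1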

Refer to the Consul CLI commands reference for the full list of available commands.

Query the Consul API to retrieve members in your Consul datacenter.

$ curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --connect-to server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443:consul-server-0:8443 \
  --cacert ~/assets/scenario/conf/secrets/consul-agent-ca.pem \
  https://server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443/v1/agent/members | jq

The output is similar to the following:

[
  {
    "Name": "consul-server-0",
    "Addr": "172.18.0.3",
    "Port": 8301,
    "Tags": {
      "acls": "1",
      "bootstrap": "1",
      "build": "1.20.2:33e5727a",
      "dc": "dc1",
      "ft_fs": "1",
      "ft_si": "1",
      "grpc_tls_port": "8503",
      "id": "bc1c1796-57f6-22c4-808c-4b8af45034da",
      "port": "8300",
      "raft_vsn": "3",
      "role": "consul",
      "segment": "",
      "use_tls": "1",
      "vsn": "2",
      "vsn_max": "3",
      "vsn_min": "2",
      "wan_join_port": "8302"
    },
    "Status": 1,
    "ProtocolMin": 1,
    "ProtocolMax": 5,
    "ProtocolCur": 2,
    "DelegateMin": 2,
    "DelegateMax": 5,
    "DelegateCur": 4
  }
]
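
The same connection options work with any read endpoint. For example, query the address of the current Raft leader:

$ curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --connect-to server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443:consul-server-0:8443 \
  --cacert ~/assets/scenario/conf/secrets/consul-agent-ca.pem \
  https://server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443/v1/status/leader
"172.18.0.3:8300"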

Refer to the Consul HTTP API reference for the full list of available endpoints.

Retrieve the Consul UI address from Terraform.

$ terraform output -raw ui_consul

Open the address in a browser.

After accepting the certificate presented by the Consul server, you will land on the Services page.

Consul UI permissions match the permissions of the default token configured on the agent that serves the UI. In this scenario, the Consul server's default token uses the acl-policy-dns policy, so the UI can display all nodes and services.

Security Note

We recommend fine-tuning the default policy for your own security requirements when planning your production configuration.

If you try to access an unauthorized resource, Consul redirects you to a 403 page stating that you are not authorized to retrieve information from your Consul datacenter until you authenticate with a token.

To authenticate, click Log in and enter the management token you created when you bootstrapped the ACL system.

Once authenticated, Consul redirects you to the Services page, where you will find a healthy and available consul service.

Consul includes a key/value (KV) store that you can use to manage your services' configuration. Although you can interact with the KV store through the CLI, API, and UI, this tutorial only covers the CLI and API methods.

Using the CLI, create a key named db_port with a value of 5432.

$ consul kv put consul/configuration/db_port 5432 
Success! Data written to: consul/configuration/db_port

Then, retrieve the value.

$ consul kv get consul/configuration/db_port 
5432
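
To list every key under a prefix along with its value, use the -recurse flag:

$ consul kv get -recurse consul/configuration
consul/configuration/db_port:5432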

Using the API, create a key named db_port with a value of 5432.

$ curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --connect-to server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443:consul-server-0:8443 \
  --cacert ~/assets/scenario/conf/secrets/consul-agent-ca.pem \
  --request PUT \
  --data "5432" \
  https://server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443/v1/kv/consul/configuration/db_port

The output is similar to the following:

true

Then, retrieve the value.

$ curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --connect-to server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443:consul-server-0:8443 \
  --cacert ~/assets/scenario/conf/secrets/consul-agent-ca.pem \
  https://server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443/v1/kv/consul/configuration/db_port | jq

The output is similar to the following:

[
  {
    "LockIndex": 0,
    "Key": "consul/configuration/db_port",
    "Flags": 0,
    "Value": "NTQzMg==",
    "CreateIndex": 32,
    "ModifyIndex": 32
  }
]

Notice the response returns the base64 encoded value.

To retrieve the raw value, extract the value and then base64 decode it.

$ curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --connect-to server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443:consul-server-0:8443 \
  --cacert ~/assets/scenario/conf/secrets/consul-agent-ca.pem \
  https://server.${CONSUL_DATACENTER}.${CONSUL_DOMAIN}:8443/v1/kv/consul/configuration/db_port | \
  jq -r ".[].Value" | base64 --decode

The output is similar to the following:

5432

Consul also provides you with a fully featured DNS server that you can use to resolve your services. By default, Consul DNS listens on port 8600; in this scenario, the Consul server is configured to serve DNS on the standard port 53.

$ dig @consul-server-0 -p 53 consul.service.consul

; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> @consul-server-0 -p 53 consul.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58457
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;consul.service.consul.     IN  A

;; ANSWER SECTION:
consul.service.consul.  0   IN  A   172.18.0.3

;; Query time: 0 msec
;; SERVER: 172.18.0.3#53(consul-server-0) (UDP)
;; WHEN: Tue Jan 21 11:31:33 UTC 2025
;; MSG SIZE  rcvd: 66
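
Consul DNS also resolves node names using the <node>.node.<domain> naming scheme. For example, resolve the server node's address directly:

$ dig @consul-server-0 -p 53 +short consul-server-0.node.consul
172.18.0.3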

In this tutorial, you deployed a Consul server on a VM. After deploying Consul, you interacted with Consul using the CLI, API, and UI.

This deployment does not have Consul client agents running. Even when deployed without client agents, you can still:

  - Interact with Consul using the UI, CLI, and API
  - Use the Consul KV store
  - Resolve services with Consul DNS

If you want to stop at this tutorial, you can destroy the infrastructure now.

From the ./self-managed/infrastructure/aws folder of the repository, use Terraform to destroy the infrastructure.

$ terraform destroy --auto-approve

In the next tutorial, you will deploy Consul clients on the VMs hosting your application. Then, you will register the services running on each server and set up health checks for each service. This enables service discovery using Consul's distributed health check system and DNS.

For more information about the topics covered in this tutorial, refer to the following resources:

