The HCP Terraform Operator for Kubernetes (Operator) allows you to manage the lifecycle of cloud and on-prem infrastructure through a single Kubernetes custom resource.
You can create application-related infrastructure from a Kubernetes cluster by adding the Operator to your Kubernetes namespace. The Operator uses a Kubernetes Custom Resource Definition (CRD) to manage HCP Terraform workspaces. These workspaces execute an HCP Terraform run to provision Terraform modules. By using HCP Terraform, the Operator leverages its proper state handling and locking, sequential execution of runs, and established patterns for injecting secrets and provisioning resources.
In this tutorial, you will configure and deploy the Operator to a Kubernetes cluster and use it to create an HCP Terraform workspace. You will also use the Operator to provision a message queue that the example application needs for deployment to Kubernetes.
The tutorial assumes some basic familiarity with Kubernetes and kubectl.

For this tutorial, you will need:

An HCP Terraform account
An AWS account and AWS access credentials
Note

This tutorial provisions resources that qualify under the AWS free tier. If your account doesn't qualify for the AWS free tier, we are not responsible for any charges that you may incur.
kubectl
To install kubectl (the Kubernetes CLI), follow these instructions or choose a package manager based on your operating system.

Use the package manager Homebrew to install kubectl.
$ brew install kubernetes-cli
Use the package manager Chocolatey to install kubectl.
$ choco install kubernetes-cli
You will also need a sample kubectl config. We recommend using kind to provision a local Kubernetes cluster and using that config for this tutorial.
Use the package manager Homebrew to install kind.

$ brew install kind

Use the package manager Chocolatey to install kind.

$ choco install kind
Then, create a kind Kubernetes cluster called terraform-learn.
$ kind create cluster --name terraform-learn
Creating cluster "terraform-learn" ...
 ✓ Ensuring node image (kindest/node:v1.20.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-terraform-learn"
You can now use your cluster with:

kubectl cluster-info --context kind-terraform-learn

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Verify that your cluster exists by listing your kind clusters.
$ kind get clusters
terraform-learn
Then, point kubectl to interact with this cluster.
$ kubectl cluster-info --context kind-terraform-learn
Kubernetes master is running at https://127.0.0.1:32769
KubeDNS is running at https://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
In your terminal, clone the Learn Terraform Kubernetes Operator repository.
$ git clone https://github.com/hashicorp-education/learn-terraform-kubernetes-operator
Navigate into the v1 directory in the repository.
$ cd learn-terraform-kubernetes-operator/v1
This repository contains the following files.
.
├── aws-sqs-test
│   ├── Dockerfile
│   └── message.sh
├── operator
│   ├── application.yml
│   ├── configmap.yml
│   └── workspace.yml
├── main.tf
├── terraform.tfvars.example
└── credentials.example
The operator directory contains the Kubernetes .yml files that you will use to create an HCP Terraform workspace using the Operator.

The aws-sqs-test directory contains the files that build the Docker image that tests the message queue. This is provided as reference only. You will use an image from DockerHub to test the message queue.

The Operator must have access to HCP Terraform and your AWS account. It also needs to run in its own Kubernetes namespace. Below, you will configure the Operator and deploy it into your Kubernetes cluster using a Terraform configuration that we have provided for you.
Configure HCP Terraform access

The Operator must authenticate to HCP Terraform. To do this, you must create an HCP Terraform team API token, then add it as a secret for the Operator to access.
First, sign into your HCP Terraform account, then select "Settings" -> "Teams".
If you are using a free tier, you will only find one team called "owners" that has full access to the API. Click on "owners".
If you are using a paid tier, you must grant a team access to "Manage Workspaces". Remember to click on "Update team organization access" to confirm the organization access.
Click on the API tokens option in the left navigation, and then choose the Team Tokens tab.
Click Create a team token. Under Team, choose your team name and choose an Expiration of 30 days. Click Create.
Click Copy token to copy the token string. Store this token in a secure place as HCP Terraform will not display it again. You will use this token in the next step.
Warning
The Team token has global privileges. Ensure that the Kubernetes cluster using this token has proper role-based access control to limit access to the secret, or store it in a secret manager with access control policies.
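As an illustrative sketch of such role-based access control (the Role name is hypothetical; the namespace and secret name match the ones you create in the next section), a namespaced Role can restrict reads to just the credential secret:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terraformrc-reader   # hypothetical name
  namespace: edu
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["terraformrc"]
    verbs: ["get"]
```

Bind this Role only to the service accounts that genuinely need the token, rather than granting broad secret access across the namespace.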
Copy the contents of credentials.example into a new file named credentials.
$ cp credentials.example credentials
Then, replace TERRAFORM_CLOUD_API_TOKEN with the HCP Terraform team token you previously created.
credentials "app.terraform.io" {
  token = "TERRAFORM_CLOUD_API_TOKEN"
}
Explore Terraform configuration
The main.tf file has Terraform configuration that will deploy the Operator into your Kubernetes cluster. It includes:
A Kubernetes Namespace. This is where you will deploy the Operator, Secrets, and Workspace custom resource.
resource "kubernetes_namespace" "edu" {
  metadata {
    name = "edu"
  }
}
A terraformrc generic secret for your HCP Terraform team token. This is the default secret name the Operator uses for your HCP Terraform credentials. The secret will contain the contents of your credentials file.
resource "kubernetes_secret" "terraformrc" {
  metadata {
    name      = "terraformrc"
    namespace = kubernetes_namespace.edu.metadata[0].name
  }

  data = {
    "credentials" = file("${path.cwd}/credentials")
  }
}
A generic secret named workspacesecrets containing your AWS credentials. In addition to the HCP Terraform team token, HCP Terraform needs your cloud provider credentials to create infrastructure. This configuration adds your credentials to the namespace, which will pass them to HCP Terraform. You will add the credential values as variables in a later step.
resource "kubernetes_secret" "workspacesecrets" {
  metadata {
    name      = "workspacesecrets"
    namespace = kubernetes_namespace.edu.metadata[0].name
  }

  data = {
    "AWS_ACCESS_KEY_ID"     = var.aws_access_key_id
    "AWS_SECRET_ACCESS_KEY" = var.aws_secret_access_key
  }
}
The Operator Helm chart. This is the configuration for the Operator, which depends on the terraformrc and workspacesecrets secrets.
resource "helm_release" "operator" {
  name       = "terraform-operator"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "terraform"
  namespace  = kubernetes_namespace.edu.metadata[0].name

  depends_on = [
    kubernetes_secret.terraformrc,
    kubernetes_secret.workspacesecrets
  ]
}
In order to use this configuration, you need to define the variables that authenticate to the kind cluster and AWS.
Run the following command. It will generate a terraform.tfvars file with your kind cluster configuration.
$ kubectl config view --minify --flatten --context=kind-terraform-learn -o go-template-file=tfvars.gotemplate > terraform.tfvars
Open terraform.tfvars and add your AWS credentials as aws_access_key_id and aws_secret_access_key, respectively.
You should end up with something similar to the following.
host                   = "https://127.0.0.1:32768"
client_certificate     = "LS0tLS1CRUdJTiB..."
client_key             = "LS0tLS1CRUdJTiB..."
cluster_ca_certificate = "LS0tLS1CRUdJTiB..."
aws_access_key_id      = "REDACTED"
aws_secret_access_key  = "REDACTED"
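These values map to input variables declared in the provided Terraform configuration. As a sketch only (the exact declarations live in the repository, not here), the declarations look roughly like:

```hcl
# Illustrative declarations; check the repository for the authoritative versions.
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "aws_access_key_id" {}
variable "aws_secret_access_key" {}
```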
Warning

Do not commit sensitive values into version control. The .gitignore file found in this repository ignores all .tfvars files. Include it in all of your future Terraform repositories.
Now that you have defined the variables, you are ready to create the Kubernetes resources.
Initialize your configuration.

$ terraform init
Apply your configuration. Remember to confirm your apply with a yes.
$ terraform apply
## ...
kubernetes_namespace.edu: Creating...
kubernetes_namespace.edu: Creation complete after 0s [id=edu]
kubernetes_secret.terraformrc: Creating...
kubernetes_secret.workspacesecrets: Creating...
kubernetes_secret.terraformrc: Creation complete after 0s [id=edu/terraformrc]
kubernetes_secret.workspacesecrets: Creation complete after 0s [id=edu/workspacesecrets]
helm_release.operator: Creating...
helm_release.operator: Still creating... [10s elapsed]
helm_release.operator: Creation complete after 14s [id=terraform-operator]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Create an environment variable named NAMESPACE and set it to edu.

$ export NAMESPACE=edu
The Operator runs as a pod in the namespace. Verify the pod is running.
$ kubectl get -n $NAMESPACE pod
NAME READY STATUS RESTARTS AGE
terraform-1613122278-terraform-sync-workspace-5c8695bf59-pgbpm 1/1 Running 0 108s
In addition to deploying the Operator, the Helm chart adds a Workspace custom resource definition to the cluster.
$ kubectl get crds
NAME CREATED AT
workspaces.app.terraform.io 2021-02-12T09:31:19Z
Now you are ready to create infrastructure using the Operator.
First, navigate to the operator directory.

$ cd operator
Open workspace.yml, the workspace specification, and customize it with your HCP Terraform organization name. The workspace specification both creates an HCP Terraform workspace and uses it to deploy your application's required infrastructure.
This workspace specification is equivalent to the following Terraform configuration.
module "queue" {
  source  = "terraform-aws-modules/sqs/aws"
  version = "2.0.0"

  name       = var.name
  fifo_queue = var.fifo_queue
}
You can find the following items in workspace.yml, which you use to apply the Workspace custom resource to a Kubernetes cluster.

The workspace name. The Operator prefixes metadata.name with the Kubernetes namespace, so the HCP Terraform workspace name in this case is edu-greetings.

metadata:
  name: greetings
The organization. Replace ORGANIZATION_NAME with your HCP Terraform organization name.

spec:
  organization: ORGANIZATION_NAME
The secrets mount path, which mounts the workspacesecrets secrets to /tmp/secrets, along with the module source and its variables.

spec:
  organization: ORGANIZATION_NAME
  secretsMountPath: '/tmp/secrets'
  module:
    source: 'terraform-aws-modules/sqs/aws'
    version: '2.0.0'
  variables:
    - key: name
      value: greetings
      sensitive: false
      environmentVariable: false
    - key: AWS_DEFAULT_REGION
      valueFrom:
        configMapKeyRef:
          name: aws-configuration
          key: region
      sensitive: false
      environmentVariable: true
    ## ...
    - key: AWS_ACCESS_KEY_ID
      sensitive: true
      environmentVariable: true
The outputs, which map the module output this_sqs_queue_id to an output named url.

outputs:
  - key: url
    moduleOutputName: this_sqs_queue_id
configmap.yml

In workspace.yml, the AWS_DEFAULT_REGION variable is defined by a ConfigMap named aws-configuration.
Open configmap.yml. Here you will find the specifications for the aws-configuration ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-configuration
data:
  region: us-east-1
Apply the ConfigMap specifications to the namespace.
$ kubectl apply -n $NAMESPACE -f configmap.yml
configmap/aws-configuration created
Then, apply the Workspace specifications to the namespace.
$ kubectl apply -n $NAMESPACE -f workspace.yml
workspace.app.terraform.io/greetings created
Debug the Operator by accessing its logs and checking if the workspace creation ran into any errors.
$ kubectl logs -n $NAMESPACE $(kubectl get pods -n $NAMESPACE --selector "component=sync-workspace" -o jsonpath="{.items[0].metadata.name}")
## ...
{"level":"info","ts":1613124305.9530287,"logger":"terraform-k8s","msg":"Run incomplete","Organization":"hashicorp-training","RunID":"run-xxxxxxxxxxxxxxxx","RunStatus":"applying"}
{"level":"info","ts":1613124306.7574627,"logger":"terraform-k8s","msg":"Checking outputs","Organization":"hashicorp-training","WorkspaceID":"ws-xxxxxxxxxxxxxxxx","RunID":"run-xxxxxxxxxxxxxxxx"}
{"level":"info","ts":1613124307.0337532,"logger":"terraform-k8s","msg":"Updated outputs","Organization":"hashicorp-training","WorkspaceID":"ws-xxxxxxxxxxxxxxxx"}
{"level":"info","ts":1613124307.0339234,"logger":"terraform-k8s","msg":"Updating secrets","name":"greetings-outputs"}
View the Terraform configuration uploaded to HCP Terraform. The Terraform configuration includes the module's source, version, and inputs.
$ kubectl describe -n $NAMESPACE configmap greetings
Name: greetings
Namespace: edu
Labels: <none>
Annotations: <none>
Data
====
terraform:
----
terraform {
  backend "remote" {
    organization = "hashicorp-training"

    workspaces {
      name = "edu-greetings"
    }
  }
}
variable "name" {}
variable "fifo_queue" {}
output "url" {
  value = module.operator.this_sqs_queue_id
}
module "operator" {
  source  = "terraform-aws-modules/sqs/aws"
  version = "2.0.0"

  name       = var.name
  fifo_queue = var.fifo_queue
}
Events: <none>
Check the status of the workspace via kubectl or the HCP Terraform web UI to determine the run status, outputs, and run identifiers.
The Workspace custom resource reflects that the run was applied and updates its corresponding outputs in the status.
$ kubectl describe -n $NAMESPACE workspace greetings
Name: greetings
Namespace: edu
## ...
Status:
  Config Version ID:
  Outputs:
    Key:    url
    Value:  "https://sqs.us-east-1.amazonaws.com/656261198433/greetings"
  Run ID:        run-xxxxxxxxxxxxxxxx
  Run Status:    applied
  Workspace ID:  ws-xxxxxxxxxxxxxxxx
In addition to the workspace status, the Operator creates a Kubernetes Secret containing the outputs of the HCP Terraform workspace. The Secret is named <workspace_name>-outputs.
$ kubectl describe -n $NAMESPACE secrets greetings-outputs
Name: greetings-outputs
Namespace: edu
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
url: 60 bytes
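Kubernetes stores Secret data base64-encoded, so reading the queue URL means combining kubectl's jsonpath output with base64 --decode. The decode step is demonstrated below against a placeholder value (the sample account ID and URL are not from your workspace):

```shell
# Against the live cluster, you would run:
#   kubectl get secret -n $NAMESPACE greetings-outputs -o jsonpath="{.data.url}" | base64 --decode
# The decode step itself, shown with a placeholder value:
encoded=$(printf '"https://sqs.us-east-1.amazonaws.com/123456789012/greetings"' | base64)
printf '%s' "$encoded" | base64 --decode
```

Note that the decoded value keeps the double quotes that HCP Terraform wraps around string outputs, which is why the test script later strips them.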
Verify message queue
Now that you have deployed the queue, you will send and receive messages on it.

The application.yml file contains a spec that runs a containerized application in your kind cluster. The app calls a script called message.sh, which sends and receives messages from the queue using the same AWS credentials that the Operator used.
To give the script access to the queue's location, the application.yml spec creates a new environment variable named QUEUE_URL and sets it to the Kubernetes Secret containing the queue URL from the HCP Terraform workspace output.
- name: QUEUE_URL
  valueFrom:
    secretKeyRef:
      name: greetings-outputs
      key: url
Tip

If you mount the Secret as a volume, rather than projecting it as an environment variable, you can update that Secret without redeploying the app.
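A sketch of that alternative, mounting greetings-outputs as a read-only volume (the volume name and mount path here are illustrative, not part of the tutorial's application.yml):

```yaml
spec:
  containers:
    - name: greetings
      volumeMounts:
        - name: queue-outputs   # illustrative volume name
          mountPath: /etc/queue # illustrative path; url appears as /etc/queue/url
          readOnly: true
  volumes:
    - name: queue-outputs
      secret:
        secretName: greetings-outputs
```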
Open aws-sqs-test/message.sh. This Bash script tests the message queue. To access the queue, it creates environment variables with your AWS credentials and the queue URL. Since HCP Terraform outputs from the Kubernetes Secret contain double quotes, the script strips the double quotes from the output (QUEUE_URL) to ensure it works as expected.
## ...
export SQS_URL=$(eval echo $QUEUE_URL | sed 's/"//g')
## ...
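You can verify this quote-stripping behavior locally with a placeholder value (the URL below is illustrative):

```shell
# Simulate the projected environment variable, double quotes included.
QUEUE_URL='"https://sqs.us-east-1.amazonaws.com/123456789012/greetings"'
# Strip the quotes the same way message.sh does.
export SQS_URL=$(eval echo $QUEUE_URL | sed 's/"//g')
echo "$SQS_URL"
```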
Deploy the job and examine the logs from the pod associated with the job.
$ kubectl apply -n $NAMESPACE -f application.yml
job.batch/greetings created
View the job's logs.
$ kubectl logs -n $NAMESPACE $(kubectl get pods -n $NAMESPACE --selector "app=greetings" -o jsonpath="{.items[0].metadata.name}")
https://sqs.us-east-1.amazonaws.com/REDACTED/greetings.fifo
sending a sdfgsdf message to queue https://sqs.us-east-1.amazonaws.com/REDACTED/greetings.fifo
{
"MD5OfMessageBody": "fc3ff98e8c6a0d3087d515c0473f8677",
"SequenceNumber": "xxxxxxxxxxxxxxxx",
"MessageId": "xxxxxxxxxxxxxxxx"
}
reading a message from queue https://sqs.us-east-1.amazonaws.com/656261198433/greetings.fifo
{
"Messages": [
{
"Body": "hello world!",
"ReceiptHandle": "xxxxxxxxxxxxxxxx",
"MD5OfBody": "fc3ff98e8c6a0d3087d515c0473f8677",
"MessageId": "xxxxxxxxxxxxxxxx"
}
]
}
Once your infrastructure is running, you can use the Operator to modify it. Update the workspace.yml file to change the queue's name and the type of the queue from FIFO to standard.
# workspace.yml
apiVersion: app.terraform.io/v1alpha1
kind: Workspace
metadata:
  name: greetings
spec:
  ## ...
  variables:
    - key: name
-     value: greetings.fifo
+     value: greetings
    - key: fifo_queue
-     value: "true"
+     value: "false"
  ## ...
Changing inline, non-sensitive variables, module source, and module version in the Kubernetes Workspace custom resource will trigger a new run in the HCP Terraform workspace. Changing sensitive variables or variables with ConfigMap references will not trigger updates or runs in HCP Terraform.
Apply the updated workspace configuration. The Terraform Operator retrieves the configuration update, pushes it to HCP Terraform, and executes a run.
$ kubectl apply -n $NAMESPACE -f workspace.yml
workspace.app.terraform.io/greetings configured
Examine the run for the workspace in the HCP Terraform UI. The plan indicates that HCP Terraform replaced the queue.
You can audit updates to the workspace from the Operator through HCP Terraform, which maintains a history of runs and the current state.
Now that you have created and modified an HCP Terraform workspace using the Operator, delete the workspace.
Delete workspace

Delete the Workspace custom resource.
$ kubectl delete -n $NAMESPACE workspace greetings
workspace.app.terraform.io "greetings" deleted
You may notice that the command hangs for a few minutes. This is because the Operator executes a finalizer, a pre-delete hook. It executes a terraform destroy on workspace resources and deletes the workspace in HCP Terraform.
Once the finalizer completes, Kubernetes will delete the Workspace custom resource.
Delete resources and kind cluster

Navigate back to the v1 directory.

$ cd ..
Destroy the namespace, the secrets, and the Operator. Remember to confirm the destroy with a yes.

$ terraform destroy
Finally, delete the kind cluster.
$ kind delete cluster --name terraform-learn
Deleting cluster "terraform-learn" ...
Congrats! You have configured and deployed the Operator to a Kubernetes namespace, explored the Workspace specification, and created a Terraform workspace using the Operator. In doing so, you deployed a message queue from kubectl. This pattern can extend to other application infrastructure, such as DNS servers, databases, and identity and access management rules.
Visit the following resources to learn more about the HCP Terraform Operator for Kubernetes.