Backend Type: s3 | Terraform

Stores the state as a given key in a given bucket on Amazon S3. This backend also supports state locking which can be enabled by setting the use_lockfile argument to true.

Warning! It is highly recommended that you enable Bucket Versioning on the S3 bucket to allow for state recovery in the case of accidental deletions and human error.

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

This example assumes that a bucket named mybucket already exists. The Terraform state is written to the key path/to/my/key.

Note that for the access credentials we recommend using a partial configuration.

The S3 backend stores state data in an S3 object at the path set by the key parameter in the S3 bucket indicated by the bucket parameter. Using the example shown above, the state would be stored at the path path/to/my/key in the bucket mybucket.

When using workspaces, the state for the default workspace is stored at the location described above. Other workspaces are stored using the path <workspace_key_prefix>/<workspace_name>/<key>. The default workspace key prefix is env: and it can be configured using the parameter workspace_key_prefix. Using the example above, the state for the workspace development would be stored at the path env:/development/path/to/my/key.

State Locking

State locking is an opt-in feature of the S3 backend.

Locking can be enabled via S3 or DynamoDB. However, DynamoDB-based locking is deprecated and will be removed in a future minor version. To support migration from older versions of Terraform that only support DynamoDB-based locking, the S3 and DynamoDB arguments can be configured simultaneously.

Enabling S3 State Locking

To enable S3 state locking, set the optional use_lockfile argument to true.
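For example (bucket and key names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    # Opt in to S3-native state locking. While the lock is held, Terraform
    # maintains a <key>.tflock object alongside the state object.
    use_lockfile = true
  }
}
```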

Enabling DynamoDB State Locking (Deprecated)

To enable DynamoDB state locking, configure the optional dynamodb_table argument.
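A sketch of the deprecated DynamoDB-based configuration, assuming the table already exists (the table name here is illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    # Deprecated: DynamoDB-based locking. The table must already exist and
    # have a partition key named LockID of type string.
    dynamodb_table = "mytable"
  }
}
```

During a migration from older Terraform versions, use_lockfile and dynamodb_table can be set simultaneously, as noted above.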

S3 Bucket Permissions

When not using workspaces (or when only using the default workspace), Terraform will need the following AWS IAM permissions on the target backend bucket:

Note: If use_lockfile is set, s3:GetObject, s3:PutObject, and s3:DeleteObject are required on the lock file, e.g., arn:aws:s3:::mybucket/path/to/my/key.tflock.

Note: s3:DeleteObject is not required on the state file, as Terraform does not delete it.

This is seen in the following AWS IAM Statement:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {
        "StringEquals": {
          "s3:prefix": "mybucket/path/to/my/key"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::mybucket/path/to/my/key"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::mybucket/path/to/my/key.tflock"
      ]
    }
  ]
}

When using workspaces, Terraform will also need permissions to create, list, read, update, and delete the workspace state file:

Note: If use_lockfile is set, s3:GetObject, s3:PutObject, and s3:DeleteObject are required on the lock file, e.g., arn:aws:s3:::mybucket/<workspace_key_prefix>/*/path/to/my/key.tflock.
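Under the workspace layout described earlier (<workspace_key_prefix>/<workspace_name>/<key>), the additional object permissions might be granted with a statement like the following, wildcarding the workspace name; the paths follow the earlier example and would need adjusting to your own prefix and key:

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
  "Resource": [
    "arn:aws:s3:::mybucket/env:/*/path/to/my/key"
  ]
}
```

s3:DeleteObject is included here because, unlike the default workspace state, workspace state objects are deleted when a workspace is deleted.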

Note: AWS can control access to S3 buckets with either IAM policies attached to users/groups/roles (like the example above) or resource policies attached to bucket objects (which look similar but also require a Principal to indicate which entity has those permissions). For more details, see Amazon's documentation about S3 access control.

DynamoDB Table Permissions

If you are using the deprecated DynamoDB-based locking mechanism, Terraform will need the following AWS IAM permissions on the DynamoDB table (arn:aws:dynamodb:*:*:table/mytable):

This is seen in the following AWS IAM Statement:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}

To make use of the S3 remote state in another configuration, use the terraform_remote_state data source.

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

The terraform_remote_state data source will return all of the root module outputs defined in the referenced remote state (but not any outputs from nested modules unless they are explicitly output again in the root). An example output might look like:

data.terraform_remote_state.network:
  id = 2016-10-29 01:57:59.780010914 +0000 UTC
  addresses.# = 2
  addresses.0 = 52.207.220.222
  addresses.1 = 54.196.78.166
  backend = s3
  config.% = 3
  config.bucket = terraform-state-prod
  config.key = network/terraform.tfstate
  config.region = us-east-1
  elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
  public_subnet_id = subnet-1e05dd33
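Individual values are read through the data source's outputs attribute. For example, the public_subnet_id output above could be consumed as follows (the aws_instance resource is illustrative):

```hcl
resource "aws_instance" "app" {
  # Read a root module output from the referenced remote state. The
  # `outputs` attribute exposes all outputs of the "network" configuration.
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_id

  # ... other instance arguments ...
}
```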

This backend requires the configuration of the AWS Region and S3 state storage. Other configuration, such as enabling state locking, is optional.

Warning: We recommend using environment variables to supply credentials and other sensitive data. If you use -backend-config or hardcode these values directly in your configuration, Terraform will include these values in both the .terraform subdirectory and in plan files. Refer to Credentials and Sensitive Data for details.

The following configuration is required:

The following configuration is optional:

Overriding AWS API endpoints

The optional argument endpoints contains the following arguments:

The environment variable AWS_ENDPOINT_URL can be used to set a base endpoint URL for all services.

Endpoints can also be overridden using the AWS shared configuration file. Setting the parameter endpoint_url on a profile will set that endpoint for all services. To set endpoints for specific services, create a services section and set the endpoint_url parameters for each desired service. Endpoints set for specific services will override the base endpoint configured in the profile.
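As a sketch, overriding the S3 endpoint directly in the backend configuration might look like the following, which is useful with S3-compatible providers or local emulators (the URL is illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    endpoints = {
      # Send S3 API calls to a custom endpoint instead of the AWS default.
      s3 = "https://s3.example.internal"
    }
  }
}
```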

Assume Role Configuration

The argument assume_role contains the following arguments:

Multiple assume_role values can be specified, and the roles will be assumed in order.

terraform {
  backend "s3" {
    bucket = "example-bucket"
    key    = "path/to/state"
    region = "us-east-1"
    assume_role = {
      role_arn = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
    }
  }
}

Assume Role With Web Identity Configuration

The following assume_role_with_web_identity configuration block is optional:

terraform {
  backend "s3" {
    bucket = "example-bucket"
    key    = "path/to/state"
    region = "us-east-1"
    assume_role_with_web_identity = {
      role_arn           = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
      web_identity_token = "<token value>"
    }
  }
}

S3 State Storage

The following configuration is required:

The following configuration is optional:

A common architectural pattern is for an organization to use a number of separate AWS accounts to isolate different teams and environments. For example, a "staging" system will often be deployed into a separate AWS account than its corresponding "production" system, to minimize the risk of the staging environment affecting production infrastructure, whether via rate limiting, misconfigured access controls, or other unintended interactions.

The S3 backend can be used in a number of different ways that make different tradeoffs between convenience, security, and isolation in such an organization. This section describes one such approach that aims to find a good compromise between these tradeoffs, allowing use of Terraform's workspaces feature to switch conveniently between multiple isolated deployments of the same configuration.

Use this section as a starting point for your approach, but note that you will probably need to make adjustments for the unique standards and regulations that apply to your organization. You will also need to make some adjustments to this approach to account for existing practices within your organization, if for example other tools have previously been used to manage infrastructure.

Terraform is an administrative tool that manages your infrastructure, and so ideally the infrastructure that is used by Terraform should exist outside of the infrastructure that Terraform manages. This can be achieved by creating a separate administrative AWS account which contains the user accounts used by human operators and any infrastructure and tools used to manage the other accounts. Isolating shared administrative tools from your main environments has a number of advantages, such as avoiding accidentally damaging the administrative infrastructure while changing the target infrastructure, and reducing the risk that an attacker might abuse production infrastructure to gain access to the (usually more privileged) administrative infrastructure.

Administrative Account Setup

Your administrative AWS account will contain at least the following items:

Provide the S3 bucket name to Terraform in the S3 backend configuration using the bucket argument. Set use_lockfile to true to enable state locking. Configure a suitable workspace_key_prefix to manage states of workspaces that will be created for this configuration.
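Putting those arguments together, the backend configuration in the administrative account might look like this (bucket, key, and prefix names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket = "myorg-terraform-states"
    key    = "myapp/terraform.tfstate"
    region = "us-east-1"

    # S3-native state locking, as described under State Locking above.
    use_lockfile = true

    # Workspace states are stored under <workspace_key_prefix>/<workspace>/<key>.
    workspace_key_prefix = "myapp-env"
  }
}
```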

Environment Account Setup

For the sake of this section, the term "environment account" refers to one of the accounts whose contents are managed by Terraform, separate from the administrative account described above.

Your environment accounts will eventually contain your own product-specific infrastructure. Along with this, each must contain one or more IAM roles that grant sufficient access for Terraform to perform the desired management tasks.

Delegating Access

Each Administrator will run Terraform using credentials for their IAM user in the administrative account. IAM Role Delegation is used to grant these users access to the roles created in each environment account.

Full details on role delegation are covered in the AWS documentation linked above. The most important details are:

Since the purpose of the administrative account is only to host tools for managing other accounts, it is useful to give the administrative accounts restricted access only to the specific operations needed to assume the environment account role and access the Terraform state. By blocking all other access, you remove the risk that user error will lead to staging or production resources being created in the administrative account by mistake.

When configuring Terraform, use either environment variables or the standard credentials file ~/.aws/credentials to provide the administrator user's IAM credentials within the administrative account to both the S3 backend and to Terraform's AWS provider.

Use conditional configuration to pass a different assume_role value to the AWS provider depending on the selected workspace. For example:

variable "workspace_iam_roles" {
  default = {
    staging    = "arn:aws:iam::STAGING-ACCOUNT-ID:role/Terraform"
    production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
  }
}

provider "aws" {
  # No credentials explicitly set here because they come from either the
  # environment or the global credentials file.

  assume_role = {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}

If workspace IAM roles are centrally managed and shared across many separate Terraform configurations, the role ARNs could also be obtained via a data source such as terraform_remote_state to avoid repeating these values.
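For example, if an administrative configuration exported the role map as an output, it could be consumed like this (the admin state location and the workspace_iam_roles output name are hypothetical):

```hcl
data "terraform_remote_state" "admin" {
  backend = "s3"
  config = {
    bucket = "myorg-terraform-states" # hypothetical admin state bucket
    key    = "admin/terraform.tfstate" # hypothetical key
    region = "us-east-1"
  }
}

provider "aws" {
  assume_role = {
    # Look up the role ARN for the current workspace from the shared state.
    role_arn = data.terraform_remote_state.admin.outputs.workspace_iam_roles[terraform.workspace]
  }
}
```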

Creating and Selecting Workspaces

With the necessary objects created and the backend configured, run terraform init to initialize the backend and establish an initial workspace called "default". This workspace will not be used, but is created automatically by Terraform as a convenience for users who are not using the workspaces feature.

Create a workspace corresponding to each key given in the workspace_iam_roles variable value above:

$ terraform workspace new staging
Created and switched to workspace "staging"!
 
...
 
$ terraform workspace new production
Created and switched to workspace "production"!
 
...

Due to the assume_role setting in the AWS provider configuration, any management operations for AWS resources will be performed via the configured role in the appropriate environment AWS account. The backend operations, such as reading and writing the state from S3, will be performed directly as the administrator's own user within the administrative account.

$ terraform workspace select staging
$ terraform apply
...

Running Terraform in Amazon EC2

Teams that make extensive use of Terraform for infrastructure management often run Terraform in automation to ensure a consistent operating environment and to limit access to the various secrets and other sensitive information that Terraform configurations tend to require.

When running Terraform in an automation tool running on an Amazon EC2 instance, consider running this instance in the administrative account and using an instance profile in place of the various administrator IAM users suggested above. An IAM instance profile can also be granted cross-account delegation access via an IAM policy, giving this instance the access it needs to run Terraform.

To isolate access to different environment accounts, use a separate EC2 instance for each target account so that its access can be limited only to the single account.

Similar approaches can be taken with equivalent features in other AWS compute services, such as ECS.

Protecting Access to Workspace State

In a simple implementation of the pattern described earlier, all users can read and write states for all workspaces. In many cases, it is desirable to apply more precise access controls to the Terraform state objects stored in S3; for example, only trusted administrators should be allowed to modify the production state. It is also important to control who can read the state file. If state locking is enabled, the lock file (<key>.tflock) must also be included in the access controls.

Amazon S3 supports fine-grained access control on a per-object-path basis using IAM policy. A full description of S3's access control mechanism is beyond the scope of this guide, but an example IAM policy granting access to only a single state object within an S3 bucket is shown below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {
        "StringEquals": {
          "s3:prefix": "path/to/state"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket/path/to/state"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::example-bucket/path/to/state.tflock"
      ]
    }
  ]
}

The example backend configuration below shows the corresponding bucket, key, and use_lockfile arguments:

terraform {
  backend "s3" {
    bucket       = "example-bucket"
    key          = "path/to/state"
    use_lockfile = true
    region       = "us-east-1"
  }
}

Refer to the AWS documentation on S3 access control for more details.

Configuring Custom User-Agent Information

Note this feature is optional and only available in Terraform v0.13.1+.

By default, the underlying AWS client used by the Terraform AWS Provider creates requests with User-Agent headers that include information about the Terraform and AWS Go SDK versions. To provide additional information in the User-Agent headers, set the TF_APPEND_USER_AGENT environment variable; its value will be appended directly to the headers of HTTP requests. For example:

$ export TF_APPEND_USER_AGENT="JenkinsAgent/i-12345678 BuildID/1234 (Optional Extra Information)"

Support for S3-compatible storage providers is offered on a "best effort" basis. HashiCorp only tests the s3 backend against Amazon S3 and therefore cannot offer any guarantees when using an alternative provider.
