Stores the state as a given key in a given bucket on Amazon S3. This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the bucket and key variables.
Warning! It is highly recommended that you enable Bucket Versioning on the S3 bucket to allow for state recovery in case of accidental deletion or other human error.
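If you manage the state bucket itself with Terraform (ideally from a separate configuration, since the backend's own bucket should not live in the state it stores), versioning can be enabled with the AWS provider. A minimal sketch, with hypothetical resource names:

```hcl
# Hypothetical example: the bucket that will hold remote state.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "mybucket"
}

# Enable versioning so earlier state revisions can be recovered
# after accidental deletion or corruption.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}
```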
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
This assumes we have a bucket created called mybucket. The Terraform state is written to the key path/to/my/key.
Note that for the access credentials we recommend using a partial configuration.
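As a sketch of a partial configuration, the backend block can be left empty and the remaining settings supplied at init time. (Note the warning later in this document: values passed via -backend-config are persisted to the .terraform subdirectory and plan files, so credentials themselves are still better supplied via environment variables.)

```hcl
# main.tf -- backend block intentionally left empty;
# bucket, key, and region are supplied during "terraform init".
terraform {
  backend "s3" {}
}
```

The remaining settings can then live in a separate file, for example a hypothetical `backend.hcl` containing `bucket`, `key`, and `region` assignments, passed in with `terraform init -backend-config=backend.hcl`.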
S3 Bucket Permissions

Terraform will need the following AWS IAM permissions on the target backend bucket:

s3:ListBucket on arn:aws:s3:::mybucket
s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key
s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key
s3:DeleteObject on arn:aws:s3:::mybucket/path/to/my/key

This is seen in the following AWS IAM statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}
Note: AWS can control access to S3 buckets with either IAM policies attached to users/groups/roles (like the example above) or resource policies attached to bucket objects (which look similar but also require a Principal to indicate which entity has those permissions). For more details, see Amazon's documentation about S3 access control.
If you are using state locking, Terraform will need the following AWS IAM permissions on the DynamoDB table (arn:aws:dynamodb:*:*:table/mytable):

dynamodb:DescribeTable
dynamodb:GetItem
dynamodb:PutItem
dynamodb:DeleteItem

This is seen in the following AWS IAM statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}
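If you also manage the lock table with Terraform (again, ideally from a separate configuration), a table with the required LockID string partition key can be sketched as follows; the table name and billing mode here are assumptions:

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "mytable"
  billing_mode = "PAY_PER_REQUEST"

  # The S3 backend requires a partition key named LockID of type String.
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```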
To make use of the S3 remote state in another configuration, use the terraform_remote_state data source.
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}
The terraform_remote_state data source will return all of the root module outputs defined in the referenced remote state (but not any outputs from nested modules unless they are explicitly output again in the root). An example output might look like:
data.terraform_remote_state.network:
  id = 2016-10-29 01:57:59.780010914 +0000 UTC
  addresses.# = 2
  addresses.0 = 52.207.220.222
  addresses.1 = 54.196.78.166
  backend = s3
  config.% = 3
  config.bucket = terraform-state-prod
  config.key = network/terraform.tfstate
  config.region = us-east-1
  elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
  public_subnet_id = subnet-1e05dd33
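Those root module outputs are exposed under the outputs attribute of the data source, so another configuration might consume them like this (the aws_instance resource is purely illustrative):

```hcl
resource "aws_instance" "web" {
  # ... other arguments omitted ...

  # Reference an output recorded in the remote state shown above.
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_id
}
```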
This backend requires the configuration of the AWS Region and S3 state storage. Other configuration, such as enabling DynamoDB state locking, is optional.
Warning: We recommend using environment variables to supply credentials and other sensitive data. If you use -backend-config or hardcode these values directly in your configuration, Terraform will include these values in both the .terraform subdirectory and in plan files. Refer to Credentials and Sensitive Data for details.
The following configuration is required:

region - (Required) AWS Region of the S3 Bucket and DynamoDB Table (if used). This can also be sourced from the AWS_DEFAULT_REGION and AWS_REGION environment variables.

The following configuration is optional:
access_key - (Optional) AWS access key. If configured, must also configure secret_key. This can also be sourced from the AWS_ACCESS_KEY_ID environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).

allowed_account_ids - (Optional) List of allowed AWS account IDs to prevent potential destruction of a live environment. Conflicts with forbidden_account_ids.

custom_ca_bundle - (Optional) File containing custom root and intermediate certificates. Can also be set using the AWS_CA_BUNDLE environment variable. Setting ca_bundle in the shared config file is not supported.

ec2_metadata_service_endpoint - (Optional) Custom endpoint URL for the EC2 Instance Metadata Service (IMDS) API. Can also be set with the AWS_EC2_METADATA_SERVICE_ENDPOINT environment variable.

ec2_metadata_service_endpoint_mode - (Optional) Mode to use in communicating with the metadata service. Valid values are IPv4 and IPv6. Can also be set with the AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE environment variable.

forbidden_account_ids - (Optional) List of forbidden AWS account IDs to prevent potential destruction of a live environment. Conflicts with allowed_account_ids.

http_proxy - (Optional) URL of a proxy to use for HTTP requests when accessing the AWS API. Can also be set using the HTTP_PROXY or http_proxy environment variables.

https_proxy - (Optional) URL of a proxy to use for HTTPS requests when accessing the AWS API. Can also be set using the HTTPS_PROXY or https_proxy environment variables.

iam_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS Identity and Access Management (IAM) API. Use endpoints.iam instead.

insecure - (Optional) Whether to explicitly allow the backend to perform "insecure" SSL requests. If omitted, the default value is false.

no_proxy - (Optional) Comma-separated list of hosts that should not use HTTP or HTTPS proxies. Each value can be a domain name, an IP address, or an asterisk (*) to indicate that no proxying should be performed. Domain name and IP address values can also include a port number. Can also be set using the NO_PROXY or no_proxy environment variables.

max_retries - (Optional) The maximum number of times an AWS API request is retried on retryable failure. Defaults to 5.

profile - (Optional) Name of AWS profile in AWS shared credentials file (e.g. ~/.aws/credentials) or AWS shared configuration file (e.g. ~/.aws/config) to use for credentials and/or configuration. This can also be sourced from the AWS_PROFILE environment variable.

retry_mode - (Optional) Specifies how retries are attempted. Valid values are standard and adaptive. Can also be configured using the AWS_RETRY_MODE environment variable or the shared config file parameter retry_mode.

secret_key - (Optional) AWS secret key. If configured, must also configure access_key. This can also be sourced from the AWS_SECRET_ACCESS_KEY environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).

shared_config_files - (Optional) List of paths to AWS shared configuration files. Defaults to ~/.aws/config.

shared_credentials_file - (Optional, Deprecated, use shared_credentials_files instead) Path to the AWS shared credentials file. Defaults to ~/.aws/credentials.

shared_credentials_files - (Optional) List of paths to AWS shared credentials files. Defaults to ~/.aws/credentials.

skip_credentials_validation - (Optional) Skip credentials validation via the STS API. Useful for testing and for AWS API implementations that do not have STS available.

skip_region_validation - (Optional) Skip validation of the provided region name.

skip_requesting_account_id - (Optional) Whether to skip requesting the account ID. Useful for AWS API implementations that do not have the IAM, STS, or metadata APIs.

skip_metadata_api_check - (Optional) Skip usage of the EC2 Metadata API.

skip_s3_checksum - (Optional) Do not include a checksum when uploading S3 objects. Useful for some S3-compatible APIs.

sts_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS Security Token Service (STS) API. Use endpoints.sts instead.

sts_region - (Optional) AWS region for STS. If unset, AWS will use the same region for STS as other non-STS operations.

token - (Optional) Multi-Factor Authentication (MFA) token. This can also be sourced from the AWS_SESSION_TOKEN environment variable.

use_dualstack_endpoint - (Optional) Force the backend to resolve endpoints with DualStack capability. Can also be set with the AWS_USE_DUALSTACK_ENDPOINT environment variable or in a shared config file (use_dualstack_endpoint).

use_fips_endpoint - (Optional) Force the backend to resolve endpoints with FIPS capability. Can also be set with the AWS_USE_FIPS_ENDPOINT environment variable or in a shared config file (use_fips_endpoint).

use_legacy_workflow - (Optional) Use the legacy authentication workflow, preferring environment variables over backend configuration. Defaults to true. This behavior does not align with the authentication flow of the AWS CLI or SDKs, and will be removed in the future.

The optional argument endpoints contains the following arguments:
dynamodb - (Optional) Custom endpoint URL for the AWS DynamoDB API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_DYNAMODB or the deprecated environment variable AWS_DYNAMODB_ENDPOINT.

iam - (Optional) Custom endpoint URL for the AWS IAM API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_IAM or the deprecated environment variable AWS_IAM_ENDPOINT.

s3 - (Optional) Custom endpoint URL for the AWS S3 API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_S3 or the deprecated environment variable AWS_S3_ENDPOINT.

sso - (Optional) Custom endpoint URL for the AWS IAM Identity Center (formerly known as AWS SSO) API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_SSO.

sts - (Optional) Custom endpoint URL for the AWS STS API. This can also be sourced from the environment variable AWS_ENDPOINT_URL_STS or the deprecated environment variable AWS_STS_ENDPOINT.

The environment variable AWS_ENDPOINT_URL can be used to set a base endpoint URL for all services.

Endpoints can also be overridden using the AWS shared configuration file. Setting the parameter endpoint_url on a profile will set that endpoint for all services. To set endpoints for specific services, create a services section and set the endpoint_url parameter for each desired service. Endpoints set for specific services will override the base endpoint configured in the profile.
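As a sketch, a shared configuration file (~/.aws/config) with per-service endpoint overrides might look like the following. The profile name, section name, and localhost endpoint are assumptions for illustration; the syntax follows the AWS SDKs' service-specific endpoint configuration:

```ini
[profile localstack]
services = localstack-services

[services localstack-services]
s3 =
  endpoint_url = http://localhost:4566
dynamodb =
  endpoint_url = http://localhost:4566
```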
Assuming an IAM Role can be configured in two ways. The preferred way is to use the argument assume_role; the other, which is deprecated, is to use arguments at the top level.

The argument assume_role contains the following arguments:

role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to assume.

duration - (Optional) The duration individual credentials will be valid. Credentials are automatically renewed up to the maximum defined by the AWS account. Specified using the format <hours>h<minutes>m<seconds>s with any unit being optional. For example, an hour and a half can be specified as 1h30m or 90m. Must be between 15 minutes (15m) and 12 hours (12h).

external_id - (Optional) External identifier to use when assuming the role.

policy - (Optional) IAM policy JSON describing further restrictions on the permissions for the IAM Role being assumed.

policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM policies describing further restrictions on the permissions for the IAM Role being assumed.

session_name - (Optional) Session name to use when assuming the role.

source_identity - (Optional) Source identity specified by the principal assuming the role.

tags - (Optional) Map of assume role session tags.

transitive_tag_keys - (Optional) Set of assume role session tag keys to pass to any subsequent sessions.

The following arguments at the top level are deprecated:

assume_role_duration_seconds - (Optional) Number of seconds to restrict the assume role session duration. Use assume_role.duration instead.

assume_role_policy - (Optional) IAM policy JSON describing further restrictions on the permissions for the IAM Role being assumed. Use assume_role.policy instead.

assume_role_policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM policies describing further restrictions on the permissions for the IAM Role being assumed. Use assume_role.policy_arns instead.

assume_role_tags - (Optional) Map of assume role session tags. Use assume_role.tags instead.

assume_role_transitive_tag_keys - (Optional) Set of assume role session tag keys to pass to any subsequent sessions. Use assume_role.transitive_tag_keys instead.

external_id - (Optional) External identifier to use when assuming the role. Use assume_role.external_id instead.

role_arn - (Optional) Amazon Resource Name (ARN) of the IAM Role to assume. Use assume_role.role_arn instead.

session_name - (Optional) Session name to use when assuming the role. Use assume_role.session_name instead.

terraform {
  backend "s3" {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"

    assume_role = {
      role_arn = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
    }
  }
}
Assume Role With Web Identity Configuration

The following assume_role_with_web_identity configuration block is optional:

role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to assume. Can also be set with the AWS_ROLE_ARN environment variable.

duration - (Optional) The duration individual credentials will be valid. Credentials are automatically renewed up to the maximum defined by the AWS account. Specified using the format <hours>h<minutes>m<seconds>s with any unit being optional. For example, an hour and a half can be specified as 1h30m or 90m. Must be between 15 minutes (15m) and 12 hours (12h).

policy - (Optional) IAM policy JSON describing further restrictions on the permissions for the IAM Role being assumed.

policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM policies describing further restrictions on the permissions for the IAM Role being assumed.

session_name - (Optional) Session name to use when assuming the role. Can also be set with the AWS_ROLE_SESSION_NAME environment variable.

web_identity_token - (Optional) The value of a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token or web_identity_token_file is required.

web_identity_token_file - (Optional) File containing a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token_file or web_identity_token is required. Can also be set with the AWS_WEB_IDENTITY_TOKEN_FILE environment variable.

terraform {
  backend "s3" {
    bucket = "terraform-state-prod"
    key    = "network/terraform.tfstate"
    region = "us-east-1"

    assume_role_with_web_identity = {
      role_arn           = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
      web_identity_token = "<token value>"
    }
  }
}
S3 State Storage

The following configuration is required:

bucket - (Required) Name of the S3 Bucket.

key - (Required) Path to the state file inside the S3 Bucket. When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).

The following configuration is optional:

acl - (Optional) Canned ACL to be applied to the state file.

encrypt - (Optional) Enable server-side encryption of the state file.

endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS S3 API. Use endpoints.s3 instead.

force_path_style - (Optional, Deprecated) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>).

kms_key_id - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. Note that if this value is specified, Terraform will need kms:Encrypt, kms:Decrypt and kms:GenerateDataKey permissions on this KMS key.

sse_customer_key - (Optional) The key to use for encrypting state with Server-Side Encryption with Customer-Provided Keys (SSE-C). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the AWS_SSE_CUSTOMER_KEY environment variable, which is recommended due to the sensitivity of the value. Setting it inside a Terraform file will cause it to be persisted to disk in terraform.tfstate.

use_path_style - (Optional) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>).

workspace_key_prefix - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to env:.

DynamoDB State Locking

The following configuration is optional:

dynamodb_endpoint - (Optional, Deprecated) Custom endpoint URL for the AWS DynamoDB API. Use endpoints.dynamodb instead.

dynamodb_table - (Optional) Name of the DynamoDB Table to use for state locking and consistency. The table must have a partition key named LockID with a type of String. If not configured, state locking will be disabled.

Multi-account AWS Architecture

A common architectural pattern is for an organization to use a number of separate AWS accounts to isolate different teams and environments. For example, a "staging" system will often be deployed into a separate AWS account from its corresponding "production" system, to minimize the risk of the staging environment affecting production infrastructure, whether via rate limiting, misconfigured access controls, or other unintended interactions.
The S3 backend can be used in a number of different ways that make different tradeoffs between convenience, security, and isolation in such an organization. This section describes one such approach that aims to find a good compromise between these tradeoffs, allowing use of Terraform's workspaces feature to switch conveniently between multiple isolated deployments of the same configuration.
Use this section as a starting point for your approach, but note that you will probably need to make adjustments for the unique standards and regulations that apply to your organization. You will also need to make some adjustments to this approach to account for existing practices within your organization, if for example other tools have previously been used to manage infrastructure.
Terraform is an administrative tool that manages your infrastructure, and so ideally the infrastructure that is used by Terraform should exist outside of the infrastructure that Terraform manages. This can be achieved by creating a separate administrative AWS account which contains the user accounts used by human operators and any infrastructure and tools used to manage the other accounts. Isolating shared administrative tools from your main environments has a number of advantages, such as avoiding accidentally damaging the administrative infrastructure while changing the target infrastructure, and reducing the risk that an attacker might abuse production infrastructure to gain access to the (usually more privileged) administrative infrastructure.
Administrative Account Setup

Your administrative AWS account will contain, at minimum, an S3 bucket for state storage and a DynamoDB table for state locking, along with the IAM users belonging to your human operators.
Provide the S3 bucket name and DynamoDB table name to Terraform within the S3 backend configuration using the bucket and dynamodb_table arguments respectively, and configure a suitable workspace_key_prefix to contain the states of the various workspaces that will subsequently be created for this configuration.
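A sketch of such a backend configuration, with hypothetical bucket, table, and prefix names:

```hcl
terraform {
  backend "s3" {
    bucket               = "myorg-terraform-states"
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "terraform-locks"
    workspace_key_prefix = "myapp"
  }
}
```

With this configuration, the state for a workspace named staging would be stored at myapp/staging/terraform.tfstate within the bucket, following the /workspace_key_prefix/workspace_name/key pattern described above.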
For the sake of this section, the term "environment account" refers to one of the accounts whose contents are managed by Terraform, separate from the administrative account described above.
Your environment accounts will eventually contain your own product-specific infrastructure. Along with this, each must contain one or more IAM roles that grant sufficient access for Terraform to perform the desired management tasks.
Delegating Access

Each administrator will run Terraform using credentials for their IAM user in the administrative account. IAM Role Delegation is used to grant these users access to the roles created in each environment account.

Full details on role delegation are covered in the AWS documentation linked above.
Since the purpose of the administrative account is only to host tools for managing other accounts, it is useful to give the administrative accounts restricted access only to the specific operations needed to assume the environment account role and access the Terraform state. By blocking all other access, you remove the risk that user error will lead to staging or production resources being created in the administrative account by mistake.
When configuring Terraform, use either environment variables or the standard credentials file ~/.aws/credentials to provide the administrator user's IAM credentials within the administrative account to both the S3 backend and to Terraform's AWS provider.

Use conditional configuration to pass a different assume_role value to the AWS provider depending on the selected workspace. For example:
variable "workspace_iam_roles" {
  default = {
    staging    = "arn:aws:iam::STAGING-ACCOUNT-ID:role/Terraform"
    production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/Terraform"
  }
}

provider "aws" {
  # No credentials explicitly set here because they come from either the
  # environment or the global credentials file.
  assume_role {
    role_arn = var.workspace_iam_roles[terraform.workspace]
  }
}
If workspace IAM roles are centrally managed and shared across many separate Terraform configurations, the role ARNs could also be obtained via a data source such as terraform_remote_state to avoid repeating these values.
With the necessary objects created and the backend configured, run terraform init to initialize the backend and establish an initial workspace called "default". This workspace will not be used, but is created automatically by Terraform as a convenience for users who are not using the workspaces feature.

Create a workspace corresponding to each key given in the workspace_iam_roles variable value above:
$ terraform workspace new staging
Created and switched to workspace "staging"!
...
$ terraform workspace new production
Created and switched to workspace "production"!
...
Due to the assume_role setting in the AWS provider configuration, any management operations for AWS resources will be performed via the configured role in the appropriate environment AWS account. The backend operations, such as reading and writing the state from S3, will be performed directly as the administrator's own user within the administrative account.
$ terraform workspace select staging
$ terraform apply
...
Running Terraform in Amazon EC2
Teams that make extensive use of Terraform for infrastructure management often run Terraform in automation to ensure a consistent operating environment and to limit access to the various secrets and other sensitive information that Terraform configurations tend to require.
When running Terraform in an automation tool running on an Amazon EC2 instance, consider running this instance in the administrative account and using an instance profile in place of the various administrator IAM users suggested above. An IAM instance profile can also be granted cross-account delegation access via an IAM policy, giving this instance the access it needs to run Terraform.
To isolate access to different environment accounts, use a separate EC2 instance for each target account so that its access can be limited only to the single account.
Similar approaches can be taken with equivalent features in other AWS compute services, such as ECS.
Protecting Access to Workspace State

In a simple implementation of the pattern described in the prior sections, all users have access to read and write states for all workspaces. In many cases it is desirable to apply more precise access constraints to the Terraform state objects in S3, so that for example only trusted administrators are allowed to modify the production state, or to control reading of a state that contains sensitive information.
Amazon S3 supports fine-grained access control on a per-object-path basis using IAM policy. A full description of S3's access control mechanism is beyond the scope of this guide, but an example IAM policy granting access to only a single state object within an S3 bucket is shown below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::myorg-terraform-states"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::myorg-terraform-states/myapp/production/tfstate"
    }
  ]
}
The example backend configuration below documents the corresponding bucket and key arguments:
terraform {
  backend "s3" {
    bucket = "myorg-terraform-states"
    key    = "myapp/production/tfstate"
    region = "us-east-1"
  }
}
Refer to the AWS documentation on S3 access control for more details.
DynamoDB does not assign a separate resource ARN to each key in a table, but you can write more precise policies for a DynamoDB table using an IAM Condition element. For example, you can use the dynamodb:LeadingKeys condition key to match on the partition key values that the S3 backend will use:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:12341234:table/TableName",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "TableName/myapp/production/tfstate",
            "TableName/myapp/production/tfstate-md5"
          ]
        }
      }
    }
  ]
}
Note that DynamoDB ARNs are regional and account-specific, unlike S3 bucket ARNs, so you must also specify the correct region and AWS account ID for your DynamoDB table in the Resource element.
The example backend configuration below documents the corresponding arguments:
terraform {
  backend "s3" {
    bucket         = "myorg-terraform-states"
    key            = "myapp/production/tfstate"
    region         = "us-east-1"
    dynamodb_table = "TableName"
  }
}
Refer to the AWS documentation on DynamoDB fine-grained locking for more details.
Configuring Custom User-Agent Information

Note: this feature is optional and only available in Terraform v0.13.1+.

By default, the underlying AWS client used by the Terraform AWS Provider creates requests with User-Agent headers including information about Terraform and AWS Go SDK versions. To provide additional information in the User-Agent headers, the TF_APPEND_USER_AGENT environment variable can be set and its value will be directly added to HTTP requests. For example:
$ export TF_APPEND_USER_AGENT="JenkinsAgent/i-12345678 BuildID/1234 (Optional Extra Information)"
Support for S3-compatible storage providers is offered as "best effort". HashiCorp only tests the s3 backend against Amazon S3, so it cannot offer any guarantees when using an alternate provider.