Source: https://developer.hashicorp.com/terraform/enterprise/deploy/manage/monitor

Monitor Terraform Enterprise deployments | Terraform

This topic describes how to enable logs and metrics in Terraform Enterprise so that you can monitor your non-Replicated deployment. For information about monitoring Replicated deployments, refer to Terraform Enterprise Log Forwarding and Monitoring a Terraform Enterprise Instance in the Replicated administration section.

Complete the following steps to monitor your Terraform Enterprise deployment:

  1. Enable log forwarding. Terraform Enterprise writes logs directly to standard output and standard error, which allows you to forward logs using native tooling for your deployment platform.
  2. Enable metrics collection. Metrics collection is disabled by default. Update your deployment configuration file to enable metrics collection.

Enable log forwarding

Terraform Enterprise writes logs directly to standard output and standard error. This allows you to forward logs using native tooling for your deployment platform.

Log location and format

The individual service logs are located within the /var/log/terraform-enterprise directory inside the container.

/var/log/terraform-enterprise
├── atlas.log
├── nginx.log
├── sidekiq.log
└── vault.log

Each service log is a plain text file containing the logs for that service. Logs are collated and written to the container's standard output in JSON format. Each log entry contains two fields:

  - log: the original log line emitted by the service.
  - component: the name of the service that emitted the log line.

An example set of log entries emitted by a Terraform Enterprise container would appear as follows:

{"log":"2023-09-18 02:39:05 [INFO] msg=Worker start worker=AuthenticationTokenDeletionWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.098Z pid=156 tid=2pos class=FailedJobWorker jid=1010d28ac591979d9decb61f INFO: start","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] msg=Worker start worker=FailedJobWorker","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] msg=Worker finish worker=AuthenticationTokenDeletionWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.114Z pid=156 tid=2pyc class=AuthenticationTokenDeletionWorker jid=515e8a727a3e4948e9dbb04a elapsed=0.034 INFO: done","component":"sidekiq"}
{"log":"2023-09-18 02:39:05 [INFO] agent_jobs_processed=[] agent_jobs_errored=[] msg=Worker finish worker=FailedJobWorker","component":"sidekiq"}
{"log":"2023-09-18T02:39:05.118Z pid=156 tid=2pos class=FailedJobWorker jid=1010d28ac591979d9decb61f queue=default elapsed=0.02 INFO: done","component":"sidekiq"}
{"log":"2023-09-18 02:39:13 [INFO] [3efaaec9-48d4-4517-9fde-127f80faacb4] [dd.service=atlas dd.trace_id=1904097642804464614 dd.span_id=0 ddsource=ruby] {\"method\":\"GET\",\"path\":\"/\",\"format\":\"html\",\"status\":301,\"allocations\":493,\"duration\":0.72,\"view\":0.0,\"db\":0.0,\"location\":\"https://tfe.example.com/session\",\"dd\":{\"trace_id\":\"1904097642804464614\",\"span_id\":\"0\",\"env\":\"\",\"service\":\"atlas\",\"version\":\"\"},\"ddsource\":[\"ruby\"],\"uuid\":\"3efaaec9-48d4-4517-9fde-127f80faacb4\",\"remote_ip\":\"1.2.3.4\",\"request_id\":\"3efaaec9-48d4-4517-9fde-127f80faacb4\",\"user_agent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\",\"user\":null,\"auth_source\":null}","component":"atlas"}
{"log":"2023-09-18 02:39:13 [INFO] [3cb89cfa-7d7f-4aeb-9e60-2256b016a839] [dd.service=atlas dd.trace_id=4370203755142829190 dd.span_id=0 ddsource=ruby] {\"method\":\"GET\",\"path\":\"/session\",\"format\":\"html\",\"status\":200,\"allocations\":3895,\"duration\":7.3,\"view\":5.77,\"db\":0.59,\"dd\":{\"trace_id\":\"4370203755142829190\",\"span_id\":\"0\",\"env\":\"\",\"service\":\"atlas\",\"version\":\"\"},\"ddsource\":[\"ruby\"],\"uuid\":\"3cb89cfa-7d7f-4aeb-9e60-2256b016a839\",\"remote_ip\":\"1.2.3.4\",\"request_id\":\"3cb89cfa-7d7f-4aeb-9e60-2256b016a839\",\"user_agent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\",\"user\":null,\"auth_source\":null}","component":"atlas"}
{"log":"1.2.3.4 - - [18/Sep/2023:02:39:13 +0000] \"GET / HTTP/1.1\" 301 117 \"-\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\"","component":"nginx"}
{"log":"1.2.3.4 - - [18/Sep/2023:02:39:13 +0000] \"GET /session HTTP/1.1\" 200 1735 \"-\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36\"","component":"nginx"}
{"log":"Storing the encrypted Vault token in Redis","component":"vault"}
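
Because each collated line is a self-contained JSON object, the stream is straightforward to post-process. As a minimal sketch using only the log and component fields shown in the sample above, the following Python splits a collated stream back into per-service logs:

```python
import json
from collections import defaultdict

def split_by_component(lines):
    """Group collated JSON log lines by the service that emitted them."""
    by_component = defaultdict(list)
    for line in lines:
        entry = json.loads(line)
        by_component[entry["component"]].append(entry["log"])
    return by_component

# Two lines taken from the sample output above.
sample = [
    '{"log":"Storing the encrypted Vault token in Redis","component":"vault"}',
    '{"log":"2023-09-18 02:39:05 [INFO] msg=Worker start worker=FailedJobWorker","component":"sidekiq"}',
]
logs = split_by_component(sample)
# logs["vault"] == ["Storing the encrypted Vault token in Redis"]
```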

Note that the format of individual service logs is considered an internal implementation detail and is subject to change in any release.

External log forwarding

We strongly recommend using an external log forwarding solution that aligns with your existing observability tooling. Depending on the deployment platform, native or third-party solutions, such as host-level monitoring agents, may be appropriate for log aggregation and forwarding. HashiCorp does not provide support for third-party log forwarding solutions.

Docker

Docker supports a multitude of logging drivers. Refer to the Docker logging driver list for the available options.
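
For example, a host-level daemon.json can switch the Docker daemon's default logging driver for all containers. The snippet below is a sketch assuming a hypothetical syslog endpoint; any driver from the list above can be substituted:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://syslog.example.com:514"
  }
}
```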

Kubernetes

Kubernetes supports several architectures for log forwarding. Refer to the Kubernetes logging architectures documentation for the available options.

Native log forwarding

As a convenience when migrating from legacy Replicated environments to Flexible Deployments, Terraform Enterprise provides a mechanism to inject Fluent Bit [OUTPUT] configuration directives. This allows Terraform Enterprise to use Fluent Bit plugins to forward log data directly to a number of external destinations.

Fluent Bit configuration must be provided to the Terraform Enterprise container in a file mounted into the container. That is, the configuration value must point to a filesystem path inside the container where the Fluent Bit configuration is located; it must not contain the configuration itself. It is the responsibility of the Terraform Enterprise operator to mount the configuration snippet into the container.

TFE_LOG_FORWARDING_CONFIG_PATH (string, required): Filesystem path on the Terraform Enterprise container containing Fluent Bit [OUTPUT] configuration.
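
As a sketch of how the pieces fit together in a Docker Compose deployment, the operator mounts the snippet and points TFE_LOG_FORWARDING_CONFIG_PATH at it. The host and container paths below are illustrative, not prescribed:

```yaml
services:
  terraform-enterprise:
    environment:
      # Must match the container-side mount path below.
      TFE_LOG_FORWARDING_CONFIG_PATH: /etc/terraform-enterprise/fluent-bit.conf
    volumes:
      # The host-side fluent-bit.conf contains only [OUTPUT] directives.
      - ./fluent-bit.conf:/etc/terraform-enterprise/fluent-bit.conf:ro
```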

Future deprecation

Exposing Fluent Bit configuration to Terraform Enterprise operators is provided as a convenience to facilitate migration of Terraform Enterprise installations. Customers are encouraged to migrate away from relying on injected Fluent Bit configuration and to provide their own log forwarding and aggregation solution in their infrastructure.

Limitations

The Fluent Bit solution provided in legacy Replicated Terraform Enterprise deployments emitted log entries that contained additional metadata keys, such as hostname and IP address. This metadata added observability value because operators could identify the source of each log entry. Unlike Replicated deployments, logs emitted by the Fluent Bit plugins available in Terraform Enterprise Flexible Deployments do not attach additional metadata to each log entry. This is due to the isolated nature of the Fluent Bit process within the Terraform Enterprise Docker container; by definition, processes within the container are not exposed to host-level details.

Because of this, we strongly recommend using an external log forwarding solution that aligns with your existing observability tooling. Refer to External log forwarding for further discussion.

Additionally, note that built-in log forwarding is only available for Docker-deployed Terraform Enterprise installations. Terraform Enterprise deployed on Kubernetes does not support the built-in Fluent Bit.

Supported external destinations

You can only forward logs to one of the supported external destinations listed below. Each destination includes an example configuration for convenience.

Amazon CloudWatch

Sending to Amazon CloudWatch is only supported when Terraform Enterprise is located within AWS due to how Fluent Bit reads AWS credentials.

This example configuration forwards all logs to Amazon CloudWatch. Refer to the cloudwatch_logs Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name               cloudwatch_logs
    Match              *
    region             us-east-1
    log_group_name     example-log-group
    log_stream_name    example-log-stream
    auto_create_group  On

Note: In Terraform Enterprise installations using AWS external services, Fluent Bit will have access to the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that are used for object storage.

Amazon S3

Sending to Amazon S3 is only supported when Terraform Enterprise is located within AWS due to how Fluent Bit reads AWS credentials.

This example configuration forwards all logs to Amazon S3. Refer to the s3 Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name                          s3
    Match                         *
    bucket                        example-bucket
    region                        us-east-1
    total_file_size               250M
    s3_key_format                 /$TAG/%Y/%m/%d/%H/%M/%S/$UUID.gz
    s3_key_format_tag_delimiters  .-

Note: In Terraform Enterprise installations using AWS external services, Fluent Bit will have access to the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that are used for object storage.

Azure Blob Storage

This example configuration forwards all logs to Azure Blob Storage. Refer to the azure_blob Fluent Bit output plugin documentation for more information.

[OUTPUT]
    name                   azure_blob
    match                  *
    account_name           example-account-name
    shared_key             example-access-key
    path                   logs
    container_name         example-container-name
    auto_create_container  on
    tls                    on

Azure Log Analytics

This example configuration forwards all logs to Azure Log Analytics. Refer to the azure Fluent Bit output plugin documentation for more information.

[OUTPUT]
    name         azure
    match        *
    Customer_ID  example-log-analytics-workspace-id
    Shared_Key   example-access-key

Datadog

This example configuration forwards all logs to Datadog. Refer to the datadog Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name        datadog
    Match       *
    Host        http-intake.logs.datadoghq.com
    TLS         on
    compress    gzip
    apikey      example-api-key
    dd_service  terraform_enterprise
    dd_source   docker
    dd_tags     environment:development,owner:engineering

Forward

This example configuration forwards all logs to a listening Fluent Bit or Fluentd instance. Refer to the forward Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name   forward
    Match  *
    Host   fluent.example.com
    Port   24224

Google Cloud Platform Cloud Logging

Sending to Google Cloud Platform Cloud Logging is only supported when Terraform Enterprise is located within GCP due to how Fluent Bit reads GCP credentials.

This example configuration forwards all logs to Google Cloud Platform Cloud Logging (formerly known as Stackdriver). Refer to the stackdriver Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name       stackdriver
    Match      *
    location   us-east1
    namespace  terraform_enterprise
    node_id    example-hostname
    resource   generic_node

Note: In Terraform Enterprise installations using GCP external services, Fluent Bit will have access to the GOOGLE_SERVICE_CREDENTIALS environment variable that points to a file containing the same GCP Service Account JSON credentials that are used for object storage.

Splunk Enterprise HTTP Event Collector (HEC)

This example configuration forwards all logs to Splunk Enterprise via the HTTP Event Collector (HEC) interface. Refer to the splunk Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name          splunk
    Match         *
    Host          example-splunk-hec-endpoint
    Port          8088
    Splunk_Token  example-splunk-token

Syslog

This example configuration forwards all logs to a Syslog-compatible endpoint. Refer to the syslog Fluent Bit output plugin documentation for more information.

[OUTPUT]
    Name                 syslog
    Match                *
    host                 example-syslog-host
    port                 514
    mode                 tcp
    syslog_message_key   log
    syslog_severity_key  PRIORITY
    syslog_hostname_key  _HOSTNAME
    syslog_appname_key   SYSLOG_IDENTIFIER
    syslog_procid_key    _PID

Warning

The `syslog_message_key` should not be changed from `log`. If that value is changed, the application will no longer forward logs.

Enable metrics collection

Metrics collection is disabled by default. To enable it, set the TFE_METRICS_ENABLE variable to true in your runtime configuration. Kubernetes and Podman installations do not emit tfe.container.* metrics. Refer to the configuration reference for additional details.

Access metrics

Terraform Enterprise exposes metrics on a port separate from the application. This allows operators to use network access controls to restrict access to metrics data to authorized consumers, such as a Prometheus server.

By default, metrics are exposed on the following ports:

  - 9090 for HTTP
  - 9091 for HTTPS

You can configure the ports by setting the TFE_METRICS_HTTP_PORT and TFE_METRICS_HTTPS_PORT environment variables. Refer to the configuration reference for additional details.

The HTTP and HTTPS ports serve metrics on the path /metrics.

By default, requests to the /metrics endpoint will emit metrics in JSON format. Use the query parameter ?format=prometheus to emit metrics in Prometheus format.

When using Prometheus, we recommend using a scrape interval shorter than the expiration time of 15 seconds to ensure that Terraform Enterprise reports data points from short-lived processes.
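
A Prometheus scrape job following that recommendation might look like the sketch below. The target host and port are illustrative, and the format parameter selects the Prometheus output format described above:

```yaml
scrape_configs:
  - job_name: terraform-enterprise
    scrape_interval: 5s            # shorter than the 15-second expiration
    metrics_path: /metrics
    params:
      format: ["prometheus"]
    scheme: https
    static_configs:
      - targets: ["tfe.example.com:9091"]
```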

Refer to the metrics reference for details about the metrics Terraform Enterprise emits.

