Configure the Ops Agent

This document provides details about the Ops Agent's default and custom configurations. Read this document if you want to understand the agent's default behavior or if you want to customize what the agent collects and how.

Configuration model

The Ops Agent uses a built-in default configuration; you can't directly modify this built-in configuration. Instead, you create a file of overrides that are merged with the built-in configuration when the agent restarts.

The building blocks of the configuration are receivers, which describe what the agent collects; processors, which describe how the agent can modify the collected information; and pipelines, defined in a service element, which link receivers and processors together.

The built-in configuration is made up of these elements, and you use the same elements to override that built-in configuration.

Note: The Ops Agent sends logs to Cloud Logging and metrics to Cloud Monitoring. You can't configure the agent to export logs or metrics to other services. You can, however, configure Cloud Logging to export logs; for more information, see Route logs to supported destinations.

Built-in configuration

The built-in configuration for the Ops Agent defines the default collection for logs and metrics. The following shows the built-in configuration for Linux and for Windows:

Linux

By default, the Ops Agent collects file-based syslog logs and host metrics.
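For reference, the following is a sketch of this built-in Linux configuration, assuming the defaults documented in this page (a files receiver named syslog that tails /var/log/messages and /var/log/syslog, and host metrics collected every 60 seconds):

# Sketch of the built-in Linux configuration, reconstructed from the
# defaults described in this document; not copied verbatim.
logging:
  receivers:
    syslog:
      type: files
      include_paths:
      - /var/log/messages
      - /var/log/syslog
  service:
    pipelines:
      default_pipeline:
        receivers: [syslog]
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern: []
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics]
        processors: [metrics_filter]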

For more information about the metrics collected, see Metrics ingested by the receivers.

Windows

By default, the Ops Agent collects Windows event logs from System, Application, and Security channels, as well as host metrics, IIS metrics, and SQL Server metrics.
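For reference, a sketch of this built-in Windows configuration, assuming the defaults documented in this page (the three event-log channels and 60-second metric collection):

# Sketch of the built-in Windows configuration, reconstructed from the
# defaults described in this document; not copied verbatim.
logging:
  receivers:
    windows_event_log:
      type: windows_event_log
      channels: [System, Application, Security]
  service:
    pipelines:
      default_pipeline:
        receivers: [windows_event_log]
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
    iis:
      type: iis
      collection_interval: 60s
    mssql:
      type: mssql
      collection_interval: 60s
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern: []
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics, iis, mssql]
        processors: [metrics_filter]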

For more information about the metrics collected, see Metrics ingested by the receivers.

These configurations are discussed in more detail in Logging configuration and Metrics configuration.

User-specified configuration

To override the built-in configuration, you add new configuration elements to the user configuration file. Put your configuration for the Ops Agent in the following files:

Linux: /etc/google-cloud-ops-agent/config.yaml
Windows: C:\Program Files\Google\Cloud Operations\Ops Agent\config\config.yaml

Any user-specified configuration is merged with the built-in configuration when the agent restarts.

Note: If you make any configuration changes, then you must restart the agent to apply the updated configurations.
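For example, on a Linux VM that uses systemd, you can restart the agent as follows:

# Restart the agent so that the merged configuration takes effect.
sudo systemctl restart google-cloud-ops-agent

On Windows, run the following from an administrative PowerShell session:

# Restart the agent service on Windows.
Restart-Service google-cloud-ops-agent -Force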

To override a built-in receiver, processor, or pipeline, redefine it in your config.yaml file by declaring it with the same identifier. Starting with Ops Agent version 2.31.0, you can also configure the agent's log-rotation feature; for more information, see Configure log rotation in the Ops Agent.

For example, the built-in configuration for metrics includes a hostmetrics receiver that specifies a 60-second collection interval. To change the collection interval for host metrics to 30 seconds, include a metrics receiver called hostmetrics in your config.yaml file that sets the collection_interval value to 30 seconds, as shown in the following example:

metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 30s

For other examples of changing the built-in configurations, see Logging configuration and Metrics configuration. You can also turn off the collection of logging or metric data. These changes are described in the example logging service configurations and metrics service configurations.

You can use this file to prevent the agent from collecting self logs and sending those logs to Cloud Logging. For more information, see Collection of self logs.

You also configure the agent's log-rotation feature by using this file; for more information, see Configure log rotation in the Ops Agent.

You can't configure the Ops Agent to export logs or metrics to services other than Cloud Logging and Cloud Monitoring.

Logging configurations

The logging configuration uses the configuration model described previously: a set of receivers, an optional set of processors, and a service element that links receivers and processors into pipelines.

Each receiver and each processor can be used in multiple pipelines.
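Schematically, a logging configuration has the following shape; the uppercase identifiers are placeholders:

logging:
  receivers:
    RECEIVER_ID:
      type: ...
  processors:
    PROCESSOR_ID:
      type: ...
  service:
    pipelines:
      PIPELINE_ID:
        receivers: [RECEIVER_ID]
        processors: [PROCESSOR_ID]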

The following sections describe each of these elements.

The Ops Agent sends logs to Cloud Logging. You can't configure it to export logs to other services. You can, however, configure Cloud Logging to export logs; for more information, see Route logs to supported destinations.

Logging receivers

The receivers element contains a set of receivers, each identified by a RECEIVER_ID. A receiver describes how to retrieve the logs; for example, by tailing files, by using a TCP port, or from the Windows Event Log.

Structure of logging receivers

Each receiver must have an identifier, RECEIVER_ID, and include a type element. The valid types are files, fluent_forward, syslog (Linux only), tcp, windows_event_log (Windows only), and systemd_journald (Linux only).

The receivers structure looks like the following:

receivers:
  RECEIVER_ID:
    type: files
    ...
  RECEIVER_ID_2:
    type: syslog
    ...

Depending on the value of the type element, there might be other configuration options, such as the paths to read for files receivers, the listen host and port for network receivers, or the channels for Windows event logs. The following examples illustrate these options.

Examples of logging receivers

Sample files receiver:

receivers:
  RECEIVER_ID:
    type: files

    include_paths: [/var/log/*.log]
    exclude_paths: [/var/log/not-this-one.log]
    record_log_file_path: true

Sample fluent_forward receiver:

Note: Tags from incoming records are preserved in the LogEntry's logName field as a dot-separated suffix (logName = "projects/PROJECT_ID/logs/RECEIVER_ID.TAG").
receivers:
  RECEIVER_ID:
    type: fluent_forward

    listen_host: 127.0.0.1
    listen_port: 24224

Sample syslog receiver (Linux only):

receivers:
  RECEIVER_ID:
    type: syslog

    transport_protocol: tcp
    listen_host: 0.0.0.0
    listen_port: 5140

Sample tcp receiver:

receivers:
  RECEIVER_ID:
    type: tcp

    format: json
    listen_host: 127.0.0.1
    listen_port: 5170

Sample windows_event_log receiver (Windows only):

receivers:
  RECEIVER_ID:
    type: windows_event_log

    channels: [System,Application,Security]

Sample windows_event_log receiver that overrides the built-in receiver to use version 2:

receivers:
  windows_event_log:
    type: windows_event_log

    channels: [System,Application,Security]
    receiver_version: 2

Sample systemd_journald receiver:

receivers:
  RECEIVER_ID:
    type: systemd_journald

Special fields in structured payloads

For processors and receivers that can ingest structured data (the fluent_forward and tcp receivers and the parse_json processor), you can set special fields in the input that will map to specific fields in the LogEntry object that the agent writes to the Logging API.

When the Ops Agent receives external structured log data, it places top-level fields into the LogEntry's jsonPayload field unless the field name is listed in the following table:

Record field → LogEntry field:

timestamp → timestamp. Two record formats are supported:

Option 1:

"timestamp": {
  "seconds": CURRENT_SECONDS,
  "nanos": CURRENT_NANOS
}

Option 2:

{
  "timestampSeconds": CURRENT_SECONDS,
  "timestampNanos": CURRENT_NANOS
}

receiver_id (not a record field) → logName
logging.googleapis.com/httpRequest (HttpRequest) → httpRequest
logging.googleapis.com/severity (string) → severity
logging.googleapis.com/labels (struct of string:string) → labels
logging.googleapis.com/operation (struct) → operation
logging.googleapis.com/sourceLocation (struct) → sourceLocation
logging.googleapis.com/trace (string) → trace
logging.googleapis.com/spanId (string) → spanId

Any remaining structured record fields remain part of the jsonPayload structure.
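For example, a hypothetical structured record received over a tcp receiver, such as:

{
  "logging.googleapis.com/severity": "ERROR",
  "logging.googleapis.com/labels": {"env": "prod"},
  "message": "something failed"
}

would produce a LogEntry with severity set to ERROR, the env label in the labels field, and only the message field remaining in jsonPayload.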

Common Linux log files

The following table lists common log files for frequently used Linux applications:

apache: For information about Apache log files, see Monitoring third-party applications: Apache Web Server.

cassandra: For information about Cassandra log files, see Monitoring third-party applications: Cassandra.

chef:
/var/log/chef-server/bookshelf/current
/var/log/chef-server/chef-expander/current
/var/log/chef-server/chef-pedant/http-traffic.log
/var/log/chef-server/chef-server-webui/current
/var/log/chef-server/chef-solr/current
/var/log/chef-server/erchef/current
/var/log/chef-server/erchef/erchef.log.1
/var/log/chef-server/nginx/access.log
/var/log/chef-server/nginx/error.log
/var/log/chef-server/nginx/rewrite-port-80.log
/var/log/chef-server/postgresql/current

gitlab:
/home/git/gitlab/log/application.log
/home/git/gitlab/log/githost.log
/home/git/gitlab/log/production.log
/home/git/gitlab/log/satellites.log
/home/git/gitlab/log/sidekiq.log
/home/git/gitlab/log/unicorn.stderr.log
/home/git/gitlab/log/unicorn.stdout.log
/home/git/gitlab-shell/gitlab-shell.log

jenkins:
/var/log/jenkins/jenkins.log

jetty:
/var/log/jetty/out.log
/var/log/jetty/*.request.log
/var/log/jetty/*.stderrout.log

joomla:
/var/www/joomla/logs/*.log

magento:
/var/www/magento/var/log/exception.log
/var/www/magento/var/log/system.log
/var/www/magento/var/report/*

mediawiki:
/var/log/mediawiki/*.log

memcached: For information about Memcached log files, see Monitoring third-party applications: Memcached.

mongodb: For information about MongoDB log files, see Monitoring third-party applications: MongoDB.

mysql: For information about MySQL log files, see Monitoring third-party applications: MySQL.

nginx: For information about nginx log files, see Monitoring third-party applications: nginx.

postgres: For information about PostgreSQL log files, see Monitoring third-party applications: PostgreSQL.

puppet:
/var/log/puppet/http.log
/var/log/puppet/masterhttp.log

puppet-enterprise:
/var/log/pe-activemq/activemq.log
/var/log/pe-activemq/wrapper.log
/var/log/pe-console-auth/auth.log
/var/log/pe-console-auth/cas_client.log
/var/log/pe-console-auth/cas.log
/var/log/pe-httpd/access.log
/var/log/pe-httpd/error.log
/var/log/pe-httpd/other_vhosts_access.log
/var/log/pe-httpd/puppetdashboard.access.log
/var/log/pe-httpd/puppetdashboard.error.log
/var/log/pe-httpd/puppetmasteraccess.log
/var/log/pe-mcollective/mcollective_audit.log
/var/log/pe-mcollective/mcollective.log
/var/log/pe-puppet-dashboard/certificate_manager.log
/var/log/pe-puppet-dashboard/event-inspector.log
/var/log/pe-puppet-dashboard/failed_reports.log
/var/log/pe-puppet-dashboard/live-management.log
/var/log/pe-puppet-dashboard/mcollective_client.log
/var/log/pe-puppet-dashboard/production.log
/var/log/pe-puppetdb/pe-puppetdb.log
/var/log/pe-puppet/masterhttp.log
/var/log/pe-puppet/rails.log

rabbitmq: For information about RabbitMQ log files, see Monitoring third-party applications: RabbitMQ.

redis: For information about Redis log files, see Monitoring third-party applications: Redis.

redmine:
/var/log/redmine/*.log

salt:
/var/log/salt/key
/var/log/salt/master
/var/log/salt/minion
/var/log/salt/syndic.loc

solr: For information about Apache Solr log files, see Monitoring third-party applications: Apache Solr.

sugarcrm:
/var/www/*/sugarcrm.log

syslog:
/var/log/syslog
/var/log/messages

tomcat: For information about Apache Tomcat log files, see Monitoring third-party applications: Apache Tomcat.

zookeeper: For information about Apache ZooKeeper log files, see Monitoring third-party applications: Apache ZooKeeper.

Default ingested labels

Logs can contain the following labels by default in the LogEntry:

Field: labels."compute.googleapis.com/resource_name"
Sample value: test_vm
Description: The name of the virtual machine from which this log originates. Written for all logs.

Field: labels."logging.googleapis.com/instrumentation_source"
Sample value: agent.googleapis.com/apache_access
Description: The value of the receiver type from which this log originates, prefixed by agent.googleapis.com/. Written only by receivers from third-party integrations.

Logging processors

The optional processors element contains a set of processing directives, each identified by a PROCESSOR_ID. A processor describes how to manipulate the information collected by a receiver.

Each processor must have a unique identifier and include a type element. The valid types are parse_json, parse_multiline, parse_regex, exclude_logs, and modify_fields.

The processors structure looks like the following:

processors:
  PROCESSOR_ID:
    type: parse_json
    ...
  PROCESSOR_ID_2:
    type: parse_regex
    ...

Depending on the value of the type element, there are other configuration options, as follows.

parse_json processor

Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_json

    time_key:    <field name within jsonPayload>
    time_format: <strptime format string>

The parse_json processor parses the input JSON into the jsonPayload field of the LogEntry. Other parts of the LogEntry can be parsed by setting certain special top-level fields.

Example configuration
processors:
  PROCESSOR_ID:
    type: parse_json

    time_key:    time
    time_format: "%Y-%m-%dT%H:%M:%S.%L%Z"

parse_multiline processor

Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_multiline

    match_any:
    - type: <type of the exceptions>
      language: <language name>

Example configuration
logging:
  receivers:
    custom_file1:
      type: files
      include_paths:
      - /tmp/test-multiline28
  processors:
    parse_java_multiline:
      type: parse_multiline
      match_any:
      - type: language_exceptions
        language: java
    extract_structure:
      type: parse_regex
      field: message
      regex: "^(?<time>[\d-]*T[\d:.Z]*) (?<severity>[^ ]*) (?<file>[^ :]*):(?<line>[\d]*) - (?<message>(.|\\n)*)$"
      time_key: time
      time_format: "%Y-%m-%dT%H:%M:%S.%L"
    move_severity:
      type: modify_fields
      fields:
        severity:
          move_from: jsonPayload.severity
  service:
    pipelines:
      pipeline1:
        receivers: [custom_file1]
        processors: [parse_java_multiline, extract_structure, move_severity]

In the extract_structure processor, the field: message statement means that the regular expression is applied to the log entry's jsonPayload.message field. By default, the files receiver places each line of the log file into a log entry with a single payload field called jsonPayload.message.

The extract_structure processor places extracted fields into subfields of the LogEntry.jsonPayload field. Other statements in the YAML file cause two of the extracted fields, time and severity, to be moved. The time_key: time statement pulls the LogEntry.jsonPayload.time field, parses the timestamp, and then adds the LogEntry.timestamp field. The move_severity processor moves the severity field from the LogEntry.jsonPayload.severity field to the LogEntry.severity field.

Example log file:

2022-10-17T22:00:00.187512963Z ERROR HelloWorld:16 - javax.servlet.ServletException: Something bad happened
    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)
Caused by: com.example.myproject.MyProjectServletException
    at com.example.myproject.MyServlet.doPost(MyServlet.java:169)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)
    ... 27 common frames omitted

The agent ingests each line from the log file into Cloud Logging in the following format:

{
  "insertId": "...",
  "jsonPayload": {
    "line": "16",
    "message": "javax.servlet.ServletException: Something bad happened\n    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n    at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n    at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)\nCaused by: com.example.myproject.MyProjectServletException\n    at com.example.myproject.MyServlet.doPost(MyServlet.java:169)\n    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)\n    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)\n    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)\n    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)\n    ... 27 common frames omitted\n",
    "file": "HelloWorld"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "...",
      "project_id": "...",
      "zone": "..."
    }
  },
  "timestamp": "2022-10-17T22:00:00.187512963Z",
  "severity": "ERROR",
  "labels": {
    "compute.googleapis.com/resource_name": "..."
  },
  "logName": "projects/.../logs/custom_file",
  "receiveTimestamp": "2022-10-18T03:12:38.430364391Z"
}

parse_regex processor

Configuration structure
processors:
  PROCESSOR_ID:
    type: parse_regex

    regex: <regular expression>

    time_key:    <field name within jsonPayload>
    time_format: <format string>

Example configuration
processors:
  PROCESSOR_ID:
    type: parse_regex

    regex:       "^(?<time>[^ ]*) (?<severity>[^ ]*) (?<msg>.*)$"
    time_key:    time
    time_format: "%Y-%m-%dT%H:%M:%S.%L%Z"

exclude_logs processor

Configuration structure
type: exclude_logs
match_any:
  - <filter>
  - <filter>

The top-level configuration for this processor contains a single field, match_any, which contains a list of filter rules.

Note: Using the exclude_logs processor has performance implications, so avoid this processor if you can exclude logs at the source or set up exclusion filters instead. If you need to use the exclude_logs processor, we recommend following best practices to keep its overhead low.

Example configuration
processors:
  PROCESSOR_ID:
    type: exclude_logs
    match_any:
    - '(jsonPayload.message =~ "log spam 1" OR jsonPayload.message =~ "log spam 2") AND severity = "ERROR"'
    - 'jsonPayload.application = "foo" AND severity = "INFO"'

modify_fields processor

The modify_fields processor allows customization of the structure and contents of log entries.

Configuration structure
type: modify_fields
fields:
  <destination field>:
    # Source
    move_from: <source field>
    copy_from: <source field>
    static_value: <string>
    
    # Mutation
    default_value: <string>
    map_values:
      <old value>: <new value>
    map_values_exclusive: {true|false}
    type: {integer|float}
    omit_if: <filter>

The top-level configuration for this processor contains a single field, fields, which contains a map of output field names and corresponding translations. For each output field, an optional source and zero or more mutation operations are applied.

All field names use the dot-separated syntax from the Cloud Logging query language. Filters use the Cloud Logging query language.

All transformations are applied in parallel, which means that sources and filters operate on the original input log entry and therefore cannot reference the new value of any other field being modified by the same processor.
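If one change must observe the result of another, chain two modify_fields processors in the pipeline instead of combining them in one; a hypothetical sketch with illustrative field and processor names:

processors:
  # Runs first: moves the field. Operates on the original entry.
  rename_status:
    type: modify_fields
    fields:
      jsonPayload.status_text:
        move_from: jsonPayload.status
  # Runs second in the pipeline, so it sees the moved field.
  map_status:
    type: modify_fields
    fields:
      jsonPayload.status_text:
        map_values:
          "400": "BAD_REQUEST"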

Source options: At most one source (move_from, copy_from, or static_value) may be specified for each output field.

Mutation options: Zero or more mutation operators may be applied to a single field. If multiple operators are supplied, they will always be applied in the following order.

  1. default_value: <string>

    If the source field did not exist, the output value will be set to <string>. If the source field already exists (even if it contains an empty string), the original value is unmodified.

  2. map_values: <map>

    If the input value matches one of the keys in <map>, the output value will be replaced with the corresponding value from the map.

  3. map_values_exclusive: {true|false}

    In case the <source field> value does not match any keys specified in the map_values pairs, the destination field will be forcefully unset if map_values_exclusive is true, or left untouched if map_values_exclusive is false.

  4. type: {integer|float}

    The input value will be converted to an integer or a float. If the string cannot be converted to a number, the output value will be unset. If the string contains a float but the type is specified as integer, the number will be truncated to an integer.

    Note that the Cloud Logging API uses JSON and therefore it does not support a full 64-bit integer; if a 64-bit (or larger) integer is needed, it must be stored as a string in the log entry.

  5. omit_if: <filter>

    If the filter matches the input log record, the output field will be unset. This can be used to remove placeholder values, such as:

    httpRequest.referer:
      move_from: jsonPayload.referer
      omit_if: httpRequest.referer = "-"
    
Sample configurations

The parse_json processor would transform a JSON file containing

{
  "http_status": "400",
  "path": "/index.html",
  "referer": "-"
}

into a LogEntry structure that looks like this:

{
  "jsonPayload": {
    "http_status": "400",
    "path": "/index.html",
    "referer": "-"
  }
}

This could then be transformed with modify_fields into this LogEntry:

{
  "httpRequest": {
    "status": 400,
    "requestUrl": "/index.html",
  }
}

by using this Ops Agent configuration:

logging:
  receivers:
    in:
      type: files
      include_paths:
      - /var/log/http.json
  processors:
    parse_json:
      type: parse_json
    set_http_request:
      type: modify_fields
      fields:
        httpRequest.status:
          move_from: jsonPayload.http_status
          type: integer
        httpRequest.requestUrl:
          move_from: jsonPayload.path
        httpRequest.referer:
          move_from: jsonPayload.referer
          omit_if: jsonPayload.referer = "-"
  service:
    pipelines:
      pipeline:
        receivers: [in]
        processors: [parse_json, set_http_request]

This configuration reads JSON-formatted logs from /var/log/http.json and populates part of the httpRequest structure from fields in the logs.

Logging service

The logging service customizes verbosity for the Ops Agent's own logs, and links logging receivers and processors together into pipelines. The service section has two elements: log_level and pipelines.

Log verbosity level

The log_level field, available with Ops Agent versions 2.6.0 and later, customizes the verbosity of the Ops Agent logging submodule's own logs. The default is info. The available options are error, warn, info, debug, and trace.

Warning: Setting log_level to debug or trace triggers a feedback loop in the logging sub-agent, resulting in a continuous stream of logs. This is a known issue that is being addressed. In the meantime, don't set log_level to a value more verbose than info.

The following configuration customizes log verbosity for the logging submodule to be debug instead:

logging:
  service:
    log_level: debug

Logging pipelines

The pipelines field can contain multiple pipeline IDs and definitions. Each pipeline value consists of a receivers element, which lists the receivers for that pipeline, and an optional processors element, which lists the processors to apply.

Example logging service configurations

A service configuration has the following structure:

service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers:  [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers:  [...]
      processors: [...]

To stop the agent from collecting and sending either /var/log/messages or /var/log/syslog entries, redefine the default pipeline with an empty receivers list and no processors. This configuration does not stop the agent's logging subcomponent, because the agent must be able to collect logs for the monitoring subcomponent. The entire empty logging configuration looks like the following:

logging:
  service:
    pipelines:
      default_pipeline:
        receivers: []

The following service configuration defines a pipeline with the ID custom_pipeline:

logging:
  service:
    pipelines:
      custom_pipeline:
        receivers:
        - RECEIVER_ID
        processors:
        - PROCESSOR_ID

Metrics configurations

The metrics configuration uses the configuration model described previously: a set of receivers, an optional set of processors, and a service element that links receivers and processors into pipelines.
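Schematically, a metrics configuration has the following shape; the uppercase identifiers are placeholders:

metrics:
  receivers:
    RECEIVER_ID:
      type: ...
  processors:
    PROCESSOR_ID:
      type: exclude_metrics
      metrics_pattern: [...]
  service:
    pipelines:
      PIPELINE_ID:
        receivers: [RECEIVER_ID]
        processors: [PROCESSOR_ID]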

The following sections describe each of these elements.

The Ops Agent sends metrics to Cloud Monitoring. You can't configure it to export metrics to other services.

Metrics receivers

The receivers element contains a set of receiver definitions. A receiver describes where to retrieve the metrics from, such as cpu and memory. A receiver can be shared among multiple pipelines.

Structure of metrics receivers

Each receiver must have an identifier, RECEIVER_ID, and include a type element. The valid built-in types are hostmetrics, iis (Windows only), and mssql (Windows only).

A receiver can also specify the optional collection_interval option. The value is in the format of a duration, for example, 30s or 2m. The default value is 60s.

Each of these receiver types collects a set of metrics; for information about the specific metrics included, see Metrics ingested by the receivers.

You can create only one receiver for each type. For example, you can't define two receivers of type hostmetrics.

Changing the collection interval in the metrics receivers

Some critical workloads might require fast alerting. By reducing the collection interval for the metrics, you can configure more sensitive alerts. For information about how alerts are evaluated, see Behavior of metric-based alerting policies.

For example, the following receiver changes the collection interval for host metrics (the receiver ID is hostmetrics) from the default of 60 seconds to 10 seconds:

metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 10s

You can also override the collection interval for the Windows iis and mssql metrics receivers using the same technique.
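For example, the following sketch sets a 30-second interval for the built-in iis receiver:

metrics:
  receivers:
    iis:
      type: iis
      collection_interval: 30s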

Metrics ingested by the receivers

The metrics ingested by the Ops Agent have identifiers that begin with the following pattern: agent.googleapis.com/GROUP. The GROUP component identifies a set of related metrics; it has values like cpu, network, and others.

The hostmetrics receiver

The hostmetrics receiver ingests the following metric groups. For more information, see the linked section for each group on the Ops Agent metrics page.

cpu:
CPU load at 1 minute intervals
CPU load at 5 minute intervals
CPU load at 15 minute intervals
CPU usage, with labels for CPU number and CPU state
CPU usage percent, with labels for CPU number and CPU state

disk:
Disk bytes read, with label for device
Disk bytes written, with label for device
Disk I/O time, with label for device
Disk weighted I/O time, with label for device
Disk pending operations, with label for device
Disk merged operations, with labels for device and direction
Disk operations, with labels for device and direction
Disk operation time, with labels for device and direction
Disk usage, with labels for device and state
Disk utilization, with labels for device and state

gpu (Linux only; see About the gpu metrics for other important information):
Current number of GPU memory bytes used, by state
Maximum amount of GPU memory, in bytes, that has been allocated by the process
Percentage of time in the process lifetime that one or more kernels has been running on the GPU
Percentage of time, since the last sample, that the GPU has been active

interface (Linux only):
Total count of network errors
Total count of packets sent over the network
Total number of bytes sent over the network

memory:
Memory usage, with label for state (buffered, cached, free, slab, used)
Memory usage percent, with label for state (buffered, cached, free, slab, used)

network:
TCP connection count, with labels for port and TCP state

swap:
Swap I/O operations, with label for direction
Swap bytes used, with labels for device and state
Swap percent used, with labels for device and state

pagefile (Windows only):
Current percentage of pagefile used by state

processes:
Processes count, with label for state
Processes forked count
Per-process disk read I/O, with labels for process name and others
Per-process disk write I/O, with labels for process name and others
Per-process RSS usage, with labels for process name and others
Per-process VM usage, with labels for process name and others

About the gpu metrics

The hostmetrics receiver collects metrics reported by the NVIDIA Management Library (NVML) as agent.googleapis.com/gpu metrics.

To collect these metrics, you must create your VM with attached GPUs and install the GPU driver. The hostmetrics receiver doesn't collect these metrics on VMs with no attached GPUs.

Only Ops Agent version 2.38.0 and versions 2.41.0 or higher are compatible with GPU monitoring. Don't install Ops Agent versions 2.39.0 and 2.40.0 on VMs with attached GPUs. For more information, see Agent crashes and report mentions NVIDIA.

You can install or upgrade the NVIDIA GPU driver by using package managers or local installation scripts. When using local installation scripts, the Ops Agent service must be stopped before the driver installation can proceed. To stop the agent, run the following command:

sudo systemctl stop google-cloud-ops-agent

You must also reboot the VM after installing or upgrading an NVIDIA GPU driver.

The iis receiver (Windows only)

The iis receiver (Windows only) ingests metrics of the iis group. For more information, see the Ops Agent metrics page.

iis (Windows only):
Currently open connections to IIS
Network bytes transferred by IIS
Connections opened to IIS
Requests made to IIS

The mssql receiver (Windows only)

The mssql receiver (Windows only) ingests metrics of the mssql group. For more information, see the Ops Agent metrics page.

mssql (Windows only):
Currently open connections to SQL server
SQL server total transactions per second
SQL server write transactions per second

Metrics processors

The processors element contains a set of processor definitions. A processor describes which metrics from the receiver type to exclude. The only supported type is exclude_metrics, which takes a metrics_pattern option. The value is a list of globs that match the Ops Agent metric types to exclude from the group collected by a receiver, for example, agent.googleapis.com/processes/*.

Sample metrics processor

The following example shows the exclude_metrics processor supplied in the built-in configurations. This processor supplies an empty metrics_pattern value, so it doesn't exclude any metrics.

processors:
  metrics_filter:
    type: exclude_metrics
    metrics_pattern: []

To disable the collection of all process metrics by the Ops Agent, add the following to your config.yaml file:

metrics:
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern:
      - agent.googleapis.com/processes/*

This configuration excludes process metrics by redefining the metrics_filter processor, which applies to the default pipeline in the metrics service.

Metrics service

The metrics service customizes verbosity for the Ops Agent metrics module's own logs and links metrics receivers and processors together into pipelines. The service section has two elements: log_level and pipelines.

Metrics verbosity level

The log_level field, available with Ops Agent versions 2.6.0 and later, customizes the verbosity of the Ops Agent metrics submodule's own logs. The default is info. The available options are error, warn, info, and debug.

Metrics pipelines

The pipelines element can contain multiple pipeline IDs and definitions. Each pipeline definition consists of a receivers element, which lists the receivers for that pipeline, and an optional processors element, which lists the processors to apply.

Example metrics service configurations

A service configuration has the following structure:

service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers:  [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers:  [...]
      processors: [...]

To turn off the built-in ingestion of host metrics, redefine the default pipeline with an empty receivers list and no processors. The entire metrics configuration looks like the following:

metrics:
  service:
    pipelines:
      default_pipeline:
        receivers: []

The following example shows the built-in service configuration for Windows:

metrics:
  service:
    pipelines:
      default_pipeline:
        receivers:
        - hostmetrics
        - iis
        - mssql
        processors:
        - metrics_filter

The following service configuration customizes log verbosity for the metrics submodule to be debug instead:

metrics:
  service:
    log_level: debug

Collection of self logs

By default, the Ops Agent's Fluent Bit self logs are sent to Cloud Logging. These logs can include a lot of information, and the additional volume might increase your costs to use Cloud Logging.

You can disable the collection of these self logs, starting with Ops Agent version 2.44.0, by using the default_self_log_file_collection option.

To disable self-log collection, add a global section to your user-specified configuration file and set the default_self_log_file_collection option to the value false:

logging:  ...
metrics:  ...
global:
  default_self_log_file_collection: false

Log-rotation configuration

Starting with Ops Agent version 2.31.0, you can also set up the agent's log-rotation feature by using the configuration files. For more information, see Configure log rotation in the Ops Agent.

