
Client — AWS SDK for Ruby V2

You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.

Class: Aws::CloudWatchLogs::Client

Overview

An API client for Amazon CloudWatch Logs. To construct a client, you need to configure a :region and :credentials.

cloudwatchlogs = Aws::CloudWatchLogs::Client.new(
  region: region_name,
  credentials: credentials,
)

See #initialize for a full list of supported configuration options.

Region

You can configure a default region in the following locations:

See the AWS Regions and Endpoints documentation for a list of supported regions.

Credentials

Default credentials are loaded automatically from the following locations:

You can also construct a credentials object from one of the following classes:

Alternatively, you can configure credentials with :access_key_id and :secret_access_key:

creds = YAML.load(File.read('/path/to/secrets'))

Aws::CloudWatchLogs::Client.new(
  access_key_id: creds['access_key_id'],
  secret_access_key: creds['secret_access_key']
)

Always load your credentials from outside your application. Avoid configuring credentials statically and never commit them to source control.

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

Instance Method Summary

Methods inherited from Seahorse::Client::Base

add_plugin, api, #build_request, clear_plugins, define, new, #operation, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Instance Method Details

#associate_kms_key(options = {}) ⇒ Struct

Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.

Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.

It can take up to 5 minutes for this operation to take effect.

If you attempt to associate a CMK with a log group but the CMK does not exist or the CMK is disabled, you receive an InvalidParameterException error.
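
For example, a minimal call might look like the following; the log group name and key ARN are placeholders:

cloudwatchlogs.associate_kms_key(
  log_group_name: "my-log-group",                                      # placeholder
  kms_key_id: "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"  # placeholder ARN of a symmetric CMK
)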

#cancel_export_task(options = {}) ⇒ Struct

Cancels the specified export task.

The task must be in the PENDING or RUNNING state.

#create_export_task(options = {}) ⇒ Types::CreateExportTaskResponse

Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket. When you perform a CreateExportTask operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix to be used as the Amazon S3 key prefix for all exported objects.

Exporting to S3 buckets that are encrypted with AES-256 is supported. Exporting to S3 buckets encrypted with SSE-KMS is not supported.
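
A minimal sketch of an export, assuming the destination bucket already exists and allows CloudWatch Logs to write to it; all names are placeholders, and from and to are milliseconds since the epoch:

resp = cloudwatchlogs.create_export_task(
  task_name: "export-2021-01",              # placeholder
  log_group_name: "my-log-group",           # placeholder
  from: 1609459200000,                      # start of range, ms since epoch
  to: 1612137600000,                        # end of range, ms since epoch
  destination: "my-export-bucket",          # placeholder S3 bucket name
  destination_prefix: "cloudwatch-exports"  # optional S3 key prefix
)

task_id = resp.task_id
# use this task_id with describe_export_tasks or cancel_export_task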

#create_log_group(options = {}) ⇒ Struct

Creates a log group with the specified name. You can create up to 20,000 log groups per account.

You must use the following guidelines when naming a log group:

When you create a log group, by default the log events in the log group never expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.

If you associate an AWS Key Management Service (AWS KMS) customer master key (CMK) with the log group, ingested data is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

If you attempt to associate a CMK with the log group but the CMK does not exist or the CMK is disabled, you receive an InvalidParameterException error.

CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.
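
A minimal sketch; the group name, key ARN, and tags are placeholders, and kms_key_id and tags are optional:

cloudwatchlogs.create_log_group(
  log_group_name: "my-application/production",                          # placeholder
  kms_key_id: "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",  # optional, placeholder
  tags: { "team" => "platform" }                                        # optional, placeholder
)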

#create_log_stream(options = {}) ⇒ Struct

Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.

There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on CreateLogStream operations, after which transactions are throttled.

You must use the following guidelines when naming a log stream:

#delete_destination(options = {}) ⇒ Struct

Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.

#delete_log_group(options = {}) ⇒ Struct

Deletes the specified log group and permanently deletes all the archived log events associated with the log group.

#delete_log_stream(options = {}) ⇒ Struct

Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.

#delete_metric_filter(options = {}) ⇒ Struct

Deletes the specified metric filter.

#delete_query_definition(options = {}) ⇒ Types::DeleteQueryDefinitionResponse

Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.

Each DeleteQueryDefinition operation can delete one query definition.

You must have the logs:DeleteQueryDefinition permission to be able to perform this operation.

#delete_resource_policy(options = {}) ⇒ Struct

Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.

#delete_retention_policy(options = {}) ⇒ Struct

Deletes the specified retention policy.

Log events do not expire if they belong to log groups without a retention policy.

#delete_subscription_filter(options = {}) ⇒ Struct

Deletes the specified subscription filter.

#describe_export_tasks(options = {}) ⇒ Types::DescribeExportTasksResponse

Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.

#describe_log_groups(options = {}) ⇒ Types::DescribeLogGroupsResponse

Lists the specified log groups. You can list all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
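
For example, you might page through all log groups that share a prefix (the prefix is a placeholder) by following next_token until it is absent:

params = { log_group_name_prefix: "my-application/" }  # placeholder prefix
loop do
  resp = cloudwatchlogs.describe_log_groups(params)
  resp.log_groups.each do |group|
    puts "#{group.log_group_name} (retention: #{group.retention_in_days.inspect} days)"
  end
  break if resp.next_token.nil?
  params[:next_token] = resp.next_token
end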

#describe_log_streams(options = {}) ⇒ Types::DescribeLogStreamsResponse

Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.

This operation has a limit of five transactions per second, after which transactions are throttled.
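
For example, to list the most recently active streams in a group (the group name is a placeholder):

resp = cloudwatchlogs.describe_log_streams(
  log_group_name: "my-log-group",  # placeholder
  order_by: "LastEventTime",       # or "LogStreamName"
  descending: true,
  limit: 10
)

resp.log_streams.each do |stream|
  puts "#{stream.log_stream_name} last event at #{stream.last_event_timestamp}"
end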

#describe_metric_filters(options = {}) ⇒ Types::DescribeMetricFiltersResponse

Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.

#describe_queries(options = {}) ⇒ Types::DescribeQueriesResponse

Returns a list of CloudWatch Logs Insights queries that are scheduled, executing, or have been executed recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status.

#describe_query_definitions(options = {}) ⇒ Types::DescribeQueryDefinitionsResponse

This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions.

You can use the queryDefinitionNamePrefix parameter to limit the results to only the query definitions that have names that start with a certain string.

#describe_subscription_filters(options = {}) ⇒ Types::DescribeSubscriptionFiltersResponse

Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.

#disassociate_kms_key(options = {}) ⇒ Struct

Disassociates the associated AWS Key Management Service (AWS KMS) customer master key (CMK) from the specified log group.

After the AWS KMS CMK is disassociated from the log group, AWS CloudWatch Logs stops encrypting newly ingested data for the log group. All previously ingested data remains encrypted, and AWS CloudWatch Logs requires permissions for the CMK whenever the encrypted data is requested.

Note that it can take up to 5 minutes for this operation to take effect.

#filter_log_events(options = {}) ⇒ Types::FilterLogEventsResponse

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the time range that you specify. If the results include a token, then there are more log events available, and you can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.

The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.
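
A sketch that follows next_token to collect every matching event in a time range; the group name and filter pattern are placeholders, and the timestamps are milliseconds since the epoch:

params = {
  log_group_name: "my-log-group",  # placeholder
  filter_pattern: "ERROR",         # placeholder filter pattern
  start_time: 1609459200000,       # ms since epoch
  end_time: 1609545600000          # ms since epoch
}

loop do
  resp = cloudwatchlogs.filter_log_events(params)
  resp.events.each { |event| puts "#{event.log_stream_name}: #{event.message}" }
  break if resp.next_token.nil?
  params[:next_token] = resp.next_token
end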

#get_log_events(options = {}) ⇒ Types::GetLogEventsResponse

Lists log events from the specified log stream. You can list all of the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call. This operation can return empty results while there are more log events available through the token.
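
For example, you might read a stream from the beginning and page forward until the forward token stops changing; the group and stream names are placeholders:

params = {
  log_group_name: "my-log-group",    # placeholder
  log_stream_name: "my-log-stream",  # placeholder
  start_from_head: true
}

prev_token = nil
loop do
  resp = cloudwatchlogs.get_log_events(params)
  resp.events.each { |event| puts event.message }
  # the end of the stream is reached when the forward token stops changing
  break if resp.next_forward_token == prev_token
  prev_token = resp.next_forward_token
  params[:next_token] = prev_token
end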

#get_log_group_fields(options = {}) ⇒ Types::GetLogGroupFieldsResponse

Returns a list of the fields that are included in log events in the specified log group, along with the percentage of log events that contain each field. The search is limited to a time period that you specify.

In the results, fields that start with @ are fields generated by CloudWatch Logs. For example, @timestamp is the timestamp of each log event. For more information about the fields that are generated by CloudWatch Logs, see Supported Logs and Discovered Fields.

The response results are sorted by the frequency percentage, starting with the highest percentage.

#get_log_record(options = {}) ⇒ Types::GetLogRecordResponse

Retrieves all of the fields and values of a single log event. All fields are retrieved, even if the original query that produced the logRecordPointer retrieved only a subset of fields. Fields are returned as field name/field value pairs.

The full unparsed log event is returned within @message.

#get_query_results(options = {}) ⇒ Types::GetQueryResultsResponse

Returns the results from the specified query.

Only the fields requested in the query are returned, along with a @ptr field, which is the identifier for the log record. You can use the value of @ptr in a GetLogRecord operation to get the full log record.

GetQueryResults does not start a query execution. To run a query, use StartQuery.

If the value of the Status field in the output is Running, this operation returns only partial results. If you see a value of Scheduled or Running for the status, you can retry the operation later to see the final results.

#put_destination(options = {}) ⇒ Types::PutDestinationResponse

Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.

A destination encapsulates a physical resource (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.

Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

To perform a PutDestination operation, you must also have the iam:PassRole permission.

#put_destination_policy(options = {}) ⇒ Struct

Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.
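
A sketch of the two-step flow in the destination account; the ARNs, names, and the sending account ID are placeholders:

require 'json'

resp = cloudwatchlogs.put_destination(
  destination_name: "my-destination",                                     # placeholder
  target_arn: "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",  # placeholder Kinesis stream
  role_arn: "arn:aws:iam::123456789012:role/CWLtoKinesisRole"             # placeholder IAM role
)

policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Principal" => { "AWS" => "999999999999" },  # placeholder sender account ID
    "Action" => "logs:PutSubscriptionFilter",
    "Resource" => resp.destination.arn
  }]
}

cloudwatchlogs.put_destination_policy(
  destination_name: "my-destination",
  access_policy: policy.to_json
)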

#put_log_events(options = {}) ⇒ Types::PutLogEventsResponse

Uploads a batch of log events to the specified log stream.

You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token in the expectedSequenceToken field from InvalidSequenceTokenException. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls might be successful or one might be rejected.

The batch of events must satisfy the following constraints:

If a call to PutLogEvents returns "UnrecognizedClientException", the most likely cause is an invalid AWS access key ID or secret key.
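
A sketch of a typical flow; all names and the sample message are placeholders. It looks up the stream's current sequence token with DescribeLogStreams, uploads a batch, and keeps the returned next_sequence_token for the next call. Event timestamps are milliseconds since the epoch:

group  = "my-log-group"   # placeholder
stream = "my-log-stream"  # placeholder

# the upload_sequence_token is nil for a newly created stream
streams = cloudwatchlogs.describe_log_streams(
  log_group_name: group,
  log_stream_name_prefix: stream
)
token = streams.log_streams.first.upload_sequence_token

put_params = {
  log_group_name: group,
  log_stream_name: stream,
  log_events: [
    { timestamp: (Time.now.to_f * 1000).to_i, message: "example event" }  # placeholder event
  ]
}
put_params[:sequence_token] = token if token  # omit for a brand-new stream

resp = cloudwatchlogs.put_log_events(put_params)

# reuse this token on the next put_log_events call
next_token = resp.next_sequence_token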

#put_metric_filter(options = {}) ⇒ Struct

Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents.

The maximum number of metric filters that can be associated with a log group is 100.
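
For example, you might count occurrences of a pattern as a custom metric; the names, namespace, and pattern are placeholders:

cloudwatchlogs.put_metric_filter(
  log_group_name: "my-log-group",  # placeholder
  filter_name: "error-count",      # placeholder
  filter_pattern: "ERROR",         # placeholder filter pattern
  metric_transformations: [
    {
      metric_name: "ErrorCount",   # placeholder
      metric_namespace: "MyApp",   # placeholder
      metric_value: "1"            # emit 1 per matching event
    }
  ]
)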

#put_query_definition(options = {}) ⇒ Types::PutQueryDefinitionResponse

Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.

To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, if you update a current query definition that includes log groups, and you don't specify the logGroupNames parameter in your update operation, the query definition changes to contain no log groups.

You must have the logs:PutQueryDefinition permission to be able to perform this operation.

#put_resource_policy(options = {}) ⇒ Types::PutResourcePolicyResponse

Creates or updates a resource policy allowing other AWS services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per AWS Region.
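
A minimal sketch of a policy that allows Route 53 to deliver query logs; the policy name, account ID, and log group ARN are placeholders:

require 'json'

policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Principal" => { "Service" => "route53.amazonaws.com" },
    "Action" => "logs:PutLogEvents",
    "Resource" => "arn:aws:logs:us-east-1:123456789012:log-group:/aws/route53/*"  # placeholder ARN
  }]
}

cloudwatchlogs.put_resource_policy(
  policy_name: "route53-query-logging",  # placeholder
  policy_document: policy.to_json
)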

#put_retention_policy(options = {}) ⇒ Struct

Sets the retention of the specified log group. A retention policy allows you to configure the number of days for which to retain log events in the specified log group.
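
For example, to keep events for 30 days (the group name is a placeholder; retention_in_days accepts only a fixed set of values such as 1, 7, 30, 90, and 365):

cloudwatchlogs.put_retention_policy(
  log_group_name: "my-log-group",  # placeholder
  retention_in_days: 30
)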

#put_subscription_filter(options = {}) ⇒ Struct

Creates or updates a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the gzip format.

The following destinations are supported for subscription filters:

There can only be one subscription filter associated with a log group. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call fails because you cannot associate a second filter with a log group.

To perform a PutSubscriptionFilter operation, you must also have the iam:PassRole permission.
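
A sketch that subscribes a log group to a Kinesis stream; the names and ARNs are placeholders, and role_arn is needed for Kinesis and Firehose destinations:

cloudwatchlogs.put_subscription_filter(
  log_group_name: "my-log-group",                                              # placeholder
  filter_name: "ship-errors",                                                  # placeholder
  filter_pattern: "ERROR",                                                     # placeholder; "" matches all events
  destination_arn: "arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",  # placeholder
  role_arn: "arn:aws:iam::123456789012:role/CWLtoKinesisRole"                  # placeholder
)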

#start_query(options = {}) ⇒ Types::StartQueryResponse

Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group and time range to query and the query string to use.

For more information, see CloudWatch Logs Insights Query Syntax.

Queries time out after 15 minutes of execution. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
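
A sketch that starts a query and polls for its results; the group name and query string are placeholders, and start_time and end_time are seconds since the epoch:

resp = cloudwatchlogs.start_query(
  log_group_name: "my-log-group",                         # placeholder
  start_time: Time.now.to_i - 3600,                       # one hour ago, seconds since epoch
  end_time: Time.now.to_i,
  query_string: "fields @timestamp, @message | limit 20"  # placeholder query
)
query_id = resp.query_id

# poll until the query leaves the Scheduled/Running states
results = nil
loop do
  results = cloudwatchlogs.get_query_results(query_id: query_id)
  break unless %w[Scheduled Running].include?(results.status)
  sleep 1
end

results.results.each do |row|
  puts row.map { |field| "#{field.field}=#{field.value}" }.join(" ")
end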

#stop_query(options = {}) ⇒ Types::StopQueryResponse

Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.

#tag_log_group(options = {}) ⇒ Struct

Adds or updates the specified tags for the specified log group.

#test_metric_filter(options = {}) ⇒ Types::TestMetricFilterResponse

Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.
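
For example (the pattern and sample messages are placeholders):

resp = cloudwatchlogs.test_metric_filter(
  filter_pattern: "ERROR",                          # placeholder
  log_event_messages: [
    "2021-01-01T00:00:00Z ERROR something failed",  # placeholder samples
    "2021-01-01T00:00:01Z INFO all good"
  ]
)
puts "#{resp.matches.size} of 2 sample messages matched"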

#untag_log_group(options = {}) ⇒ Struct

Removes the specified tags from the specified log group.

To list the tags for a log group, use ListTagsLogGroup. To add tags, use TagLogGroup.

#wait_until(waiter_name, params = {}) {|waiter| ... } ⇒ Boolean

A waiter polls an API operation until a resource enters a desired state.

Basic Usage

Waiters will poll until they are successful, until they fail by entering a terminal state, or until a maximum number of attempts are made.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You configure waiters by passing a block to #wait_until:

# poll for ~25 seconds
client.wait_until(...) do |w|
  w.max_attempts = 5
  w.delay = 5
end

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(...) do |w|

  # disable max attempts
  w.max_attempts = nil

  # poll for 1 hour, instead of a number of attempts
  w.before_wait do |attempts, response|
    throw :failure if Time.now - started_at > 3600
  end

end

Handling Errors

When a waiter is successful, it returns true. When a waiter fails, it raises an error. All errors raised extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

#waiter_names ⇒ Array<Symbol>

Returns the list of supported waiters. The following table lists the supported waiters and the client method they call:

Waiter Name | Client Method | Default Delay | Default Max Attempts
