Optimize and monitor Google Cloud Observability costs

This page describes how you can optimize and monitor your Google Cloud Observability costs. For pricing information, see Google Cloud Observability pricing.

Optimize

This section provides guidance about how to reduce or optimize costs associated with Cloud Logging, Cloud Trace, and Google Cloud Managed Service for Prometheus.

Reduce your logs storage

To reduce your Cloud Logging storage costs, configure exclusion filters on your log sinks to exclude certain logs from being routed. Exclusion filters can remove all log entries that match the filter, or they can remove only a percentage of the logs. When a log entry matches an exclusion filter of a sink, the sink doesn't route the log entry to the destination. Excluded log entries don't count against your storage allotment. For instructions on setting exclusion filters, see Logs exclusions.

Another way to reduce your Cloud Logging storage costs is to route logs out of Cloud Logging to a supported destination. Cloud Logging doesn't charge to route logs to supported destinations; however, you might be charged when the logs are received by the destination.

For information about routing logs out of Cloud Logging, see Route logs to supported destinations.

Optimize costs for Managed Service for Prometheus

Pricing for Managed Service for Prometheus is designed to be predictable and controllable. Because you are charged on a per-sample basis, you can control costs by reducing the number of samples you ingest, for example by lengthening the sampling period or by filtering and aggregating away time series that you don't need.

Queries, including alert queries

All queries issued by the user, including queries issued when Prometheus recording rules are run, are charged through Cloud Monitoring API calls. For the current rate, see the Cloud Monitoring sections of the Google Cloud Observability pricing page.

Reduce your trace usage

To control Trace span ingestion volume, you can manage your trace sampling rate to balance how many traces you need for performance analysis with your cost tolerance.

For high-traffic systems, most customers can sample at 1 in 1,000 transactions, or even 1 in 10,000 transactions, and still have enough information for performance analysis.

Sampling rate is configured with the Cloud Trace client libraries.
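
As a concrete illustration, the following sketch configures a 1-in-1,000 sampling rate with the OpenTelemetry SDK for Python. It assumes you instrument with OpenTelemetry and export spans to Cloud Trace through the opentelemetry-exporter-gcp-trace package; if you use a different Cloud Trace client library, the sampler name and setup will differ.

```python
# Minimal sketch: probability sampling with OpenTelemetry, exporting to Cloud Trace.
# Assumes the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages.
from opentelemetry import trace
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 1 in 1,000 traces; ParentBased ensures child spans follow the
# sampling decision made at the root of the trace.
sampler = ParentBased(TraceIdRatioBased(1 / 1000))

provider = TracerProvider(sampler=sampler)
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("sampling-example")
with tracer.start_as_current_span("handle-request"):
    pass  # application work goes here
```

Raising the ratio gives you more traces for debugging at a higher cost; lowering it reduces span ingestion proportionally.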

Reduce your alerting bill

Starting no sooner than May 1, 2026, Cloud Monitoring will begin charging for the use of alerting policies. For information about the pricing model, see Google Cloud Observability pricing.

This section describes strategies you can use to reduce costs for alerting.

Consolidate alerting policies to operate over more resources

Because of the $0.10-per-condition cost, it is more cost effective to use one alerting policy to monitor multiple resources than it is to use one alerting policy to monitor each resource. Consider the following examples:

Example 1

You monitor 100 resources with one alerting policy whose single condition covers all of them. At $0.10 per condition, this policy costs $0.10 per month.

Example 2

You monitor the same 100 resources with 100 alerting policies, each containing one condition that monitors a single resource. At $0.10 per condition, these policies cost $10.00 per month.

In both examples, you monitor the same number of resources and evaluate the same time series, so the charges for time series returned are identical. However, Example 2 uses 100 alerting policies, while Example 1 uses only one alerting policy. As a result, Example 1 is almost $10 cheaper per month.

Aggregate to only the level that you need to alert on

Aggregating at a finer granularity results in higher costs than aggregating at a coarser granularity. For example, aggregating to the Google Cloud project level is cheaper than aggregating to the cluster level, and aggregating to the cluster level is cheaper than aggregating to the cluster and namespace level.

Consider the following examples:

Example 1

One alerting policy evaluates a metric emitted by a fleet of VMs and aggregates the data to the VM level, so the condition returns one time series per VM.

Example 4

One alerting policy evaluates the same underlying data as in Example 1, but it aggregates the data to the service level, so the condition returns far fewer time series.

Example 5

One alerting policy evaluates data whose metric cardinality is 10,000 times higher than in Example 1, but it aggregates to the VM level, so the condition still returns one time series per VM.

Compare Example 1 to Example 4: Both examples operate over the same underlying data and have a single alerting policy. However, because the alerting policy in Example 4 aggregates to the service, it is less expensive than the alerting policy in Example 1, which aggregates more granularly to the VM.

In addition, compare Example 1 to Example 5: In this case, the metric cardinality in Example 5 is 10,000 times higher than the metric cardinality in Example 1. However, because the alerting policy in Example 1 and in Example 5 both aggregate to the VM, and because the number of VMs is the same in both examples, the examples are equivalent in price.

When you configure your alerting policies, choose aggregation levels that work best for your use case. For example, if you care about alerting on CPU utilization, then you might want to aggregate to the VM and CPU level. If you care about alerting on latency by endpoint, then you might want to aggregate to the endpoint level.
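
The following sketch shows what two different aggregation levels look like, assuming you define your conditions with the google-cloud-monitoring Python client rather than the console; the group-by fields shown are the standard resource labels for Compute Engine metrics.

```python
# Sketch: per-VM versus project-level aggregation for a condition's query.
import datetime
from google.cloud import monitoring_v3

# Per-VM aggregation: the condition returns one time series per VM
# (100 VMs -> 100 series per execution period).
per_vm = monitoring_v3.Aggregation(
    alignment_period=datetime.timedelta(minutes=5),
    per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
    cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
    group_by_fields=["resource.labels.instance_id"],
)

# Project-level aggregation: the condition returns a single time series,
# regardless of how many VMs or label combinations the metric has.
per_project = monitoring_v3.Aggregation(
    alignment_period=datetime.timedelta(minutes=5),
    per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
    cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_MEAN,
    group_by_fields=["resource.labels.project_id"],
)
```

Either aggregation can be placed in a threshold condition's list of aggregations; the coarser one returns fewer time series per execution period and therefore costs less.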

Don't alert on raw, unaggregated data

Monitoring uses a dimensional metrics system, where a metric's total cardinality equals the number of resources monitored multiplied by the number of label-value combinations on that metric. For example, if you have 100 VMs emitting a metric, and that metric has two labels with 10 values each, then your total cardinality is 100 * 10 * 10 = 10,000.

As a result of how cardinality scales, alerting on raw data can be extremely expensive. In the previous example, you have 10,000 time series returned for each execution period. However, if you aggregate to the VM, then you have only 100 time series returned per execution period, regardless of the label cardinality of the underlying data.

Alerting on raw data also puts you at risk of an increased number of time series when your metrics receive new label values. In the previous example, if one of the labels gains an eleventh value, then your total cardinality increases to 100 * 11 * 10 = 11,000 time series. In this case, the number of returned time series increases by 1,000 each execution period even though your alerting policy is unchanged. If you instead aggregate to the VM, then, despite the increased underlying cardinality, you still have only 100 time series returned.
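
The arithmetic is easy to check; the numbers below are the ones used in the example above.

```python
# Cardinality = resources x label-value combinations.
vms = 100
label_a_values = 10
label_b_values = 10

raw_series = vms * label_a_values * label_b_values              # 10,000 per execution period
after_new_value = vms * (label_a_values + 1) * label_b_values   # 11,000 once a label gains a value

# Aggregating to the VM collapses the label dimensions entirely.
aggregated_series = vms                                          # 100, regardless of label growth

print(raw_series, after_new_value, aggregated_series)
```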

Configure your conditions to evaluate only data that's necessary for your alerting needs. If you wouldn't take action to fix something, then exclude it from your alerting policies. For example, you probably don't need to alert on an intern's development VM.

To reduce unnecessary incidents and costs, you can filter out time series that aren't important. You can use Google Cloud metadata labels to tag assets with categories and then filter out the unneeded metadata categories.

Use top-stream operators to reduce the number of time series returned

If your condition uses a PromQL query, then you can use a top-streams operator, such as topk, to keep only the time series with the highest values (or bottomk for the lowest).

For example, a topk(5, metric) clause in a PromQL query limits the number of time series returned to five in each execution period.

Limiting the condition to a top number of time series can cause you to miss data or to open faulty incidents, for example when a series that should trigger an alert falls just outside the top N.

To mitigate such risks, choose large values for N and use top-streams operators only in alerting policies that evaluate many time series, such as policies that monitor individual Kubernetes containers.

Increase the length of the execution period (PromQL only)

If your condition uses a PromQL query, then you can modify the length of your execution period by setting the evaluationInterval field in the condition.

Longer evaluation intervals result in fewer time series returned per month; for example, a condition query with a 15-second interval runs twice as often as a query with a 30-second interval, and a query with a 1-minute interval runs half as often as a query with a 30-second interval.
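
As an illustration, the following sketch creates a PromQL-based alerting policy that combines the techniques from this section and the previous one: a topk clause to cap the number of time series returned, and a 60-second evaluation interval. It assumes a recent version of the google-cloud-monitoring Python client that exposes the PromQL condition type, and it uses a generic Kubernetes CPU metric purely as an example.

```python
# Sketch: a PromQL condition with topk and a lengthened evaluation interval.
import datetime
from google.cloud import monitoring_v3

project_id = "my-project-id"  # hypothetical project ID
client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Top CPU consumers above limit",
    condition_prometheus_query_language=monitoring_v3.AlertPolicy.Condition.PrometheusQueryLanguageCondition(
        # topk keeps only the five highest-value series in each evaluation.
        query="topk(5, rate(container_cpu_usage_seconds_total[5m])) > 0.9",
        duration=datetime.timedelta(minutes=5),
        # Evaluate every 60 seconds instead of the default 30 seconds, which
        # halves the number of time series returned per month.
        evaluation_interval=datetime.timedelta(seconds=60),
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="High container CPU (PromQL)",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)

client.create_alert_policy(name=f"projects/{project_id}", alert_policy=policy)
```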

Note: Other alerting policy conditions are fixed to a 30-second execution period.

Monitor

This section describes how to monitor your costs by creating alerting policies. An alerting policy can monitor metric data and notify you when that data crosses a threshold.

Monitor monthly log bytes ingested

To create an alerting policy that triggers when the number of log bytes written to your log buckets exceeds your user-defined limit for Cloud Logging, use the following settings.
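
If you prefer to define this policy with the Cloud Monitoring API instead of the console, the following sketch shows one way to do it with the google-cloud-monitoring Python client. It assumes that the Monthly log bytes ingested metric corresponds to the metric type logging.googleapis.com/billing/monthly_bytes_ingested, and the threshold value is a placeholder that you replace with your own limit. The console equivalents are in the tables that follow the steps.

```python
# Sketch: alerting on monthly log bytes ingested with the Cloud Monitoring API.
import datetime
from google.cloud import monitoring_v3

project_id = "my-project-id"  # hypothetical project ID
client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Monthly log bytes ingested",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Log bytes above monthly budget",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                # Assumed metric type for "Monthly log bytes ingested".
                filter=(
                    'metric.type = "logging.googleapis.com/billing/monthly_bytes_ingested"'
                    ' AND resource.type = "global"'
                ),
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period=datetime.timedelta(minutes=60),  # 60 m rolling window
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MAX,
                        cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
                    )
                ],
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=5 * 1024**4,              # placeholder: 5 TiB per month
                duration=datetime.timedelta(minutes=30),  # 30-minute retest window
            ),
        )
    ],
)

created = client.create_alert_policy(name=f"projects/{project_id}", alert_policy=policy)
print(created.name)
```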

To create an alerting policy, do the following:

  1. In the Google Cloud console, go to the Alerting page:

    Go to Alerting

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. If you haven't created your notification channels and if you want to be notified, then click Edit Notification Channels and add your notification channels. Return to the Alerting page after you add your channels.
  3. From the Alerting page, select Create policy.
  4. To select the resource, metric, and filters, expand the Select a metric menu and then use the values in the New condition table:
    1. Optional: To limit the menu to relevant entries, enter the resource or metric name in the filter bar.
    2. Select a Resource type. For example, select VM instance.
    3. Select a Metric category. For example, select instance.
    4. Select a Metric. For example, select CPU Utilization.
    5. Select Apply.
  5. Click Next and then configure the alerting policy trigger. To complete these fields, use the values in the Configure alert trigger table.
  6. Click Next.
  7. Optional: To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  8. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  9. Optional: Click Documentation, and then add any information that you want included in a notification message.
  10. Click Alert name and enter a name for the alerting policy.
  11. Click Create Policy.
New condition

  Resource and Metric: In the Resources menu, select Global. In the Metric categories menu, select Logs-based metric. In the Metrics menu, select Monthly log bytes ingested.
  Filter: None.
  Across time series > Time series aggregation: sum
  Rolling window: 60 m
  Rolling window function: max

Configure alert trigger

  Condition type: Threshold
  Alert trigger: Any time series violates
  Threshold position: Above threshold
  Threshold value: You determine the acceptable value.
  Retest window: Minimum acceptable value is 30 minutes.

Monitor total metrics ingested

You can't create an alert based on the monthly metrics ingested. However, you can create an alert for your Cloud Monitoring costs. For information, see Configure a billing alert.

Monitor monthly trace spans ingested

To create an alerting policy that triggers when your monthly Cloud Trace spans ingested exceeds a user-defined limit, use the following settings.
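
Before you pick a threshold, it can help to look at how many spans you currently ingest. The following sketch reads the metric with the google-cloud-monitoring Python client; it assumes the Monthly trace spans ingested metric corresponds to the metric type cloudtrace.googleapis.com/billing/monthly_spans_ingested.

```python
# Sketch: reading the current monthly span-ingestion count via the Monitoring API.
import time
from google.cloud import monitoring_v3

project_id = "my-project-id"  # hypothetical project ID
client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},  # last hour of samples
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        # Assumed metric type for "Monthly trace spans ingested".
        "filter": 'metric.type = "cloudtrace.googleapis.com/billing/monthly_spans_ingested"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        print(point.interval.end_time, point.value)
```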

To create an alerting policy, do the following:

  1. In the Google Cloud console, go to the Alerting page:

    Go to Alerting

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. If you haven't created your notification channels and if you want to be notified, then click Edit Notification Channels and add your notification channels. Return to the Alerting page after you add your channels.
  3. From the Alerting page, select Create policy.
  4. To select the resource, metric, and filters, expand the Select a metric menu and then use the values in the New condition table:
    1. Optional: To limit the menu to relevant entries, enter the resource or metric name in the filter bar.
    2. Select a Resource type. For example, select VM instance.
    3. Select a Metric category. For example, select instance.
    4. Select a Metric. For example, select CPU Utilization.
    5. Select Apply.
  5. Click Next and then configure the alerting policy trigger. To complete these fields, use the values in the Configure alert trigger table.
  6. Click Next.
  7. Optional: To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  8. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  9. Optional: Click Documentation, and then add any information that you want included in a notification message.
  10. Click Alert name and enter a name for the alerting policy.
  11. Click Create Policy.
New condition

  Resource and Metric: In the Resources menu, select Global. In the Metric categories menu, select Billing. In the Metrics menu, select Monthly trace spans ingested.
  Filter: None.
  Across time series > Time series aggregation: sum
  Rolling window: 60 m
  Rolling window function: max

Configure alert trigger

  Condition type: Threshold
  Alert trigger: Any time series violates
  Threshold position: Above threshold
  Threshold value: You determine the acceptable value.
  Retest window: Minimum acceptable value is 30 minutes.

Configure a billing alert

To be notified if your billable or forecasted charges exceed a budget, create an alert by using the Budgets and alerts page of the Google Cloud console:

  1. In the Google Cloud console, go to the Billing page:

    Go to Billing

    You can also find this page by using the search bar.

    If you have more than one Cloud Billing account, then select the billing account that you want to monitor.

  2. In the Billing navigation menu, select Budgets & alerts.
  3. Click Create budget.
  4. Complete the budget dialog. In this dialog, you select Google Cloud projects and products, and then you create a budget for that combination. By default, you are notified when you reach 50%, 90%, and 100% of the budget. For complete documentation, see Set budgets and budget alerts. If you prefer to create budgets programmatically, see the sketch after these steps.
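
If you'd rather create the budget programmatically, the following sketch uses the google-cloud-billing-budgets Python client. Treat the import path, the billing account ID, and the project number as placeholders; the threshold rules mirror the console defaults of notifying at 50%, 90%, and 100% of the budget.

```python
# Sketch: creating a budget with threshold alerts via the Budget API.
from google.cloud.billing import budgets_v1
from google.type import money_pb2

client = budgets_v1.BudgetServiceClient()

budget = budgets_v1.Budget(
    display_name="Observability spend",
    # Scope the budget to one project (hypothetical project number).
    budget_filter=budgets_v1.Filter(projects=["projects/123456789"]),
    amount=budgets_v1.BudgetAmount(
        specified_amount=money_pb2.Money(currency_code="USD", units=100)
    ),
    # Notify at 50%, 90%, and 100% of the budget, matching the console defaults.
    threshold_rules=[
        budgets_v1.ThresholdRule(threshold_percent=0.5),
        budgets_v1.ThresholdRule(threshold_percent=0.9),
        budgets_v1.ThresholdRule(threshold_percent=1.0),
    ],
)

created = client.create_budget(
    parent="billingAccounts/000000-000000-000000",  # placeholder billing account
    budget=budget,
)
print(created.name)
```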
