This page explains how to use GKE usage metering to understand the usage profiles of Google Kubernetes Engine (GKE) Standard clusters, and tie usage to individual teams or business units within your organization. GKE usage metering has no impact on billing for your project; it lets you understand resource usage at a granular level.
Note: We recommend that you use GKE cost allocation instead of GKE usage metering. GKE cost allocation lets you distribute the costs of a cluster to its users.

Overview

GKE usage metering tracks information about the resource requests and actual resource usage of your cluster's workloads. Currently, GKE usage metering tracks information about CPU, GPU, TPU, memory, storage, and, optionally, network egress. You can differentiate resource usage by using Kubernetes namespaces, labels, or a combination of both.
Data is stored in BigQuery, where you can query it directly or export it for analysis with external tools such as Looker Studio.
GKE usage metering is helpful for scenarios such as the following:
Before you start, make sure that you have performed the following tasks. If you previously installed the gcloud CLI, get the latest version by running:

gcloud components update

Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following:

One of [--zone, --region] must be supplied: Please specify location

You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.

You can use the sample BigQuery queries and Looker Studio template to join GKE usage metering data with exported Google Cloud billing data in BigQuery. This lets you estimate a cost breakdown by cluster, namespace, and labels.
GKE usage metering data is purely advisory, and doesn't affect your Google Cloud bill. For billing data, your Google Cloud billing invoice is the sole source of truth.
The following limitations apply:
StorageClass objects are not supported.

Before you use GKE usage metering, you must meet the following prerequisites:
A sufficiently recent version of the gcloud command is required. Use gcloud --version to check.

To enable GKE usage metering, you first create a BigQuery dataset for a single cluster, for multiple clusters in the project, or for the entire project. For more information about choosing a mapping between datasets and clusters, see Choose one or more BigQuery datasets.
Next, you enable GKE usage metering when creating a new cluster or by modifying an existing cluster.
Optionally, you can create a Looker Studio dashboard to visualize the resource usage of your clusters.
Create the BigQuery dataset

To use GKE usage metering for clusters in your Google Cloud project, you first create the BigQuery dataset, and then configure clusters to use it. You can use a single BigQuery dataset to store information about resource usage for multiple clusters in the same project.
Visit Creating Datasets for more details. Set the Default table expiration
for the dataset to Never
so that the table doesn't expire. If a table expires, it is recreated automatically as an empty table.
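The console steps can also be sketched with the bq CLI. The project and dataset names below are placeholders, and --default_table_expiration 0 corresponds to the Never setting; the command is composed into a variable so you can review it before running.

```shell
# Hypothetical project and dataset names -- replace with your own.
PROJECT_ID="my-project"
DATASET_ID="gke_usage_metering"

# --default_table_expiration 0 means tables in the dataset never expire.
CMD="bq mk --dataset --default_table_expiration 0 ${PROJECT_ID}:${DATASET_ID}"
echo "${CMD}"

# Review the command above, then run it:
# eval "${CMD}"
```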
Ensure that the dataset is accessible to the GKE service agent (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) with the Kubernetes Engine Service Agent role.

Warning: If you delete a BigQuery dataset or table that a cluster is using to log GKE usage metering data, Cloud Logging shows transient warnings such as Failed to upload a record to BigQuery. To resolve the warning, re-create the dataset or configure the cluster to use a different dataset. Your historical data will be lost.

Enable GKE usage metering for a cluster
You can enable GKE usage metering on a new or existing cluster by using either the gcloud
command or the Google Cloud console.
Enabling GKE usage metering also enables resource consumption metering by default. To selectively disable resource consumption metering while continuing to track resource requests, see the specific instructions for enabling GKE usage metering using the gcloud
command, in this topic.
Network egress metering is disabled by default. To enable it, see the caveats and instructions in Optional: Enabling network egress metering in this topic.
Create a new cluster

You can create a cluster by using the gcloud CLI or the Google Cloud console.
gcloud

To create a cluster with GKE usage metering enabled, run the following command:
gcloud container clusters create CLUSTER_NAME \
--resource-usage-bigquery-dataset RESOURCE_USAGE_DATASET
Replace the following:

CLUSTER_NAME: the name of your GKE cluster.
RESOURCE_USAGE_DATASET: the name of your BigQuery dataset.

Resource consumption metering is enabled by default. To disable it and only track resource requests, add the flag --no-enable-resource-consumption-metering to the preceding command. You also need to modify the example queries in the rest of this topic so that they do not query for resource consumption.
If needed, the required tables are created within the BigQuery dataset when the cluster starts.
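As a sketch, the create command with resource consumption metering disabled would look like the following. The cluster and dataset names are placeholders, and the command is echoed rather than executed so you can review it first.

```shell
CLUSTER_NAME="my-cluster"            # placeholder cluster name
RESOURCE_USAGE_DATASET="gke_usage"   # placeholder BigQuery dataset name

# Create the cluster with usage metering on, but track resource
# requests only (no resource consumption records).
CMD="gcloud container clusters create ${CLUSTER_NAME} \
  --resource-usage-bigquery-dataset ${RESOURCE_USAGE_DATASET} \
  --no-enable-resource-consumption-metering"
echo "${CMD}"
# eval "${CMD}"   # uncomment to run
```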
Console

To create a cluster with GKE usage metering enabled:

Note: When using the Google Cloud console, it is not possible to enable GKE usage metering while selectively disabling resource consumption metering. If you need to do this, use the gcloud instructions instead.
In the Google Cloud console, go to the Create a Kubernetes cluster page.
From the navigation pane, under Cluster, click Features.
Select Enable GKE usage metering.
Enter the name of your BigQuery dataset.
Optional: select Enable network egress metering after reviewing the caveats and instructions in Optional: Enabling network egress metering.
Continue configuring your cluster, then click Create.
Update an existing cluster

gcloud

To enable GKE usage metering on an existing cluster, run the following command:
gcloud container clusters update CLUSTER_NAME \
--resource-usage-bigquery-dataset RESOURCE_USAGE_DATASET
Resource consumption metering is enabled by default. To disable it and only track resource requests, add the flag --no-enable-resource-consumption-metering to the preceding command. You also need to modify the example queries in the rest of this topic so that they do not query for resource consumption.
You can also change the dataset an existing cluster uses to store its usage metering data by changing the value of the --resource-usage-bigquery-dataset
flag.
If needed, a table is created within the BigQuery dataset when the cluster is updated.
Console

Note: When using the Google Cloud console, it is not possible to enable GKE usage metering while selectively disabling resource consumption metering. If you need to do this, use the gcloud instructions instead.
Go to the Google Kubernetes Engine page in Google Cloud console.
Next to the cluster you want to modify, click more_vert Actions, then click edit Edit.
Under Features, click edit Edit next to GKE usage metering.
Select Enable GKE usage metering.
Enter the name of the BigQuery dataset.
Optional: select Enable network egress metering after reviewing the caveats and instructions in Optional: Enabling network egress metering.
Click Save Changes.
Optional: Enabling network egress metering

By default, network egress data is not collected or exported. Measuring network egress requires a network metering agent (NMA) running on each node. The NMA runs as a privileged Pod, consumes some resources on the node (CPU, memory, and disk space), and enables the nf_conntrack_acct sysctl flag on the kernel (for connection tracking flow accounting).
If you are comfortable with these caveats, you can enable network egress tracking for use with GKE usage metering. To enable network egress tracking, include the --enable-network-egress-metering
option when creating or updating your cluster, or select Enable network egress metering when enabling GKE usage metering in the Google Cloud console.
To disable network egress metering, add the flag --no-enable-network-egress-metering
when updating your cluster with the command line. Alternatively, you can clear Enable network egress metering in the GKE usage metering section of the cluster in the Google Cloud console.
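A sketch of toggling network egress metering on an existing cluster (the cluster name is a placeholder); the commands are echoed for review rather than executed.

```shell
CLUSTER_NAME="my-cluster"   # placeholder cluster name

# Enable the network metering agent (see the caveats above).
ENABLE_CMD="gcloud container clusters update ${CLUSTER_NAME} --enable-network-egress-metering"
# Disable it again later if the NMA's overhead is not worth the data.
DISABLE_CMD="gcloud container clusters update ${CLUSTER_NAME} --no-enable-network-egress-metering"

echo "${ENABLE_CMD}"
echo "${DISABLE_CMD}"
# eval "${ENABLE_CMD}"   # uncomment to run
```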
Verify that GKE usage metering is enabled

To verify that GKE usage metering is enabled on a cluster, and to confirm which BigQuery dataset stores the cluster's resource usage data, run the following command:
gcloud container clusters describe CLUSTER_NAME \
--format="value(resourceUsageExportConfig)"
The output is empty if GKE usage metering is not enabled, and otherwise shows the BigQuery dataset used by the cluster, as in the following example output:
bigqueryDestination={u'datasetId': u'test_usage_metering_dataset'}
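A small sketch of scripting this check: an empty resourceUsageExportConfig value means metering is disabled. The sample value below stands in for real gcloud output.

```shell
# In practice, capture the real value:
#   config=$(gcloud container clusters describe CLUSTER_NAME \
#     --format="value(resourceUsageExportConfig)")
config="bigqueryDestination={u'datasetId': u'test_usage_metering_dataset'}"

# An empty value means GKE usage metering is not enabled.
if [ -z "${config}" ]; then
  status="GKE usage metering is not enabled"
else
  status="GKE usage metering is enabled: ${config}"
fi
echo "${status}"
```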
Choose one or more BigQuery datasets
A dataset can hold GKE usage metering data for one or more clusters in your project. Whether you use one or many datasets depends on your security needs:
Create a Looker Studio dashboard

You can visualize your GKE usage metering data using a Looker Studio dashboard. This lets you filter your data by cluster name, namespace, or label. You can also adjust the reporting period dynamically. If you have experience with Looker Studio and BigQuery, you can create a customized dashboard. You can also clone a dashboard that we created specifically for GKE usage metering.
Note: You may see discrepancies between GKE usage metering data and Cloud Billing data due to upload latency. Batches of Cloud Billing data take up to 5 hours to appear in BigQuery, while GKE usage metering data appears in BigQuery roughly every hour.

You can use the dashboard to visualize resource requests and consumption on your clusters over time.
Note: Looker Studio is not supported by Cloud Customer Care. For more information, see the Looker Studio Help Center.

Prerequisites

Enable Exporting Google Cloud billing data to BigQuery if it is not already enabled. During this process, you create a dataset, but the table within the dataset can take up to 5 hours to appear and start populating. When the table appears, its name is gcp_billing_export_v1_BILLING_ACCOUNT_ID.
Enable GKE usage metering on at least one cluster in the project. Note the name you chose for the BigQuery dataset.
Enable Looker Studio if it's not already enabled.
Gather the following information, which is needed to configure the dashboard:
Ensure that you have version 2.0.58 or later of the BigQuery CLI. To check the version, run bq version. To update the BigQuery CLI, run gcloud components update.
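One way to script the version check is sketched below; the sample version string is an assumption standing in for parsed bq version output.

```shell
required="2.0.58"
current="2.0.75"   # in practice, parse this from the output of: bq version

# sort -V orders version strings numerically; if the required version
# sorts first, the current version is new enough.
oldest=$(printf '%s\n%s\n' "${required}" "${current}" | sort -V | head -n1)
if [ "${oldest}" = "${required}" ]; then
  echo "bq CLI ${current} is new enough"
else
  echo "bq CLI ${current} is too old; run: gcloud components update"
fi
```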
The commands in this section should be run in a Linux terminal or in Cloud Shell.
Download one of the following query templates:

the request-and-consumption template (usage_metering_query_template_request_and_consumption.sql)
the request-only template (usage_metering_query_template_request_only.sql)

If you are using Cloud Shell, copy this file into the directory where you run the following commands.
Run the following command to set environment variables:
export GCP_BILLING_EXPORT_TABLE_FULL_PATH=YOUR_BILLING_EXPORT_TABLE_PATH
export USAGE_METERING_PROJECT_ID=YOUR_USAGE_METERING_PROJECT_ID
export USAGE_METERING_DATASET_ID=YOUR_USAGE_METERING_DATASET_ID
export USAGE_METERING_START_DATE=YOUR_USAGE_METERING_START_DATE
export COST_BREAKDOWN_TABLE_ID=YOUR_COST_BREAKDOWN_TABLE_ID
export USAGE_METERING_QUERY_TEMPLATE=YOUR_TEMPLATE_PATH
export USAGE_METERING_QUERY=YOUR_RENDERED_QUERY_PATH
Replace the following:

YOUR_BILLING_EXPORT_TABLE_PATH: the path to your generated billing export table. This table has a name similar to PROJECT_ID.DATASET_ID.gcp_billing_export_v1_xxxx.
YOUR_USAGE_METERING_PROJECT_ID: the name of your Google Cloud project.
YOUR_USAGE_METERING_DATASET_ID: the name of the dataset you created in BigQuery, such as all_billing_data.
YOUR_USAGE_METERING_START_DATE: the start date of your query, in the form YYYY-MM-DD.
YOUR_COST_BREAKDOWN_TABLE_ID: the name you choose for a new table, such as usage_metering_cost_breakdown. This table is used as input to Looker Studio.
YOUR_TEMPLATE_PATH: the name of the query template you downloaded, either usage_metering_query_template_request_and_consumption.sql or usage_metering_query_template_request_only.sql.
YOUR_RENDERED_QUERY_PATH: the path you choose for the rendered query, such as cost_breakdown_query.sql.

As an example, your environment variables might resemble the following:
export GCP_BILLING_EXPORT_TABLE_FULL_PATH=my-billing-project.all_billing_data.gcp_billing_export_v1_xxxx
export USAGE_METERING_PROJECT_ID=my-billing-project
export USAGE_METERING_DATASET_ID=all_billing_data
export USAGE_METERING_START_DATE=2022-05-01
export COST_BREAKDOWN_TABLE_ID=usage_metering_cost_breakdown
export USAGE_METERING_QUERY_TEMPLATE=usage_metering_query_template_request_only.sql
export USAGE_METERING_QUERY=cost_breakdown_query.sql
Render the query from the template:
sed \
-e "s/\${fullGCPBillingExportTableID}/$GCP_BILLING_EXPORT_TABLE_FULL_PATH/" \
-e "s/\${projectID}/$USAGE_METERING_PROJECT_ID/" \
-e "s/\${datasetID}/$USAGE_METERING_DATASET_ID/" \
-e "s/\${startDate}/$USAGE_METERING_START_DATE/" \
"$USAGE_METERING_QUERY_TEMPLATE" \
> "$USAGE_METERING_QUERY"
Create a new cost breakdown table that refreshes every 24 hours:
bq query \
--project_id=$USAGE_METERING_PROJECT_ID \
--use_legacy_sql=false \
--destination_table=$USAGE_METERING_DATASET_ID.$COST_BREAKDOWN_TABLE_ID \
--schedule='every 24 hours' \
--display_name="GKE Usage Metering Cost Breakdown Scheduled Query" \
--replace=true \
"$(cat $USAGE_METERING_QUERY)"
For more information about scheduling queries, see Set up scheduled queries.
Paste the following query into the Query Editor:
SELECT
*
FROM
`USAGE_METERING_PROJECT_ID.USAGE_METERING_DATASET_ID.COST_BREAKDOWN_TABLE_ID`
Click Connect.
The dashboard is created, and you can access it at any time in the list of Looker Studio reports for your project.
Use the Looker Studio dashboard

The dashboard contains multiple reports:
You can change pages using the navigation menu. You can change the timeframe for a page using the date picker. To share the report with members of your organization, or to revoke access, click person_add_alt Share Report.
After you copy the report into your project, you can customize it by using the Looker Studio report editor. Even if the report template provided by Google changes, your copy is unaffected.
Explore GKE usage metering data using BigQuery

To view data about resource requests using BigQuery, query the gke_cluster_resource_usage table within the relevant BigQuery dataset.

To view data about actual resource consumption, query the gke_cluster_resource_consumption table. Network egress consumption data remains in the gke_cluster_resource_usage table, because there is no concept of resource requests for egress.
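For example, egress usage can be summed per namespace from the requests table. This is a sketch: the table path is a placeholder, and the resource_name value for egress ("networkEgress") is an assumption to verify against your own table.

```shell
# Placeholder table path; substitute your project and dataset.
# The resource_name filter value is assumed -- check your table's data.
QUERY='SELECT namespace, SUM(usage.amount) AS egress_bytes
FROM `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage`
WHERE resource_name = "networkEgress"
GROUP BY namespace
ORDER BY egress_bytes DESC'
echo "${QUERY}"
# bq query --use_legacy_sql=false "${QUERY}"   # uncomment to run
```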
For more information about using queries in BigQuery, see Running queries. The fields in the schema are stable, though more fields may be added in the future.
These queries are simple examples. Customize your query to find the data you need.
Query for resource requests

SELECT
cluster_name,
labels,
usage
FROM
`CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage`
WHERE
namespace="NAMESPACE"
Query for resource consumption
SELECT
cluster_name,
labels,
usage
FROM
`CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_consumption`
WHERE
namespace="NAMESPACE"
Replace the following:

CLUSTER_GCP_PROJECT: the name of the Google Cloud project that contains the cluster that you want to query.
USAGE_METERING_DATASET: the name of your usage metering dataset.
NAMESPACE: the name of your namespace.

Expand the following sections to see more sophisticated examples.
How to query costs, broken down by namespace

These queries ignore a cluster's resource usage when the billing information of the associated cloud resource has not yet been exported to the Google Cloud billing export dataset. This delay happens when the time window of a cloud resource usage record is ahead of the latest record in the exported Google Cloud billing data. The latency for billing export can be up to 5 hours.
Resource requests

SELECT
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  resource_usage.resource_name,
  resource_usage.sku_id,
  MIN(resource_usage.start_time) AS usage_start_time,
  MAX(resource_usage.end_time) AS usage_end_time,
  SUM(resource_usage.usage.amount * gcp_billing_export.rate) AS cost
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage` AS resource_usage
LEFT JOIN (
  SELECT
    sku.id AS sku_id,
    SUM(cost) / SUM(usage.amount) AS rate,
    MIN(usage_start_time) AS min_usage_start_time,
    MAX(usage_end_time) AS max_usage_end_time
  FROM
    `CLUSTER_GCP_PROJECT.BILLING_DATASET.BILLING_TABLE`
  WHERE
    project.id = "CLUSTER_GCP_PROJECT"
  GROUP BY sku_id) AS gcp_billing_export
ON resource_usage.sku_id = gcp_billing_export.sku_id
WHERE
  resource_usage.start_time >= gcp_billing_export.min_usage_start_time
  AND resource_usage.end_time <= gcp_billing_export.max_usage_end_time
GROUP BY
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  resource_usage.resource_name,
  resource_usage.sku_id

Resource consumption

SELECT
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  resource_usage.resource_name,
  resource_usage.sku_id,
  MIN(resource_usage.start_time) AS usage_start_time,
  MAX(resource_usage.end_time) AS usage_end_time,
  SUM(resource_usage.usage.amount * gcp_billing_export.rate) AS cost
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_consumption` AS resource_usage
LEFT JOIN (
  SELECT
    sku.id AS sku_id,
    SUM(cost) / SUM(usage.amount) AS rate,
    MIN(usage_start_time) AS min_usage_start_time,
    MAX(usage_end_time) AS max_usage_end_time
  FROM
    `CLUSTER_GCP_PROJECT.BILLING_DATASET.BILLING_TABLE`
  WHERE
    project.id = "CLUSTER_GCP_PROJECT"
  GROUP BY sku_id) AS gcp_billing_export
ON resource_usage.sku_id = gcp_billing_export.sku_id
WHERE
  resource_usage.start_time >= gcp_billing_export.min_usage_start_time
  AND resource_usage.end_time <= gcp_billing_export.max_usage_end_time
GROUP BY
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  resource_usage.resource_name,
  resource_usage.sku_id

How to query costs in a specific time period, broken down by namespace and labels

These queries show the costs for a specific time period, by namespaces and labels.
Resource requests by time period, grouped by namespaces and labels

DECLARE drilldown_label STRING DEFAULT 'DRILLDOWN_LABEL';
DECLARE project_id STRING DEFAULT "CLUSTER_GCP_PROJECT";

SELECT
  DATE(start_time) AS date,
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  label.value AS label_value,
  resource_usage.resource_name,
  resource_usage.sku_id,
  gcp_billing_export.sku_description,
  SUM(resource_usage.usage.amount) AS usage,
  resource_usage.usage.unit AS usage_unit,
  SUM(resource_usage.usage.amount * gcp_billing_export.rate) AS cost_estimate,
  gcp_billing_export.currency
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage` AS resource_usage
-- select only workloads matching the defined "drilldown_label"
INNER JOIN UNNEST(labels) AS label ON label.key = drilldown_label
-- join with billing table to get pricing information and sku description
LEFT JOIN (
  SELECT
    DATE(usage_start_time) AS date,
    sku.id AS sku_id,
    sku.description AS sku_description,
    SAFE_DIVIDE(SUM(cost), SUM(usage.amount)) AS rate,
    currency
  FROM
    `CLUSTER_GCP_PROJECT.BILLING_DATASET.BILLING_TABLE`
  WHERE
    project.id = project_id
  GROUP BY date, sku_id, sku_description, currency) AS gcp_billing_export
ON DATE(resource_usage.start_time) = gcp_billing_export.date
  AND resource_usage.sku_id = gcp_billing_export.sku_id
GROUP BY
  date, cluster_name, cluster_location, namespace, label_value,
  resource_name, sku_id, sku_description, usage_unit, currency
ORDER BY
  date, cluster_name, cluster_location, namespace, label_value, resource_name

Resource consumption by time period, grouped by namespaces and labels

DECLARE drilldown_label STRING DEFAULT 'DRILLDOWN_LABEL';
DECLARE project_id STRING DEFAULT "CLUSTER_GCP_PROJECT";

SELECT
  DATE(start_time) AS date,
  resource_usage.cluster_name,
  resource_usage.cluster_location,
  resource_usage.namespace,
  label.value AS label_value,
  resource_usage.resource_name,
  resource_usage.sku_id,
  gcp_billing_export.sku_description,
  SUM(resource_usage.usage.amount) AS usage,
  resource_usage.usage.unit AS usage_unit,
  SUM(resource_usage.usage.amount * gcp_billing_export.rate) AS cost_estimate,
  gcp_billing_export.currency
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_consumption` AS resource_usage
-- select only workloads matching the defined "drilldown_label"
INNER JOIN UNNEST(labels) AS label ON label.key = drilldown_label
-- join with billing table to get pricing information and sku description
LEFT JOIN (
  SELECT
    DATE(usage_start_time) AS date,
    sku.id AS sku_id,
    sku.description AS sku_description,
    SAFE_DIVIDE(SUM(cost), SUM(usage.amount)) AS rate,
    currency
  FROM
    `CLUSTER_GCP_PROJECT.BILLING_DATASET.BILLING_TABLE`
  WHERE
    project.id = project_id
  GROUP BY date, sku_id, sku_description, currency) AS gcp_billing_export
ON DATE(resource_usage.start_time) = gcp_billing_export.date
  AND resource_usage.sku_id = gcp_billing_export.sku_id
GROUP BY
  date, cluster_name, cluster_location, namespace, label_value,
  resource_name, sku_id, sku_description, usage_unit, currency
ORDER BY
  date, cluster_name, cluster_location, namespace, label_value, resource_name

How to show actual resource consumption as a percentage of resource requests, over time
This query shows the ratio of actual CPU consumption to CPU requests, so you can see which Pods request too little or too much CPU compared to actual requirements.
WITH constraints AS (
  SELECT
    TIMESTAMP(START_TIME) AS min_time,
    TIMESTAMP(END_TIME) AS max_time,
    "CLUSTER_GCP_PROJECT" AS project_id
),
request_based_amount_by_namespace AS (
  SELECT
    namespace,
    resource_name,
    SUM(usage.amount) AS requested_amount,
    usage.unit AS unit
  FROM
    `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage` AS requested_resource_usage
  INNER JOIN constraints
    ON requested_resource_usage.start_time >= constraints.min_time
    AND requested_resource_usage.end_time <= constraints.max_time
    AND requested_resource_usage.project.id = constraints.project_id
  GROUP BY namespace, resource_name, usage.unit
),
consumption_based_amount_by_namespace AS (
  SELECT
    namespace,
    resource_name,
    SUM(usage.amount) AS consumed_amount,
    usage.unit AS unit
  FROM
    `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_consumption` AS consumed_resource_usage
  INNER JOIN constraints
    ON consumed_resource_usage.start_time >= constraints.min_time
    AND consumed_resource_usage.end_time <= constraints.max_time
    AND consumed_resource_usage.project.id = constraints.project_id
  GROUP BY namespace, resource_name, usage.unit
)
SELECT
  request_based_amount_by_namespace.namespace,
  request_based_amount_by_namespace.resource_name,
  requested_amount,
  consumed_amount,
  request_based_amount_by_namespace.unit,
  CASE
    WHEN consumed_amount IS NULL THEN NULL
    WHEN requested_amount = 0 THEN NULL
    ELSE consumed_amount / requested_amount
  END AS consumption_to_request_ratio
FROM request_based_amount_by_namespace
FULL JOIN consumption_based_amount_by_namespace
  ON request_based_amount_by_namespace.namespace = consumption_based_amount_by_namespace.namespace
  AND request_based_amount_by_namespace.resource_name = consumption_based_amount_by_namespace.resource_name
ORDER BY consumption_to_request_ratio DESC

GKE usage metering schema in BigQuery
The following table describes the schema for the GKE usage metering tables in the BigQuery dataset. If your cluster is running a version of GKE that supports resource consumption metering and resource requests, an additional table is created with the same schema.
Field (Type): Description

cluster_location (STRING): The name of the Compute Engine zone or region in which the GKE cluster resides.
cluster_name (STRING): The name of the GKE cluster.
namespace (STRING): The Kubernetes namespace from which the usage is generated.
resource_name (STRING): The name of the resource, such as "cpu", "memory", or "storage".
sku_id (STRING): The SKU ID of the underlying Google Cloud resource.
start_time (TIMESTAMP): The UNIX timestamp of when the usage began.
end_time (TIMESTAMP): The UNIX timestamp of when the usage ended.
fraction (FLOAT): The fraction of a cloud resource used by the usage. For a dedicated cloud resource that is solely used by a single namespace, the fraction is always 1.0. For resources shared among multiple namespaces, the fraction is calculated as the requested amount divided by the total capacity of the underlying cloud resource.
cloud_resource_size (INTEGER): The size of the underlying Google Cloud resource. For example, the number of vCPUs on an n1-standard-2 instance is 2.
labels.key (STRING): The key of a Kubernetes label associated with the usage.
labels.value (STRING): The value of a Kubernetes label associated with the usage.
project.id (STRING): The ID of the project in which the GKE cluster resides.
usage.amount (FLOAT): The quantity of usage.unit used.
usage.unit (STRING): The base unit in which resource usage is measured. For example, the base unit for standard storage is byte-seconds.
The units for GKE usage metering must be interpreted in the following way:
The CPU usage.unit is seconds: the total CPU time that a Pod requested or utilized. For example, if you have two Pods that each request 30 CPUs and run for 15 minutes, then the aggregate amount in the request table is 54,000 seconds (2 Pods * 30 CPUs * 15 minutes * 60 seconds / minute).
The memory usage.unit is byte-seconds: the integral of memory over time that a Pod requested or utilized. For example, if you have two Pods that each request 30 GiB and run for 15 minutes, then the aggregate amount in the request table is 5.798E+13 byte-seconds (2 Pods * 30 GiB * 15 minutes * 60 seconds / minute * 1073741824 bytes / GiB).
There are two conditions under which GKE usage metering writes usage records to BigQuery:

The Pod reaches a terminal state (succeeded or failed), or the Pod is deleted.
The hourly schedule's timestamp for writing records is reached while the Pod is still running.
GKE usage metering generates an hourly schedule where it writes Pod usage records to BigQuery for all currently running Pods. The schedule's timestamp is not the same across all clusters.
If you have multiple Pods running at that timestamp, you'll find multiple usage records with the same end_time. These records' end_time values indicate the hourly schedule's timestamp.
Also, if you have multiple Pods that have been running for multiple hours, you also have a set of usage records with an end_time
that matches the start_time
of another set of usage records.
Disable GKE usage metering

gcloud

To disable GKE usage metering on a cluster, run the following command:
gcloud container clusters update CLUSTER_NAME \
--clear-resource-usage-bigquery-dataset
Note: The BigQuery dataset is preserved. You can remove it if you don't need the data and no cluster is using it.

Console
Go to the Google Kubernetes Engine page in Google Cloud console.
Next to the cluster you want to modify, click more_vert Actions, then click edit Edit.
Under Features, click edit Edit next to GKE usage metering.
Clear Enable GKE usage metering.
Click Save Changes.