This page describes how to configure a Google Kubernetes Engine (GKE) cluster to send metrics emitted by the Kubernetes API server, Scheduler, and Controller Manager to Cloud Monitoring using Google Cloud Managed Service for Prometheus. This page also describes how these metrics are formatted when they are written to Monitoring, and how to query metrics.
Before you begin

Before you start, make sure that you have performed the following tasks:

If you want to use the Google Cloud CLI for this task, make sure that your installation is up to date by running:

gcloud components update

Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you use primarily zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.

Requirements

Sending metrics emitted by Kubernetes control plane components to Cloud Monitoring has the following requirements:
You can enable control plane metrics in an existing GKE cluster using the Google Cloud console, the gcloud CLI, or Terraform.
Console

You can enable control plane metrics for a cluster either from the Observability tab for the cluster or from the Details tab for the cluster. When you use the Observability tab, you can preview the available charts and metrics before you enable the metric package.
To enable control plane metrics from the Observability tab for the cluster, do the following:
In the Google Cloud console, go to the Kubernetes clusters page:
If you use the search bar to find this page, then select the result whose subheading is Kubernetes Engine.
Click your cluster's name and then select the Observability tab.
Select Control Plane from the list of features.
Click Enable package.
If the control plane metrics are already enabled, then you see a set of charts for control plane metrics instead.
To enable control plane metrics from the Details tab for the cluster, do the following:
In the Google Cloud console, go to the Kubernetes clusters page:
If you use the search bar to find this page, then select the result whose subheading is Kubernetes Engine.
Click your cluster's name.
In the Features row labelled Cloud Monitoring, click the Edit icon.
In the Edit Cloud Monitoring dialog that appears, confirm that Enable Cloud Monitoring is selected.
In the Components drop-down menu, select the control plane components from which you would like to collect metrics: API Server, Scheduler, or Controller Manager.
Click OK.
Click Save Changes.
gcloud

Update your cluster to collect metrics emitted by the Kubernetes API server, Scheduler, and Controller Manager:
gcloud container clusters update CLUSTER_NAME \
--location=COMPUTE_LOCATION \
--monitoring=SYSTEM,API_SERVER,SCHEDULER,CONTROLLER_MANAGER
Replace the following:
CLUSTER_NAME: the name of the cluster.
COMPUTE_LOCATION: the Compute Engine location of the cluster.

Terraform

To configure the collection of Kubernetes control plane metrics by using Terraform, see the monitoring_config block in the Terraform registry for google_container_cluster. For general information about using Google Cloud with Terraform, see Terraform with Google Cloud.
Quota

Control plane metrics consume the "Time series ingestion requests per minute" quota of the Cloud Monitoring API. Before enabling the metrics packages, check your recent peak usage of that quota. If you have many clusters in the same project or are already approaching that quota limit, you can request a quota limit increase before enabling either observability package.
Pricing

GKE control plane metrics use Google Cloud Managed Service for Prometheus to load metrics into Cloud Monitoring. Cloud Monitoring charges for the ingestion of these metrics are based on the number of samples ingested. However, these metrics are free of charge for registered clusters that belong to a project that has GKE Enterprise edition enabled.
For more information, see Cloud Monitoring pricing.
Metric format

All Kubernetes control plane metrics written to Cloud Monitoring use the resource type prometheus_target. Each metric name is prefixed with prometheus.googleapis.com/ and has a suffix indicating the Prometheus metric type, such as /gauge, /histogram, or /counter. Otherwise, each metric name is identical to the metric name exposed by open source Kubernetes.
The Kubernetes control plane metrics can be exported from Cloud Monitoring by using the Cloud Monitoring API. Because all Kubernetes control plane metrics are ingested by using Google Cloud Managed Service for Prometheus, they can be queried by using Prometheus Query Language (PromQL). They can also be queried by using Monitoring Query Language (MQL).
Querying metrics

When you query Kubernetes control plane metrics, the name you use depends on whether you are using PromQL or Cloud Monitoring-based features like MQL or the Metrics Explorer menu-driven interface.

The following tables of Kubernetes control plane metrics show two versions of each metric name: the PromQL metric name and the Cloud Monitoring metric name. Cloud Monitoring metric names must be prefixed with prometheus.googleapis.com/, which has been omitted from the entries in the table.

API server metrics

This section provides a list of the API server metrics and additional information about interpreting and using the metrics.
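For example, consider the apiserver_request_total metric listed in the table below. With PromQL, you query it by its open source name; the following query is only a sketch, and CLUSTER_NAME is a placeholder for your cluster's name:

sum by (code) (rate(apiserver_request_total{cluster="CLUSTER_NAME"}[5m]))

When you use MQL or the Metrics Explorer menu-driven interface instead, you refer to the same metric as prometheus.googleapis.com/apiserver_request_total/counter on the prometheus_target resource type.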
List of API server metrics

When API server metrics are enabled, all metrics shown in the following table are exported to Cloud Monitoring in the same project as the GKE cluster.
The Cloud Monitoring metric names in this table must be prefixed with prometheus.googleapis.com/
. That prefix has been omitted from the entries in the table.
PromQL metric name | Launch stage | Cloud Monitoring metric name | Kind, Type, Unit | Labels
apiserver_current_inflight_requests | GA | apiserver_current_inflight_requests/gauge | Gauge, Double, 1 | request_kind
apiserver_flowcontrol_current_executing_seats | BETA | apiserver_flowcontrol_current_executing_seats/gauge | Gauge, Double, 1 | flow_schema, priority_level
apiserver_flowcontrol_current_inqueue_requests | BETA | apiserver_flowcontrol_current_inqueue_requests/gauge | Gauge, Double, 1 | flow_schema, priority_level
apiserver_flowcontrol_nominal_limit_seats | BETA | apiserver_flowcontrol_nominal_limit_seats/gauge | Gauge, Double, 1 | priority_level
apiserver_flowcontrol_rejected_requests_total | BETA | apiserver_flowcontrol_rejected_requests_total/counter | Cumulative, Double, 1 | flow_schema, priority_level, reason
apiserver_flowcontrol_request_wait_duration_seconds | BETA | apiserver_flowcontrol_request_wait_duration_seconds/histogram | Cumulative, Distribution, s | execute, flow_schema, priority_level
apiserver_request_duration_seconds | GA | apiserver_request_duration_seconds/histogram | Cumulative, Distribution, s | component, dry_run, group, resource, scope, subresource, verb, version
apiserver_request_total | GA | apiserver_request_total/counter | Cumulative, Double, 1 | code, component, dry_run, group, resource, scope, subresource, verb, version
apiserver_response_sizes | GA | apiserver_response_sizes/histogram | Cumulative, Distribution, 1 | component, group, resource, scope, subresource, verb, version
apiserver_storage_objects | GA | apiserver_storage_objects/gauge | Gauge, Double, 1 | resource
apiserver_admission_controller_admission_duration_seconds | GA | apiserver_admission_controller_admission_duration_seconds/histogram | Cumulative, Distribution, s | name, operation, rejected, type
apiserver_admission_step_admission_duration_seconds | GA | apiserver_admission_step_admission_duration_seconds/histogram | Cumulative, Distribution, s | operation, rejected, type
apiserver_admission_webhook_admission_duration_seconds | GA | apiserver_admission_webhook_admission_duration_seconds/histogram | Cumulative, Distribution, s | name, operation, rejected, type
The following sections provide additional information about the API server metrics.
apiserver_request_duration_seconds
Use this metric to monitor latency in the API server. The request duration recorded by this metric includes all phases of request processing, from the time the request is received to the time the server completes its response to the client. Specifically, it includes time spent on the following:
Reading the requested data from the API server's watch cache (for requests that specify the resourceVersion URL parameter) or from the etcd- or Spanner-based cluster state database by calling the etcd API (for all other requests).

You can use the group, version, resource, and subresource labels to uniquely identify a slow request for further investigation. For more information about using this metric, see Latency.
This metric has very high cardinality. When using this metric, you must use filters or grouping to find specific sources of latency.
apiserver_admission_controller_admission_duration_seconds
This metric measures the latency in built-in admission webhooks, not third-party webhooks. To diagnose latency issues with third-party webhooks, use the apiserver_admission_webhook_admission_duration_seconds metric.
apiserver_admission_webhook_admission_duration_seconds and apiserver_admission_step_admission_duration_seconds
These metrics measure the latency in external, third-party admission webhooks. The apiserver_admission_webhook_admission_duration_seconds
metric is generally the more useful metric. For more information about using this metric, see Latency.
apiserver_request_total
Use this metric to monitor the request traffic at your API server. You can also use it to determine the success and failure rates of your requests. For more information about using this metric, see Traffic and error rate.
This metric has very high cardinality. When using this metric, you must use filters or grouping to identify sources of errors.
apiserver_storage_objects
Use this metric to detect saturation of your system and to identify possible resource leaks. For more information, see Saturation.
apiserver_current_inflight_requests
This metric records the maximum number of requests that were being actively served in the last one-second window. For more information, see Saturation.
The metric does not include long-running requests like "watch".
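For example, the following PromQL query (a sketch only; CLUSTER_NAME is a placeholder for your cluster's name) shows the in-flight requests broken down by the request_kind label:

max by (request_kind) (apiserver_current_inflight_requests{cluster="CLUSTER_NAME"})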
Monitoring the API server

The API server metrics can give you insight into the main signals for system health: latency, traffic and error rate, and saturation. This section describes how to use the API server metrics to monitor the health of your API server.
Latency

When the API server is overloaded, request latency increases. To measure the latency of requests to the API server, use the apiserver_request_duration_seconds metric. To identify the source of latency more specifically, you can group metrics by the verb or resource label.
The suggested upper bound for a single-resource call such as GET, POST, or PATCH is one second. The suggested upper bound for both namespace-scoped and cluster-scoped LIST calls is 30 seconds. The upper-bound expectations are set by SLOs that are defined by the open source Kubernetes community. For more information, see API call latency SLIs/SLOs details.
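To compare request latency against these bounds, you can compute per-verb percentiles with a PromQL query like the following sketch (CLUSTER_NAME is a placeholder for your cluster's name; adjust the quantile and filters as needed):

histogram_quantile(0.99, sum by (verb, le) (rate(apiserver_request_duration_seconds_bucket{cluster="CLUSTER_NAME"}[5m])))

You can further narrow the query by the group, resource, and subresource labels to isolate a slow request type.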
If the value of the apiserver_request_duration_seconds metric is increasing beyond the expected duration, investigate the following possible causes:

The API server might be overloaded. Check the apiserver_request_total and apiserver_storage_objects metrics. Use the code label to determine whether requests are being processed successfully. For information about the possible values, see HTTP Status codes. Use the group, version, resource, and subresource labels to uniquely identify a request.

A third-party admission webhook is slow or non-responsive. If the value of the apiserver_admission_webhook_admission_duration_seconds metric is increasing, then some of your third-party or user-defined admission webhooks are slow or non-responsive. Latency in admission webhooks can cause delays in job scheduling.
To query the 99th percentile webhook latency per instance of the Kubernetes control plane, use the following PromQL query:
sum by (instance) (histogram_quantile(0.99, rate(apiserver_admission_webhook_admission_duration_seconds_bucket{cluster="CLUSTER_NAME"}[1m])))
We recommend also looking at the 50th, 90th, 95th, and 99.9th percentiles; you can adjust this query by modifying the 0.99
value.
External webhooks have a timeout limit of approximately 10 seconds. You can set alerting policies on the apiserver_admission_webhook_admission_duration_seconds
metric to alert you when you are approaching the webhook timeout.
You can also group the apiserver_admission_webhook_admission_duration_seconds
metric on the name
label to diagnose possible issues with specific webhooks.
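For example, the following PromQL query (a sketch; CLUSTER_NAME is a placeholder) reports the 99th percentile webhook latency broken down by the name label:

histogram_quantile(0.99, sum by (name, le) (rate(apiserver_admission_webhook_admission_duration_seconds_bucket{cluster="CLUSTER_NAME"}[5m])))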
You are listing a lot of objects. It is expected that the latency of LIST calls increases as the number of objects of a given type (the response size) increases.
Client-side problems:
For more information, see Good practices for using API Priority and Fairness in the Kubernetes documentation.
Traffic and error rate

To measure the traffic and the number of successful and failed requests at the API server, use the apiserver_request_total
metric. For example, to measure the API server traffic per instance of the Kubernetes control plane, use the following PromQL query:
sum by (instance) (increase(apiserver_request_total{cluster="CLUSTER_NAME"}[1m]))
To query the unsuccessful requests, filter the code
label for 4xx and 5xx values by using the following PromQL query:
sum(rate(apiserver_request_total{code=~"[45].."}[5m]))
To query the successful requests, filter the code
label for 2xx values by using the following PromQL query:
sum(rate(apiserver_request_total{code=~"2.."}[5m]))
To query the rejected requests by the API server per instance of the Kubernetes control plane, filter the code
label for the value 429 (http.StatusTooManyRequests
) by using the following PromQL query:
sum by (instance) (increase(apiserver_request_total{cluster="CLUSTER_NAME", code="429"}[1m]))
Saturation

You can measure the saturation in your system by using the apiserver_current_inflight_requests and apiserver_storage_objects metrics.
If the value of the apiserver_storage_objects metric is increasing, you might be experiencing a problem with a custom controller that creates objects but doesn't delete them. You can filter or group the metric by the resource label to identify the resource experiencing the increase.
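For example, the following PromQL query (a sketch; CLUSTER_NAME is a placeholder) lists the ten resource types with the largest stored object counts:

topk(10, max by (resource) (apiserver_storage_objects{cluster="CLUSTER_NAME"}))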
Evaluate the apiserver_current_inflight_requests
metric in accordance with your API Priority and Fairness settings; these settings affect how requests are prioritized, so you can't draw conclusions from the metric values alone. For more information, see API Priority and Fairness.
Scheduler metrics

This section provides a list of the scheduler metrics and additional information about interpreting and using the metrics.
List of scheduler metrics

When scheduler metrics are enabled, all metrics shown in the following table are exported to Cloud Monitoring in the same project as the GKE cluster.
The Cloud Monitoring metric names in this table must be prefixed with prometheus.googleapis.com/
. That prefix has been omitted from the entries in the table.
PromQL metric name | Launch stage | Cloud Monitoring metric name | Kind, Type, Unit | Labels
kube_pod_resource_limit | GA | kube_pod_resource_limit/gauge | Gauge, Double, 1 | namespace, node, pod, priority, resource, scheduler, unit
kube_pod_resource_request | GA | kube_pod_resource_request/gauge | Gauge, Double, 1 | namespace, node, pod, priority, resource, scheduler, unit
scheduler_pending_pods | GA | scheduler_pending_pods/gauge | Gauge, Double, 1 | queue
scheduler_pod_scheduling_duration_seconds | DEPRECATED | scheduler_pod_scheduling_duration_seconds/histogram | Cumulative, Distribution, 1 | attempts
scheduler_pod_scheduling_sli_duration_seconds | BETA | scheduler_pod_scheduling_sli_duration_seconds/histogram | Cumulative, Distribution, 1 | attempts
scheduler_preemption_attempts_total | GA | scheduler_preemption_attempts_total/counter | Cumulative, Double, 1 |
scheduler_preemption_victims | GA | scheduler_preemption_victims/histogram | Cumulative, Distribution, 1 |
scheduler_scheduling_attempt_duration_seconds | GA | scheduler_scheduling_attempt_duration_seconds/histogram | Cumulative, Distribution, 1 | profile, result
scheduler_schedule_attempts_total | GA | scheduler_schedule_attempts_total/counter | Cumulative, Double, 1 | profile, result

Note: scheduler_pod_scheduling_duration_seconds is deprecated and replaced by scheduler_pod_scheduling_sli_duration_seconds; it measures the E2E latency for a pod being scheduled, which may include multiple scheduling attempts.
The following sections provide additional information about the scheduler metrics.
scheduler_pending_pods
You can use the scheduler_pending_pods
metric to monitor the load on your scheduler. Increasing values in this metric can indicate resourcing problems. The scheduler has three queues, and this metric reports the number of pending requests by queue. The following queues are supported:
active queue: the set of pods that the scheduler is attempting to schedule.

backoff queue: the set of pods that could not be scheduled and are waiting for a backoff period to expire, after which they are moved back to the active queue for another scheduling attempt. For more information on the management of the backoff queue, see the implementation request, Kubernetes issue 75417.

unschedulable set: the set of pods that the scheduler attempted to schedule but which have been determined to be unschedulable. Placement on this queue might indicate readiness or compatibility issues with your nodes or the configuration of your node selectors.
When resource constraints prevent pods from being scheduled, the pods are not subject to back-off handling. Instead, when a cluster is full, new pods fail to be scheduled and are put on the unschedulable queue.
The presence of unscheduled pods might indicate that you have insufficient resources or that you have a node-configuration problem. Pods are moved to either the backoff
or active
queue after events that change the cluster state. Pods on this queue indicate that nothing has changed in the cluster that would make the pods schedulable.
Affinities define rules for how pods are assigned to nodes. The use of affinity or anti-affinity rules can be a reason for an increase in unscheduled pods.
Some events, for example, PVC/Service ADD/UPDATE, termination of a pod, or the registration of new nodes, move some or all unscheduled pods to either the backoff
or active
queue. For more information, see Kubernetes issue 81214.
For more information, see Scheduler latency and Resource issues.
scheduler_scheduling_attempt_duration_seconds
This metric measures the duration of a single scheduling attempt within the scheduler itself and is broken down by the result: scheduled, unschedulable, or error. The duration runs from the time the scheduler picks up a pod until the time the scheduler locates a node and places the pod on the node, determines that the pod is unschedulable, or encounters an error. The scheduling duration includes the time in the scheduling process as well as the binding time. Binding is the process in which the scheduler communicates its node assignment to the API server. For more information, see Scheduler latency.
This metric doesn't capture the time the pod spends in admission control or validation.
For more information about scheduling, see Scheduling a Pod.
scheduler_schedule_attempts_total
This metric measures the number of scheduling attempts; each attempt to schedule a pod increases the value. You can use this metric to determine if the scheduler is available: if the value is increasing, then the scheduler is operational. You can use the result label to determine the success; pods are either scheduled or unschedulable.
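For example, the following PromQL query (a sketch; CLUSTER_NAME is a placeholder for your cluster's name) shows the rate of scheduling attempts broken down by the result label:

sum by (result) (rate(scheduler_schedule_attempts_total{cluster="CLUSTER_NAME"}[5m]))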
This metric correlates strongly with the scheduler_pending_pods
metric: when there are many pending pods, you can expect to see many attempts to schedule the pods. For more information, see Resource issues.
This metric doesn't increase if the scheduler has no pods to schedule, which can be the case if you have a custom secondary scheduler.
scheduler_preemption_attempts_total and scheduler_preemption_victims
You can use preemption metrics to help determine if you need to add resources.
You might have higher-priority pods that can't be scheduled because there is no room for them. In this case, the scheduler frees up resources by preempting one or more running pods on a node. The scheduler_preemption_attempts_total
metric tracks the number of times the scheduler has tried to preempt pods.
The scheduler_preemption_victims metric counts the pods selected for preemption.
The number of preemption attempts correlates strongly with the value of the scheduler_schedule_attempts_total
metric when the value of the result
label is unschedulable
. The two values aren't equivalent: for example, if a cluster has 0 nodes, there are no preemption attempts but there might be scheduling attempts that fail.
For more information, see Resource issues.
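For example, the following PromQL query (a sketch; CLUSTER_NAME is a placeholder) shows the rate of preemption attempts in a cluster; you can compare it with the rate of scheduling attempts whose result label is unschedulable:

sum(rate(scheduler_preemption_attempts_total{cluster="CLUSTER_NAME"}[5m]))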
Monitoring the scheduler

The scheduler metrics can give you insight into the performance of your scheduler, in particular scheduler latency and resource issues. This section describes how to use the scheduler metrics to monitor your scheduler.
Scheduler latency

The scheduler's task is to ensure that your pods run, so you want to know when the scheduler is stuck or running slowly. To verify that the scheduler is operational, check that the value of the scheduler_schedule_attempts_total metric is increasing. When the scheduler is running slowly, investigate the following possible causes:
The number of pending pods is increasing. Use the scheduler_pending_pods
metric to monitor the number of pending pods. The following PromQL query returns the number of pending pods per queue in a cluster:
sum by (queue) (delta(scheduler_pending_pods{cluster="CLUSTER_NAME"}[2m]))
Individual attempts to schedule pods are slow. Use the scheduler_scheduling_attempt_duration_seconds
metric to monitor the latency of scheduling attempts.
We recommend observing this metric at least at the 50th and 95th percentiles. The following PromQL query retrieves 95th percentile values but can be adjusted:
sum by (instance) (histogram_quantile(0.95, rate( scheduler_scheduling_attempt_duration_seconds_bucket{cluster="CLUSTER_NAME"}[5m])))
Resource issues

The scheduler metrics can also help you assess whether you have sufficient resources. If the value of the scheduler_preemption_attempts_total
metric is increasing, then check the value of scheduler_preemption_victims
by using the following PromQL query:
scheduler_preemption_victims_sum{cluster="CLUSTER_NAME"}
The number of preemption attempts and the number of preemption victims both increase when there are higher priority pods to schedule. The preemption metrics don't tell you whether the high-priority pods that triggered the preemptions were scheduled, so when you see increases in the value of the preemption metrics, you can also monitor the value of the scheduler_pending_pods
metric. If the number of pending pods is also increasing, then you might not have sufficient resources to handle the higher-priority pods; you might need to scale up the available resources, create new pods with reduced resource claims, or change the node selector.
If the number of preemption victims is not increasing, then there are no remaining pods with low priority that can be removed. In this case, consider adding more nodes so the new pods can be allocated.
If the number of preemption victims is increasing, then there are higher-priority pods waiting to be scheduled, so the scheduler is preempting some of the running pods. The preemption metrics don't tell you whether the higher priority pods have been scheduled successfully.
To determine if the higher-priority pods are being scheduled, look for decreasing values of the scheduler_pending_pods
metric. If the value of this metric is increasing, then you might need to add more nodes.
You can expect to see temporary spikes in the values for the scheduler_pending_pods
metric when workloads are going to be scheduled in your cluster, for example, during events like updates or scalings. If you have sufficient resources in your cluster, these spikes are temporary. If the number of pending pods doesn't go down, do the following:
If pods can't be scheduled because of insufficient resources, then consider freeing up some of the existing nodes or increasing the number of nodes.
Controller Manager metrics

When controller manager metrics are enabled, all metrics shown in the following table are exported to Cloud Monitoring in the same project as the GKE cluster.
The Cloud Monitoring metric names in this table must be prefixed with prometheus.googleapis.com/
. That prefix has been omitted from the entries in the table.
PromQL metric name | Launch stage | Cloud Monitoring metric name | Kind, Type, Unit | Labels
node_collector_evictions_total | GA | node_collector_evictions_total/counter | Cumulative, Double, 1 | zone
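For example, the following PromQL query (a sketch; CLUSTER_NAME is a placeholder for your cluster's name) shows the number of node evictions over the last ten minutes, broken down by the zone label; a sustained increase can point to unhealthy nodes in a particular zone:

sum by (zone) (increase(node_collector_evictions_total{cluster="CLUSTER_NAME"}[10m]))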