Optimize Pod autoscaling based on metrics | Kubernetes Engine

Deploying the Custom Metrics Adapter

The Custom Metrics Adapter lets your cluster send and receive metrics with Cloud Monitoring.

Pub/Sub

The procedure to install the Custom Metrics Adapter differs for clusters with or without Workload Identity Federation for GKE enabled. Select the option matching the setup you chose when you created your cluster.

Workload Identity

Grant your user the ability to create required authorization roles:

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user "$(gcloud config get-value account)"

Deploy the custom metrics adapter on your cluster:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml

The adapter uses the custom-metrics-stackdriver-adapter Kubernetes service account in the custom-metrics namespace. Allow this service account to read Cloud Monitoring metrics by assigning the Monitoring Viewer role:

gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
  --role roles/monitoring.viewer \
  --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/custom-metrics/sa/custom-metrics-stackdriver-adapter
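
If you want to confirm the adapter is healthy before moving on, you can check its Pod and the metric APIs it registers. This is a quick sanity check, assuming the default manifest above:

# The adapter runs in the custom-metrics namespace created by the manifest.
kubectl get pods -n custom-metrics

# The adapter registers the custom and external metrics APIs with the API server;
# you should see entries such as custom.metrics.k8s.io and external.metrics.k8s.io.
kubectl get apiservices | grep metrics.k8s.io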
Legacy Authentication

Grant your user the ability to create required authorization roles:

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user "$(gcloud config get-value account)"

Deploy the custom metrics adapter on your cluster:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
Custom Metric

The Custom Metrics Adapter installation is the same for this option: follow the preceding Workload Identity or Legacy Authentication steps to grant yourself the cluster-admin role, deploy the adapter, and, if Workload Identity Federation for GKE is enabled, grant the adapter's service account the Monitoring Viewer role.
Deploying an application with metrics

Download the repository containing the application code for this tutorial:

Pub/Sub
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
cd kubernetes-engine-samples/databases/cloud-pubsub
Custom Metric
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
cd kubernetes-engine-samples/observability/custom-metrics-autoscaling/google-managed-prometheus

The repository contains code that exports metrics to Cloud Monitoring:

Pub/Sub

This application polls a Pub/Sub subscription for new messages, acknowledging them as they arrive. Pub/Sub subscription metrics are automatically collected by Cloud Monitoring.

Custom Metric

This application responds to any web request to the /metrics path with a constant value metric using the Prometheus format.
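
After you deploy the application in a later step, you can spot-check this endpoint yourself. A minimal sketch, assuming the container listens on port 8080 (the actual port is defined in the sample manifest):

# In one terminal, forward a local port to the Deployment (8080 is an assumption;
# check the sample manifest for the real container port):
kubectl port-forward deploy/custom-metrics-gmp 8080:8080

# In another terminal, the response should include the constant gauge, roughly:
#   custom_prometheus 40
curl -s localhost:8080/metrics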

The repository also contains a Kubernetes manifest to deploy the application to your cluster. A Deployment is a Kubernetes API object that lets you run multiple replicas of Pods that are distributed among the nodes in a cluster:

Pub/Sub

The manifest differs for clusters with or without Workload Identity Federation for GKE enabled. Select the option matching the setup you chose when you created your cluster.

Workload Identity

The manifest deployment/pubsub-with-workload-identity.yaml runs the application under the pubsub-sa Kubernetes service account, which you bind to an IAM role in a later step.

Legacy authentication

The manifest deployment/pubsub-with-secret.yaml references the pubsub-key Secret that you create in a later step, so that the application can authenticate with a service account key.

Custom Metric

With the PodMonitoring resource, Google Cloud Managed Service for Prometheus exports the Prometheus metrics to Cloud Monitoring.

Google Cloud Managed Service for Prometheus is enabled by default in GKE Standard clusters running version 1.27 or later and in GKE Autopilot clusters running version 1.25 or later. To enable it in clusters running earlier versions, see Enable managed collection.
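
For orientation, a PodMonitoring resource is a short manifest that selects Pods by label and names the port and scrape interval to collect from. The sketch below is illustrative only (the label, port, and name are assumptions; the tutorial's actual resource is in custom-metrics-gmp.yaml) and uses a client-side dry run so nothing is created:

# Requires the managed Prometheus CRDs (installed when managed collection is enabled).
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: custom-metrics-gmp-sketch   # illustrative name
spec:
  selector:
    matchLabels:
      app: custom-metrics-gmp       # assumed Pod label
  endpoints:
  - port: metrics                   # assumed port name on the Pod
    interval: 30s
EOF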

Deploy the application to your cluster:

Pub/Sub

The procedure to deploy your application differs for clusters with or without Workload Identity Federation for GKE enabled. Select the option matching the setup you chose when you created your cluster.

Workload Identity
  1. Enable the Pub/Sub API on your project:

    gcloud services enable cloudresourcemanager.googleapis.com pubsub.googleapis.com
    
  2. Create a Pub/Sub topic and subscription:

    gcloud pubsub topics create echo
    gcloud pubsub subscriptions create echo-read --topic=echo
    
  3. Deploy the application to your cluster:

    kubectl apply -f deployment/pubsub-with-workload-identity.yaml
    
  4. The application manifest defines a pubsub-sa Kubernetes service account. Assign it the Pub/Sub Subscriber role so that the application can pull messages from the Pub/Sub subscription:

    gcloud projects add-iam-policy-binding projects/$PROJECT_ID \
      --role=roles/pubsub.subscriber \
      --member=principal://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$PROJECT_ID.svc.id.goog/subject/ns/default/sa/pubsub-sa
    

    The preceding command uses a principal identifier, which lets IAM refer to the Kubernetes service account directly; a quick verification sketch follows this list.

    Best practice:

    Use principal identifiers rather than the alternative method of linking a Kubernetes service account to an IAM service account, and consider the limitations described for that alternative.
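
As a quick check that the Workload Identity binding works, you can publish a message and read the application logs. This is a sketch: the Deployment name is taken from the sample manifest, and IAM changes can take a minute or two to propagate.

# Publish a test message to the topic.
gcloud pubsub topics publish echo --message="hello"

# The application should log that it pulled and acknowledged the message.
kubectl logs deploy/pubsub --tail=10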

Legacy authentication
  1. Enable the Pub/Sub API on your project:

    gcloud services enable cloudresourcemanager.googleapis.com pubsub.googleapis.com
    
  2. Create a Pub/Sub topic and subscription:

    gcloud pubsub topics create echo
    gcloud pubsub subscriptions create echo-read --topic=echo
    
  3. Create a service account with access to Pub/Sub:

    gcloud iam service-accounts create autoscaling-pubsub-sa
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member "serviceAccount:autoscaling-pubsub-sa@$PROJECT_ID.iam.gserviceaccount.com" \
      --role "roles/pubsub.subscriber"
    
  4. Download the service account key file:

    gcloud iam service-accounts keys create key.json \
      --iam-account autoscaling-pubsub-sa@$PROJECT_ID.iam.gserviceaccount.com
    
  5. Import the service account key to your cluster as a Secret:

    kubectl create secret generic pubsub-key --from-file=key.json=./key.json
    
  6. Deploy the application to your cluster:

    kubectl apply -f deployment/pubsub-with-secret.yaml
    
Custom Metric
kubectl apply -f custom-metrics-gmp.yaml
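
If you want to confirm that managed collection picked up the scrape target defined by the manifest, you can list PodMonitoring resources (assuming managed collection is enabled):

kubectl get podmonitoring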

After a moment, the application finishes deploying and all Pods reach the Ready state:

Pub/Sub
kubectl get pods

Output:

NAME                     READY   STATUS    RESTARTS   AGE
pubsub-8cd995d7c-bdhqz   1/1     Running   0          58s
Custom Metric
kubectl get pods

Output:

NAME                                  READY   STATUS    RESTARTS   AGE
custom-metrics-gmp-865dffdff9-x2cg9   1/1     Running   0          49s
Viewing metrics on Cloud Monitoring

As your application runs, it writes your metrics to Cloud Monitoring.

To view the metrics for a monitored resource by using the Metrics Explorer, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the Metric element, expand the Select a metric menu, and then select a resource type and metric type. For example, to chart the CPU utilization of a virtual machine, do the following:
    1. (Optional) To reduce the menu's options, enter part of the metric name in the Filter bar. For this example, enter utilization.
    2. In the Active resources menu, select VM instance.
    3. In the Active metric categories menu, select Instance.
    4. In the Active metrics menu, select CPU utilization and then click Apply.
  3. To filter which time series are displayed, use the Filter element.

  4. To combine time series, use the menus on the Aggregation element. For example, to display the CPU utilization for your VMs, based on their zone, set the first menu to Mean and the second menu to zone.

    All time series are displayed when the first menu of the Aggregation element is set to Unaggregated. The default settings for the Aggregation element are determined by the metric type you selected.

For each option, the resource type and metric are the following:

Pub/Sub

Metrics Explorer

Resource type: pubsub_subscription

Metric: pubsub.googleapis.com/subscription/num_undelivered_messages

Custom Metric

Metrics Explorer

Resource type: prometheus_target

Metric: prometheus.googleapis.com/custom_prometheus/gauge

Depending on the metric, you might not see much activity on the Cloud Monitoring Metrics Explorer yet. Don't be surprised if your metric isn't updating.
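
You can also query the same metrics through the Kubernetes metrics APIs that the adapter serves, which is a useful check that the HorizontalPodAutoscaler will be able to read them. A sketch for the Pub/Sub backlog metric, assuming the adapter encodes the "/" characters in the metric name as "|" (the same approach works for custom metrics through custom.metrics.k8s.io):

kubectl get --raw \
  "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/pubsub.googleapis.com|subscription|num_undelivered_messages"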

Creating a HorizontalPodAutoscaler object

When you see your metric in Cloud Monitoring, you can deploy a HorizontalPodAutoscaler to resize your Deployment based on your metric.

Deploy the HorizontalPodAutoscaler to your cluster:

Pub/Sub
kubectl apply -f deployment/pubsub-hpa.yaml
Custom Metric
kubectl apply -f custom-metrics-gmp-hpa.yaml
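
For orientation, a HorizontalPodAutoscaler that scales on an external Cloud Monitoring metric served by the adapter has roughly the shape below. The replica counts, target value, and selector are placeholders; the tutorial's actual definitions live in deployment/pubsub-hpa.yaml and custom-metrics-gmp-hpa.yaml. The sketch uses a client-side dry run so nothing is changed:

kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-hpa-sketch            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub                     # Deployment name from the sample manifest
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: echo-read   # illustrative filter
      target:
        type: AverageValue
        averageValue: "5"            # placeholder target; see the sample manifest
EOF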
Generating load

For some metrics, you might need to generate load to watch the autoscaling:

Pub/Sub

Publish 200 messages to the Pub/Sub topic:

for i in {1..200}; do gcloud pubsub topics publish echo --message="Autoscaling #${i}"; done
Custom Metric

Not Applicable: The code used in this sample exports a constant value of 40 for the custom metric. The HorizontalPodAutoscaler is set with a target value of 20, so it attempts to scale up the Deployment automatically.

You might need to wait a couple of minutes for the HorizontalPodAutoscaler to respond to the metric changes.
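
To follow the HorizontalPodAutoscaler as it reacts, you can watch it update in place (press Ctrl+C to stop):

kubectl get hpa --watch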

Observing HorizontalPodAutoscaler scaling up

You can check the current number of replicas of your Deployment by running:

kubectl get deployments

After the metric has had time to propagate, the Deployment scales up to five Pods to handle the backlog.

You can also inspect the state and recent activity of the HorizontalPodAutoscaler by running:

kubectl describe hpa
