NVIDIA Run:ai System Monitoring

This section explains how to configure NVIDIA Run:ai to generate health alerts and to connect these alerts to alert-management systems within your organization. Alerts are generated for NVIDIA Run:ai clusters.

NVIDIA Run:ai uses Prometheus for externalizing metrics and providing visibility to end users. The NVIDIA Run:ai cluster installation includes Prometheus, or it can connect to an existing Prometheus instance used in your organization. Alerts are based on the Prometheus AlertManager, which is enabled by default once installed.

This document explains how to set up monitoring alerts, connect them to alert destinations such as email and webhooks, and add custom alerts. Use the steps below to set up monitoring alerts.

Validating That the Prometheus Operator Is Installed
  1. Verify that the Prometheus Operator Deployment is running. Copy the following command and paste it in your terminal, where you have access to the Kubernetes cluster:

kubectl get deployment kube-prometheus-stack-operator -n monitoring

The output indicates the deployment's status, including the number of replicas and their current state.
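For reference, a healthy operator deployment typically looks similar to the following (names and ages are illustrative and will differ in your cluster):

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
kube-prometheus-stack-operator   1/1     1            1           30d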

  2. Verify that Prometheus instances are running. Copy the following command and paste it in your terminal:

kubectl get prometheus -n runai

You can see the Prometheus instance(s) listed along with their status.
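The exact columns depend on the Prometheus Operator version; an illustrative example of the output:

NAME    VERSION   DESIRED   READY   RECONCILED   AVAILABLE   AGE
runai   v2.x.x    1         1       True         True        30d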

Enabling Prometheus AlertManager

In each of the steps in this section, copy the content of the code snippet to a new YAML file (e.g., step1.yaml) and apply it to the cluster.

  1. Copy the following AlertManager CustomResource into a new YAML file (e.g., step1.yaml) to enable AlertManager:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
   name: runai
   namespace: runai
spec:
   replicas: 1
   alertmanagerConfigSelector:
      matchLabels:
         alertmanagerConfig: runai

  2. Apply the YAML file to the cluster by pasting the following command in your terminal:

kubectl apply -f step1.yaml

  3. Validate that the AlertManager instance has started:

kubectl get alertmanager -n runai

  4. Validate that the Prometheus Operator has created a Service for AlertManager:

kubectl get svc alertmanager-operated -n runai
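Illustrative output for these checks; the columns depend on the Prometheus Operator version, and alertmanager-operated is a headless Service created automatically by the operator:

NAME    VERSION   REPLICAS   AGE
runai   v0.x.x    1          5m

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated   ClusterIP   None         <none>        9093/TCP,9094/TCP,9094/UDP   5m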
Configuring Prometheus to Send Alerts
  1. Open the terminal on your local machine or another machine that has access to your Kubernetes cluster

  2. Copy and paste the following command in your terminal to edit the Prometheus configuration for the runai Namespace:

kubectl edit prometheus runai -n runai

This command opens the Prometheus configuration file in your default text editor (usually vi or nano).

  3. Add the following alerting configuration under the spec section of the resource:

alerting:  
   alertmanagers:  
      - namespace: runai  
        name: alertmanager-operated  
        port: web
  4. Save the changes and exit the text editor.

Note

To save changes using vi, type :wq and press Enter. The changes are applied to the Prometheus configuration in the cluster.
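If you prefer not to edit the resource interactively, a hedged alternative is to apply the same change with a merge patch; this is a convenience sketch, not part of the documented procedure:

kubectl patch prometheus runai -n runai --type merge -p '{"spec":{"alerting":{"alertmanagers":[{"namespace":"runai","name":"alertmanager-operated","port":"web"}]}}}'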

Set out below are the various alert destinations.

Configuring AlertManager for Custom Email Alerts

In each step, copy the contents of the code snippet to a new file and apply it to the cluster using kubectl apply -f.

  1. Add your SMTP password as a Kubernetes Secret:

apiVersion: v1  
kind: Secret  
metadata:  
   name: alertmanager-smtp-password  
   namespace: runai  
stringData:
   password: "your_smtp_password"
  2. Replace the relevant SMTP details with your own, then apply the AlertmanagerConfig using kubectl apply:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: runai
  namespace: runai
  labels:
    alertmanagerConfig: runai
spec:
  route:
    continue: true
    groupBy:
    - alertname
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 1h
    matchers:
    - matchType: =~
      name: alertname
      value: Runai.*
    receiver: email
  receivers:
  - name: 'email'
    emailConfigs:
    - to: '<destination_email_address>'
      from: '<from_email_address>'
      smarthost: 'smtp.gmail.com:587'
      authUsername: '<smtp_server_user_name>'
      authPassword:
        name: alertmanager-smtp-password
        key: password
  3. Save the file and apply it to the cluster. The configuration is automatically reloaded.
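To confirm that the AlertmanagerConfig was accepted, the following hedged checks can be used; the pod name assumes the operator's default alertmanager-<name>-0 naming:

kubectl get alertmanagerconfig runai -n runai
kubectl logs alertmanager-runai-0 -n runai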

Third Party Alert Destinations

Prometheus AlertManager provides a structured way to connect to alert-management systems. There are built-in plugins for popular systems such as PagerDuty and OpsGenie, including a generic Webhook.

Example: Integrating NVIDIA Run:ai With a Webhook
  1. Use the cluster upgrade instructions to modify the values file. Add the following to the values file, replacing <WEB-HOOK-URL> with the URL from webhook.site, then upgrade the cluster (an example upgrade command follows this list):

kube-prometheus-stack:  
  ...  
  alertmanager:  
    enabled: true  
    config:  
      global:  
        resolve_timeout: 5m  
      receivers:  
      - name: "null"  
      - name: webhook-notifications  
        webhook_configs:  
          - url: <WEB-HOOK-URL>  
            send_resolved: true  
      route:  
        group_by:  
        - alertname  
        group_interval: 5m  
        group_wait: 30s  
        receiver: 'null'  
        repeat_interval: 10m  
        routes:  
        - receiver: webhook-notifications
  2. Verify that you are receiving alerts on webhook.site; incoming requests appear in the left pane.
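A minimal sketch of the upgrade step referenced in step 1, assuming the release name runai-cluster and the runai/runai-cluster Helm chart used by a default NVIDIA Run:ai cluster installation; your release name, chart repository, version, and namespace may differ, so follow the cluster upgrade instructions for your environment:

helm upgrade -i runai-cluster runai/runai-cluster -n runai -f values.yaml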

An NVIDIA Run:ai cluster comes with several built-in alerts. Each alert notifies on a specific function of an NVIDIA Run:ai entity. There is also a single, inclusive alert, NVIDIA Run:ai Critical Problems, which aggregates all component-based alerts into a single cluster health test.

NVIDIA Run:ai Agent Cluster Info Push Rate Low

The cluster-sync Pod in the runai namespace might not be functioning properly

Possible impact - no info/partial info from the cluster is being synced back to the control-plane

Run kubectl get pod -n runai to see whether the cluster-sync pod is running.

Troubleshooting/Mitigation

To diagnose issues with the cluster-sync pod, follow these steps:

  1. Describe the Deployment: Paste the following command in your terminal to get detailed information about the cluster-sync deployment: kubectl describe deployment cluster-sync -n runai

  2. Check the Logs: Use the following command to view the logs of the cluster-sync deployment: kubectl logs deployment/cluster-sync -n runai

  3. Analyze the Logs and Pod Details: From the information provided by the logs and the deployment details, attempt to identify the reason why the cluster-sync pod is not functioning correctly

  4. Check Connectivity: Ensure there is a stable network connection between the cluster and the NVIDIA Run:ai Control Plane. A connectivity issue may be the root cause of the problem.

  5. Contact Support: If the network connection is stable and you are still unable to resolve the issue, contact NVIDIA Run:ai support for further assistance

NVIDIA Run:ai Agent Pull Rate Low

The runai-agent pod may be too loaded, is slow in processing data (possible in very big clusters), or the runai-agent pod itself in the runai namespace may not be functioning properly.

Possible impact - no info/partial info from the control-plane is being synced in the cluster

Run kubectl get pod -n runai and check whether the runai-agent pod is running.

Troubleshooting/Mitigation

To diagnose issues with the runai-agent pod, follow these steps:

  1. Describe the Deployment: Run the following command to get detailed information about the runai-agent deployment: kubectl describe deployment runai-agent -n runai

  2. Check the Logs: Use the following command to view the logs of the runai-agent deployment: kubectl logs deployment/runai-agent -n runai

  3. Analyze the Logs and Pod Details: From the information provided by the logs and the deployment details, attempt to identify the reason why the runai-agent pod is not functioning correctly. There may be a connectivity issue with the control plane.

  4. Check Connectivity: Ensure there is a stable network connection between the runai-agent and the control plane. A connectivity issue may be the root cause of the problem.

  5. Consider Cluster Load: If the runai-agent appears to be functioning properly but the cluster is very large and heavily loaded, it may take more time for the agent to process data from the control plane.

  6. Adjust Alert Threshold: If the cluster load is causing the alert to fire, you can adjust the threshold at which the alert triggers. The default value is 0.05; you can try lowering it (e.g., to 0.045 or 0.04). To edit the value, paste the following in your terminal: kubectl edit runaiconfig -n runai. In the editor, navigate to spec -> prometheus -> agentPullPushRateMinForAlert. If the agentPullPushRateMinForAlert value does not exist, add it under spec -> prometheus (see the sketch after this list).
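For reference, a minimal sketch of where this value sits inside the runaiconfig spec, based on the path described in the step above:

spec:
  prometheus:
    agentPullPushRateMinForAlert: 0.04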

NVIDIA Run:ai Container Memory Usage Critical

A Run:ai container is using more than 90% of its memory limit.

The container might run out of memory and crash.

Calculate the memory usage by running the following query in Prometheus: container_memory_usage_bytes{namespace=~"runai

Troubleshooting/Mitigation

Add more memory resources to the container. If the issue persists, contact NVIDIA Run:ai
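A minimal PromQL sketch of this check, assuming the standard cAdvisor container metrics and the runai and runai-backend namespaces; the exact expression used by the built-in alert may differ, and the warning variant below uses a 0.8 threshold:

container_memory_usage_bytes{namespace=~"runai|runai-backend", container!=""}
  / container_spec_memory_limit_bytes{namespace=~"runai|runai-backend", container!=""} > 0.9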

NVIDIA Run:ai Container Memory Usage Warning

A Run:ai container is using more than 80% of its memory limit.

The container might run out of memory and crash

Calculate the memory usage by running the following query in Prometheus: container_memory_usage_bytes{namespace=~"runai

Troubleshooting/Mitigation

Add more memory resources to the container. If the issue persists, contact NVIDIA Run:ai

NVIDIA Run:ai Container Restarting

A Run:ai container has restarted more than twice in the last 10 minutes.

The container might become unavailable and impact the NVIDIA Run:ai system

To diagnose the issue and identify the problematic pods, paste these commands into your terminal:

kubectl get pods -n runai
kubectl get pods -n runai-backend

One or more of the pods will have a restart count >= 2.

Troubleshooting/Mitigation

Paste this into your terminal, replacing NAMESPACE and POD_NAME with the relevant pod information from the previous step:

kubectl logs -n NAMESPACE POD_NAME

Check the logs for any standout issues and verify that the container has sufficient resources. If you need further assistance, contact NVIDIA Run:ai.
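To spot restarting pods more quickly, the pod list can be sorted by restart count; this is a convenience sketch using standard kubectl flags, not part of the documented procedure:

kubectl get pods -n runai --sort-by='.status.containerStatuses[0].restartCount'
kubectl get pods -n runai-backend --sort-by='.status.containerStatuses[0].restartCount'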

NVIDIA Run:ai CPU Usage Warning

A Run:ai container is using more than 80% of its CPU limit.

This might cause slowness in the operation of certain NVIDIA Run:ai features.

Run the following query in Prometheus to calculate the CPU usage: rate(container_cpu_usage_seconds_total{namespace=~"runai

Troubleshooting/Mitigation

Add more CPU resources to the container. If the issue persists, please contact NVIDIA Run:ai.
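A minimal PromQL sketch of this check, assuming the standard cAdvisor CPU metrics; the built-in alert's exact expression may differ:

rate(container_cpu_usage_seconds_total{namespace=~"runai|runai-backend", container!=""}[5m])
  / (container_spec_cpu_quota{namespace=~"runai|runai-backend", container!=""} / container_spec_cpu_period{namespace=~"runai|runai-backend", container!=""}) > 0.8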

NVIDIA Run:ai Critical Problem

One of the critical NVIDIA Run:ai alerts is currently active

Impact is based on the active alert

Check NVIDIA Run:ai alerts in Prometheus to identify any active critical alerts

Unknown State Alert for a Node

The Kubernetes node hosting GPU workloads is in an unknown state, and its health and readiness cannot be determined.

This may interrupt GPU workload scheduling and execution.

Critical - the node is in one of the following states: unschedulable, or unknown status.

Check the node's status using kubectl describe node, verify Kubernetes API server connectivity, and inspect system logs for GPU-specific or node-level errors.

Node Memory Alert

The Kubernetes node hosting GPU workloads has insufficient memory to support current or upcoming workloads.

GPU workloads may fail to schedule, experience degraded performance, or crash due to memory shortages, disrupting dependent applications.

Critical - Node is using more than 90% of its memory. Warning - Node is using more than 80% of its memory.

Use kubectl top node to assess memory usage, identify memory-intensive pods, consider resizing the node or optimizing memory usage in affected pods.

NVIDIA Run:ai DaemonSet Rollout Stuck / DaemonSet Unavailable on Nodes

There are currently 0 available pods for the runai daemonset on the relevant node

No support for fractional GPU workloads.

Paste the following command to your terminal:

kubectl get daemonset -n runai-backend

In the result of this command, identify the daemonset(s) that don't have any running pods.

Troubleshooting/Mitigation

Paste the following command to your terminal, replacing X with the problematic daemonset from the previous step:

kubectl describe daemonset X -n runai

Look for the specific error that prevents the daemonset from creating pods.

NVIDIA Run:ai Deployment Insufficient Replicas /Deployment No Available Replicas /Deployment Unavailable Replicas

A Run:ai deployment has one or more unavailable pods.

When this happens, there may be scale issues. Additionally, new versions cannot be deployed, potentially resulting in missing features.

Paste the following commands to your terminal to get the status of the deployments in the runai and runai-backend namespaces:

kubectl get deployment -n runai
kubectl get deployment -n runai-backend

Identify any deployments that have missing pods. Look for discrepancies in the DESIRED and AVAILABLE columns. If the number of AVAILABLE pods is less than the number of DESIRED pods, there are missing pods.

Troubleshooting/Mitigation
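A hedged sketch of the usual checks for a deployment with missing replicas, replacing <DEPLOYMENT_NAME> with the deployment identified above (use -n runai-backend for deployments in that namespace); if the issue persists, contact NVIDIA Run:ai support:

kubectl describe deployment <DEPLOYMENT_NAME> -n runai
kubectl get pods -n runai | grep <DEPLOYMENT_NAME>
kubectl logs deployment/<DEPLOYMENT_NAME> -n runai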

NVIDIA Run:ai Project Controller Reconcile Failure

The project-controller in runai namespace had errors while reconciling projects

Some projects might not be in the “Ready” state. This means that they are not fully operational and may not have all the necessary components running or configured correctly.

Retrieve the logs for the project-controller deployment by pasting the following command in your terminal:

kubectl logs deployment/project-controller -n runai

Carefully examine the logs for any errors or warning messages. These logs help you understand what might be going wrong with the project controller.

Troubleshooting/Mitigation

Once errors in the log have been identified, follow these steps to mitigate the issue:

  1. The error messages in the logs should provide detailed information about the problem. Read through them to understand the nature of the issue. If the logs indicate which project failed to reconcile, you can investigate further by checking the status of that specific project.

  2. Run the following command, replacing <PROJECT_NAME> with the name of the problematic project: kubectl get project <PROJECT_NAME> -o yaml

  3. Review the status section in the YAML output. This section describes the current state of the project and provides insights into what might be causing the failure. If the issue persists, contact NVIDIA Run:ai.

NVIDIA Run:ai StatefulSet Insufficient Replicas / StatefulSet No Available Replicas

A Run:ai StatefulSet has no available pods.

Possible impact - absence of metrics, database unavailability.

To diagnose the issue, follow these steps:

  1. Check the status of the stateful sets in the runai-backend namespace by running the following command: kubectl get statefulset -n runai-backend

  2. Identify any stateful sets that have no running pods. These are the ones that might be causing the problem.

Troubleshooting/Mitigation

Once you've identified the problematic stateful sets, follow these steps to mitigate the issue:

  1. Describe the stateful set to get detailed information on why it cannot create pods. Replace X with the name of the stateful set: kubectl describe statefulset X -n runai-backend

  2. Review the description output to understand the root cause of the issue. Look for events or error messages that explain why the pods are not being created.

  3. If you're unable to resolve the issue based on the information gathered, contact NVIDIA Run:ai support for further assistance.

Adding Custom Alerts

You can add additional alerts for NVIDIA Run:ai. Alerts are triggered by using the Prometheus query language with any NVIDIA Run:ai metric.

To create an alert, add a Prometheus rule to the cluster values file, using the Prometheus query language with NVIDIA Run:ai metrics:

kube-prometheus-stack:  
   additionalPrometheusRulesMap:  
     custom-runai:  
       groups:  
       - name: custom-runai-rules  
         rules:  
         - alert: <ALERT-NAME>  
           annotations:  
             summary: <ALERT-SUMMARY-TEXT>  
           expr:  <PROMQL-EXPRESSION>  
           for: <optional: duration s/m/h>  
           labels:  
             severity: <critical/warning>

You can find an example in the Prometheus documentation.
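For illustration, a filled-in version of the template above; the metric name runai_example_metric is a hypothetical placeholder and should be replaced with a real NVIDIA Run:ai metric available in your Prometheus instance:

kube-prometheus-stack:
   additionalPrometheusRulesMap:
     custom-runai:
       groups:
       - name: custom-runai-rules
         rules:
         - alert: RunaiExampleMetricHigh
           annotations:
             summary: "runai_example_metric has been above 100 for 5 minutes"
           expr: runai_example_metric > 100
           for: 5m
           labels:
             severity: warning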

