The NVIDIA GPU administration dashboard

Introduction

The OpenShift Console NVIDIA GPU plugin is a dedicated administration dashboard for NVIDIA GPU usage visualization in the OpenShift Container Platform (OCP) Console. The visualizations in the administration dashboard provide guidance on how to best optimize GPU resources in clusters, such as when a GPU is under- or over-utilized.

The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the plugin, the OCP console must be running.

Installing the NVIDIA GPU administration dashboard

Install the NVIDIA GPU plugin by using Helm to add GPU capabilities to the OpenShift Container Platform (OCP) Console.

The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the OpenShift Console NVIDIA GPU plugin, an instance of the OCP console must be running.
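
Before you install the plugin, you can confirm that a console instance is available. The following check is a minimal sketch; it assumes the default console deployment in the openshift-console namespace:

    $ oc get clusteroperator console
    $ oc -n openshift-console get pods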

Prerequisites
  • Red Hat OpenShift 4.11+

  • NVIDIA GPU operator

  • Helm
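
You can verify these prerequisites before you begin. The following commands are a minimal sketch; they assume that the NVIDIA GPU Operator runs in the default nvidia-gpu-operator namespace:

    $ oc version
    $ helm version
    $ oc -n nvidia-gpu-operator get pods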

Procedure

Use the following procedure to install the OpenShift Console NVIDIA GPU plugin.

  1. Add the Helm repository:

    $ helm repo add rh-ecosystem-edge https://rh-ecosystem-edge.github.io/console-plugin-nvidia-gpu
    $ helm repo update
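
    Optionally, confirm that the chart is now available from the repository. This check is a sketch; the chart it lists is the one installed in the next step:

    $ helm search repo rh-ecosystem-edge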
  2. Install the Helm chart in the default NVIDIA GPU operator namespace:

    $ helm install -n nvidia-gpu-operator console-plugin-nvidia-gpu rh-ecosystem-edge/console-plugin-nvidia-gpu
    Example output
    NAME: console-plugin-nvidia-gpu
    LAST DEPLOYED: Tue Aug 23 15:37:35 2022
    NAMESPACE: nvidia-gpu-operator
    STATUS: deployed
    REVISION: 1
    NOTES:
    View the Console Plugin NVIDIA GPU deployed resources by running the following command:
    
    $ oc -n {{ .Release.Namespace }} get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu
    
    Enable the plugin by running the following command:
    
    # Check if a plugins field is specified
    $ oc get consoles.operator.openshift.io cluster --output=jsonpath="{.spec.plugins}"
    
    # if not, then run the following command to enable the plugin
    $ oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "plugins": ["console-plugin-nvidia-gpu"] } }' --type=merge
    
    # if yes, then run the following command to enable the plugin
    $ oc patch consoles.operator.openshift.io cluster --patch '[{"op": "add", "path": "/spec/plugins/-", "value": "console-plugin-nvidia-gpu" }]' --type=json
    
    # add the required DCGM Exporter metrics configmap to the existing NVIDIA operator ClusterPolicy CR:
    oc patch clusterpolicies.nvidia.com gpu-cluster-policy --patch '{ "spec": { "dcgmExporter": { "config": { "name": "console-plugin-nvidia-gpu" } } } }' --type=merge
    

    The dashboard relies mostly on Prometheus metrics exposed by the NVIDIA DCGM Exporter, but the default exposed metrics are not enough for the dashboard to render the required gauges. Therefore, the DCGM exporter is configured to expose a custom set of metrics, as shown in the following config map.

    apiVersion: v1
    data:
      dcgm-metrics.csv: |
        DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization.
        DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization.
        DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization.
        DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization.
        DCGM_FI_DEV_POWER_USAGE, gauge, power usage.
        DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit.
        DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp.
        DCGM_FI_DEV_SM_CLOCK, gauge, sm clock.
        DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock.
        DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock.
        DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock.
    kind: ConfigMap
    metadata:
      annotations:
        meta.helm.sh/release-name: console-plugin-nvidia-gpu
        meta.helm.sh/release-namespace: nvidia-gpu-operator
      creationTimestamp: "2022-10-26T19:46:41Z"
      labels:
        app.kubernetes.io/component: console-plugin-nvidia-gpu
        app.kubernetes.io/instance: console-plugin-nvidia-gpu
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: console-plugin-nvidia-gpu
        app.kubernetes.io/part-of: console-plugin-nvidia-gpu
        app.kubernetes.io/version: latest
        helm.sh/chart: console-plugin-nvidia-gpu-0.2.3
      name: console-plugin-nvidia-gpu
      namespace: nvidia-gpu-operator
      resourceVersion: "19096623"
      uid: 96cdf700-dd27-437b-897d-5cbb1c255068

    Install the config map and edit the NVIDIA operator ClusterPolicy CR to add that config map to the DCGM exporter configuration. The config map is installed by the new version of the Console Plugin NVIDIA GPU Helm chart, but you must edit the ClusterPolicy CR yourself.
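
    After you patch the ClusterPolicy CR, you can confirm that the DCGM exporter configuration now references the config map. This is a sketch of one possible check; the expected output is console-plugin-nvidia-gpu:

    $ oc get clusterpolicies.nvidia.com gpu-cluster-policy --output=jsonpath="{.spec.dcgmExporter.config.name}"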

  3. View the deployed resources:

    $ oc -n nvidia-gpu-operator get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu
    Example output
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/console-plugin-nvidia-gpu-7dc9cfb5df-ztksx   1/1     Running   0          2m6s
    
    NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/console-plugin-nvidia-gpu   ClusterIP   172.30.240.138   <none>        9443/TCP   2m6s
    
    NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/console-plugin-nvidia-gpu   1/1     1            1           2m6s
    
    NAME                                                   DESIRED   CURRENT   READY   AGE
    replicaset.apps/console-plugin-nvidia-gpu-7dc9cfb5df   1         1         1       2m6s
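
You can also verify that the DCGM exporter is exposing the custom metrics that the dashboard requires. The following commands are a sketch; they assume that the exporter pods carry the app=nvidia-dcgm-exporter label, listen on the default port 9400, and include curl in their image:

    $ DCGM_POD=$(oc -n nvidia-gpu-operator get pods -l app=nvidia-dcgm-exporter -o jsonpath="{.items[0].metadata.name}")
    $ oc -n nvidia-gpu-operator exec $DCGM_POD -- curl -s localhost:9400/metrics | grep DCGM_FI_PROF_GR_ENGINE_ACTIVE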

Using the NVIDIA GPU administration dashboard

After deploying the OpenShift Console NVIDIA GPU plugin, log in to the OpenShift Container Platform web console using your login credentials to access the Administrator perspective.

To view the changes, refresh the console; the GPUs tab then appears under Compute.
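
If the GPUs tab does not appear after you refresh the console, you can check that the plugin resource exists and is enabled. This is a sketch of one way to check; the plugin name comes from the installation procedure:

    $ oc get consoleplugins.console.openshift.io console-plugin-nvidia-gpu
    $ oc get consoles.operator.openshift.io cluster --output=jsonpath="{.spec.plugins}"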

Viewing the cluster GPU overview

You can view the status of your cluster GPUs in the Overview page by selecting Overview in the Home section.

The Overview page provides information about the cluster GPUs, including:

  • Details about the GPU providers

  • Status of the GPUs

  • Cluster utilization of the GPUs

Viewing the GPUs dashboard

You can view the NVIDIA GPU administration dashboard by selecting GPUs in the Compute section of the OpenShift Console.

Charts on the GPUs dashboard include:

  • GPU utilization: Shows the ratio of time the graphics engine is active and is based on the DCGM_FI_PROF_GR_ENGINE_ACTIVE metric.

  • Memory utilization: Shows the memory being used by the GPU and is based on the DCGM_FI_DEV_MEM_COPY_UTIL metric.

  • Encoder utilization: Shows the video encoder rate of utilization and is based on the DCGM_FI_DEV_ENC_UTIL metric.

  • Decoder utilization: Shows the video decoder rate of utilization and is based on the DCGM_FI_DEV_DEC_UTIL metric.

  • Power consumption: Shows the average power usage of the GPU in Watts and is based on the DCGM_FI_DEV_POWER_USAGE metric.

  • GPU temperature: Shows the current GPU temperature and is based on the DCGM_FI_DEV_GPU_TEMP metric. The maximum is set to 110, which is an empirical number, as the actual number is not exposed via a metric.

  • GPU clock speed: Shows the average clock speed utilized by the GPU and is based on the DCGM_FI_DEV_SM_CLOCK metric.

  • Memory clock speed: Shows the average clock speed utilized by memory and is based on the DCGM_FI_DEV_MEM_CLOCK metric.
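
The metrics behind these charts can also be queried outside the dashboard. The following sketch queries one of the listed metrics through the cluster monitoring API; it assumes the default thanos-querier route in the openshift-monitoring namespace and a user with permission to read cluster metrics:

    $ TOKEN=$(oc whoami --show-token)
    $ HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath="{.spec.host}")
    $ curl -k -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?query=DCGM_FI_DEV_GPU_TEMP"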

Viewing the GPU metrics

You can view the metrics for the GPUs by selecting the metric at the bottom of each GPU chart; this opens the Metrics page.

On the Metrics page, you can:

  • Specify a refresh rate for the metrics

  • Add, run, disable, and delete queries

  • Insert Metrics

  • Reset the zoom view