Running OpenShift cluster checkups

OKD Virtualization 4.11 includes a diagnostic framework to run predefined checkups that can be used for cluster maintenance and troubleshooting.

The OKD cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

About the OKD cluster checkup framework

A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in the cluster involves setting up the namespace and service account for the framework, creating the ClusterRole and ClusterRoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it.

  • Review the checkup permissions before creating the ClusterRole objects.

  • Verify the names of the ClusterRole objects in the config map. This is because the framework automatically binds these permissions to the checkup instance.
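
For example, before you apply a checkup that a vendor supplies as an input config map, you can inspect the image and the ClusterRole names that it references. The following commands are a minimal sketch that assumes the spec.image and spec.clusterRoles keys used later in this procedure; substitute your own config map name and namespace:

    $ oc get configmap <checkup_config_map> -n <namespace> -o jsonpath='{.data.spec\.image}'
    $ oc get configmap <checkup_config_map> -n <namespace> -o jsonpath='{.data.spec\.clusterRoles}'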

Checking network connectivity and latency for virtual machines on a secondary network

As a cluster administrator, you use a predefined checkup to verify network connectivity and measure latency between virtual machines (VMs) that are attached to a secondary network interface.

To run a checkup for the first time, follow the steps in the procedure.

If you have previously run a checkup, skip to step 5 of the procedure because the steps to install the framework and enable permissions for the checkup are not required.

Prerequisites
  • You installed the OpenShift CLI (oc).

  • You logged in to the cluster as a user with the cluster-admin role.

  • The cluster has at least two worker nodes.

  • The Multus Container Network Interface (CNI) plugin is installed on the cluster.

  • You configured a network attachment definition for a namespace.
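
    For example, the following manifest is a minimal sketch of a network attachment definition that matches the values used later in this procedure (bridge-network in the default namespace). It assumes the cnv-bridge CNI plugin provided by OKD Virtualization and an existing Linux bridge named br1 on the worker nodes; adjust it to your environment.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: bridge-network
      namespace: default
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "bridge-network",
          "type": "cnv-bridge",
          "bridge": "br1"
        }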

Procedure
  1. Create a configuration file that contains the resources to set up the framework. This includes a namespace and service account for the framework, and the ClusterRole and ClusterRoleBinding objects to define permissions for the service account.

    Example framework manifest file
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kiagnose
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kiagnose
      namespace: kiagnose
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kiagnose
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs:
          - get
          - list
          - create
          - update
          - patch
      - apiGroups: [ "" ]
        resources: [ "namespaces" ]
        verbs:
          - get
          - list
          - create
          - delete
          - watch
      - apiGroups: [ "" ]
        resources: [ "serviceaccounts" ]
        verbs:
          - get
          - list
          - create
      - apiGroups: [ "rbac.authorization.k8s.io" ]
        resources:
          - roles
          - rolebindings
          - clusterrolebindings
        verbs:
          - get
          - list
          - create
          - delete
      - apiGroups: [ "rbac.authorization.k8s.io" ]
        resources:
          - clusterroles
        verbs:
          - get
          - list
          - create
          - bind
      - apiGroups: [ "batch" ]
        resources: [ "jobs" ]
        verbs:
          - get
          - list
          - create
          - delete
          - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kiagnose
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kiagnose
    subjects:
      - kind: ServiceAccount
        name: kiagnose
        namespace: kiagnose
    ...
  2. Apply the framework manifest:

    $ oc apply -f <framework_manifest>.yaml
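
    Optionally, confirm that the framework resources were created before you continue. These commands assume the names from the example framework manifest above:

    $ oc get namespace kiagnose
    $ oc get serviceaccount kiagnose -n kiagnose
    $ oc get clusterrole/kiagnose clusterrolebinding/kiagnose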
  3. Create a configuration file that contains the ClusterRole and Role objects with permissions that the checkup requires for cluster access:

    Example cluster role manifest file
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
  4. Apply the checkup roles manifest:

    $ oc apply -f <latency_roles>.yaml
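
    Optionally, review the permissions that this role grants before you reference it in the checkup config map:

    $ oc get clusterrole kubevirt-vm-latency-checker -o yaml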
  5. Create a ConfigMap manifest that contains the input parameters for the checkup. The config map provides the input for the framework to run the checkup and also stores the results of the checkup.

    Example input config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup
      namespace: kiagnose
    data:
      spec.image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.11.0
      spec.timeout: 10m
      spec.clusterRoles: |
        kubevirt-vm-latency-checker
      spec.param.network_attachment_definition_namespace: "default" (1)
      spec.param.network_attachment_definition_name: "bridge-network" (2)
      spec.param.max_desired_latency_milliseconds: "10" (3)
      spec.param.sample_duration_seconds: "5" (4)
    1 The namespace where the NetworkAttachmentDefinition object resides.
    2 The name of the NetworkAttachmentDefinition object.
    3 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the check fails.
    4 Optional: The duration of the latency check, in seconds.
  6. Create the config map in the framework’s namespace:

    $ oc apply -f <latency_config_map>.yaml
  7. Create a Job object to run the checkup:

    Example job manifest
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
      namespace: kiagnose
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccount: kiagnose
          restartPolicy: Never
          containers:
            - name: framework
              image: registry.redhat.io/container-native-virtualization/checkup-framework:v4.11.0
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: kiagnose
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup
  8. Apply the Job manifest. The checkup uses the ping utility to verify connectivity and measure latency.

    $ oc apply -f <latency_job>.yaml
  9. Wait for the job to complete:

    $ oc wait --for=condition=complete --timeout=10m job.batch/kubevirt-vm-latency-checkup -n kiagnose
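
    Optionally, follow the framework logs in another terminal while the job runs:

    $ oc logs job/kubevirt-vm-latency-checkup -n kiagnose -f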
  10. Review the results of the latency checkup by retrieving the status of the ConfigMap object. If the measured latency is greater than the value of the spec.param.max_desired_latency_milliseconds attribute, the checkup fails and returns an error.

    $ oc get configmap kubevirt-vm-latency-checkup -n kiagnose -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup
      namespace: kiagnose
    ...
      status.succeeded: "true"
      status.failureReason: ""
      status.result.minLatencyNanoSec: 2000
      status.result.maxLatencyNanoSec: 3000
      status.result.avgLatencyNanoSec: 2500
      status.result.measurementDurationSec: 300
    ...
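
    To check only the overall result, for example in a script, you can read the status.succeeded key directly. This is a sketch that assumes the config map name and namespace used in this procedure:

    $ oc get configmap kubevirt-vm-latency-checkup -n kiagnose -o jsonpath='{.data.status\.succeeded}'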
  11. Delete the framework and checkup resources that you previously created. This includes the job, config map, cluster role, and framework manifest files.

    Do not delete the framework and cluster role manifest files if you plan to run another checkup.

    $ oc delete -f <file_name>.yaml
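
    For example, using the manifest files from this procedure, you can delete the resources in the reverse order of creation. Skip the last two commands if you plan to run another checkup:

    $ oc delete -f <latency_job>.yaml
    $ oc delete -f <latency_config_map>.yaml
    $ oc delete -f <latency_roles>.yaml
    $ oc delete -f <framework_manifest>.yaml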