OKD cluster checkup framework

OKD Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting.

The OKD cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

About the OKD cluster checkup framework

A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
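
For example, every predefined checkup follows the same lifecycle; a minimal sketch, using placeholder file and resource names that the procedures below fill in:

    $ oc apply -n <target_namespace> -f <checkup_sa_roles_rolebinding>.yaml
    $ oc apply -n <target_namespace> -f <checkup_config_map>.yaml
    $ oc apply -n <target_namespace> -f <checkup_job>.yaml
    $ oc wait job <checkup_name> -n <target_namespace> --for condition=complete --timeout <timeout>
    $ oc get configmap <checkup_config_map_name> -n <target_namespace> -o yaml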

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it.

  • Review the checkup permissions before creating the Role and RoleBinding objects.

Virtual machine latency checkup

You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.

You run a latency checkup by performing the following steps:

  1. Create a service account, roles, and role bindings to provide cluster access permissions to the latency checkup.

  2. Create a config map to provide the input to run the checkup and to store the results.

  3. Create a job to run the checkup.

  4. Review the results in the config map.

  5. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  6. When you are finished, delete the latency checkup resources.

Prerequisites
  • You installed the OpenShift CLI (oc).

  • The cluster has at least two worker nodes.

  • The Multus Container Network Interface (CNI) plugin is installed on the cluster.

  • You configured a network attachment definition for a namespace.

Procedure
  1. Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

    Example service account, roles, and role bindings manifest file
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml (1)
    1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
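
    You can confirm that the NetworkAttachmentDefinition object exists in the target namespace beforehand (a quick sanity check):

    $ oc get network-attachment-definitions -n <target_namespace>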
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network" (1)
      spec.param.maxDesiredLatencyMilliseconds: "10" (2)
      spec.param.sampleDurationSeconds: "5" (3)
      spec.param.sourceNode: "worker1" (4)
      spec.param.targetNode: "worker2" (5)
    1 The name of the NetworkAttachmentDefinition object.
    2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
    3 Optional: The duration of the latency check, in seconds.
    4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
    5 Optional: When specified, latency is measured from the source node to this node.
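
    If you omit all the optional fields, a minimal input config map needs only the timeout and the network attachment definition details; a sketch:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"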
  4. Apply the config map manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    Example job manifest
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.13.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
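
    If the wait times out, you can inspect the checkup pod that the job created; the job-name label is added to the pod automatically by the Job controller:

    $ oc get pods -n <target_namespace> -l job-name=kubevirt-vm-latency-checkup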
  8. Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error.

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2022-01-01T09:00:00Z"
      status.completionTimestamp: "2022-01-01T09:00:07Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000" (1)
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"
    1 The maximum measured latency in nanoseconds.
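
    Note that the results are reported in nanoseconds, while spec.param.maxDesiredLatencyMilliseconds is given in milliseconds: here the maximum of 244000 ns is 0.244 ms, well below the 10 ms threshold, so the checkup succeeds. To read a single result field directly, you can use JSONPath; the dots inside the data keys must be escaped:

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o jsonpath='{.data.status\.succeeded}'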
  9. Optional: To view the detailed job log in case of checkup failure, use the following command:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

    $ oc delete -f <latency_sa_roles_rolebinding>.yaml

DPDK checkup

Use a predefined checkup to verify that your OKD cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator pod and a VM running a test DPDK application.

You run a DPDK checkup by performing the following steps:

  1. Create a service account, role, and role bindings for the DPDK checkup and a service account for the traffic generator pod.

  2. Create a security context constraints resource for the traffic generator pod.

  3. Create a config map to provide the input to run the checkup and to store the results.

  4. Create a job to run the checkup.

  5. Review the results in the config map.

  6. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  7. When you are finished, delete the DPDK checkup resources.

Prerequisites
  • You have access to the cluster as a user with cluster-admin permissions.

  • You have installed the OpenShift CLI (oc).

  • You have configured the compute nodes to run DPDK applications on VMs with zero packet loss.

The traffic generator pod created by the checkup has elevated privileges:

  • It runs as root.

  • It has a bind mount to the node’s file system.

The container image of the traffic generator is pulled from the upstream Project Quay container registry.

Procedure
  1. Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup and the traffic generator pod:

    Example service account, role, and rolebinding manifest file
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "get", "update" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kiagnose-configmap-access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-dpdk-checker
    rules:
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "create", "get", "delete" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/console" ]
        verbs: [ "get" ]
      - apiGroups: [ "" ]
        resources: [ "pods" ]
        verbs: [ "create", "get", "delete" ]
      - apiGroups: [ "" ]
        resources: [ "pods/exec" ]
        verbs: [ "create" ]
      - apiGroups: [ "k8s.cni.cncf.io" ]
        resources: [ "network-attachment-definitions" ]
        verbs: [ "get" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-dpdk-checker
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubevirt-dpdk-checker
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-traffic-gen-sa
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
  3. Create a SecurityContextConstraints manifest for the traffic generator pod:

    Example security context constraints manifest
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: dpdk-checkup-traffic-gen
    allowHostDirVolumePlugin: true
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: false
    allowPrivilegedContainer: false
    allowedCapabilities:
    - IPC_LOCK
    - NET_ADMIN
    - NET_RAW
    - SYS_RESOURCE
    defaultAddCapabilities: null
    fsGroup:
      type: RunAsAny
    groups: []
    readOnlyRootFilesystem: false
    requiredDropCapabilities: null
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
    seccompProfiles:
    - runtime/default
    - unconfined
    supplementalGroups:
      type: RunAsAny
    users:
    - system:serviceaccount:dpdk-checkup-ns:dpdk-checkup-traffic-gen-sa (1)
    1 Replace dpdk-checkup-ns with the namespace where the checkup runs and where the dpdk-checkup-traffic-gen-sa service account is created.
  4. Apply the SecurityContextConstraints manifest:

    $ oc apply -f <dpdk_scc>.yaml
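
    You can confirm that the SCC was created (a quick check):

    $ oc get scc dpdk-checkup-traffic-gen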
  5. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name> (1)
      spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name> (2)
      spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1" (3)
      spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1" (4)
    1 The name of the NetworkAttachmentDefinition object.
    2 The RuntimeClass resource that the traffic generator pod uses.
    3 The container image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
    4 The container disk image for the VM. In this example, the image is pulled from the upstream Project Quay Container Registry.
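
    You can also verify beforehand that the referenced RuntimeClass exists on the cluster (a quick check):

    $ oc get runtimeclass <runtimeclass_name>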
  6. Apply the ConfigMap manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
  7. Create a Job manifest to run the checkup:

    Example job manifest
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dpdk-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: dpdk-checkup-sa
          restartPolicy: Never
          containers:
            - name: dpdk-checkup
              image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.13.0
              imagePullPolicy: Always
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: dpdk-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
  8. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <dpdk_job>.yaml
  9. Wait for the job to complete:

    $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
  10. Review the results of the checkup by running the following command:

    $ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 1h2m
      spec.param.networkAttachmentDefinitionName: "mlx-dpdk-network-1"
      spec.param.trafficGeneratorRuntimeClassName: performance-performance-zeus10
      spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1"
      spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1"
      status.succeeded: "true"
      status.failureReason: ""
      status.startTimestamp: "2022-12-21T09:33:06+00:00"
      status.completionTimestamp: "2022-12-21T11:33:06+00:00"
      status.result.actualTrafficGeneratorTargetNode: worker-dpdk1
      status.result.actualDPDKVMTargetNode: worker-dpdk2
      status.result.dropRate: "0"
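
    A dropRate of "0" means the traffic ran with zero packet loss, which is the pass condition for this checkup. To read a single result field directly, or to inspect the job log after a failure, you can use commands such as the following; note that the dots inside the data keys must be escaped in JSONPath:

    $ oc get configmap dpdk-checkup-config -n <target_namespace> -o jsonpath='{.data.status\.result\.dropRate}'
    $ oc logs job.batch/dpdk-checkup -n <target_namespace>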
  11. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> dpdk-checkup
    $ oc delete configmap -n <target_namespace> dpdk-checkup-config
  12. Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

    $ oc delete -f <dpdk_sa_roles_rolebinding>.yaml

DPDK checkup config map parameters

The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:

Table 1. DPDK checkup config map parameters

  • spec.timeout (mandatory): The time, in minutes, before the checkup fails.

  • spec.param.networkAttachmentDefinitionName (mandatory): The name of the NetworkAttachmentDefinition object for the connected SR-IOV NICs.

  • spec.param.trafficGeneratorRuntimeClassName (mandatory): The RuntimeClass resource that the traffic generator pod uses.

  • spec.param.trafficGeneratorImage (optional): The container image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main.

  • spec.param.trafficGeneratorNodeSelector (optional): The node on which the traffic generator pod is scheduled. The node should be configured to allow DPDK traffic.

  • spec.param.trafficGeneratorPacketsPerSecond (optional): The number of packets per second, in kilo (k) or million (m). The default value is 14m.

  • spec.param.trafficGeneratorEastMacAddress (optional): The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:01.

  • spec.param.trafficGeneratorWestMacAddress (optional): The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address in the format 50:xx:xx:xx:xx:02.

  • spec.param.vmContainerDiskImage (optional): The container disk image for the VM. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main.

  • spec.param.DPDKLabelSelector (optional): The label of the node on which the VM runs. The node should be configured to allow DPDK traffic.

  • spec.param.DPDKEastMacAddress (optional): The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:01.

  • spec.param.DPDKWestMacAddress (optional): The MAC address of the NIC that is connected to the VM. The default value is a random MAC address in the format 60:xx:xx:xx:xx:02.

  • spec.param.testDuration (optional): The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes.

  • spec.param.portBandwidthGB (optional): The maximum bandwidth of the SR-IOV NIC, in GB. The default value is 10.

  • spec.param.verbose (optional): When set to "true", increases the verbosity of the checkup log. The default value is "false".
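
For reference, an input config map that sets several of the optional parameters might look like the following sketch. The values shown are illustrative only; adjust them to your environment:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name>
      spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name>
      spec.param.trafficGeneratorPacketsPerSecond: "8m"
      spec.param.testDuration: "10"
      spec.param.portBandwidthGB: "25"
      spec.param.verbose: "true"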

Building a container disk image for RHEL virtual machines

You can build a custom RHEL 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.

To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.

Prerequisites
  • The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.

  • You have installed the image builder tool and its CLI (composer-cli) on the VM.

  • You have installed the virt-customize tool:

    # dnf install libguestfs-tools
  • You have installed the Podman CLI tool (podman).

Procedure
  1. Verify that you can build a RHEL 8.7 image:

    # composer-cli distros list

    To run the composer-cli commands as non-root, add your user to the weldr or root groups:

    # usermod -a -G weldr user
    $ newgrp weldr
  2. Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:

    $ cat << EOF > dpdk-vm.toml
    name = "dpdk_image"
    description = "Image to use with the DPDK checkup"
    version = "0.0.1"
    distro = "rhel-87"
    
    [[packages]]
    name = "dpdk"
    
    [[packages]]
    name = "dpdk-tools"
    
    [[packages]]
    name = "driverctl"
    
    [[packages]]
    name = "tuned-profiles-cpu-partitioning"
    
    [customizations.kernel]
    append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"
    
    [customizations.services]
    disabled = ["NetworkManager-wait-online", "sshd"]
    EOF
  3. Push the blueprint file to the image builder tool by running the following command:

    # composer-cli blueprints push dpdk-vm.toml
  4. Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.

    # composer-cli compose start dpdk_image qcow2
  5. Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.

    # composer-cli compose status
  6. Enter the following command to download the qcow2 image file by specifying its UUID. The downloaded file is named <UUID>-disk.qcow2, which is the file that the Dockerfile copies in a later step:

    # composer-cli compose image <UUID>
  7. Create the customization scripts by running the following commands:

    $ cat <<EOF >customize-vm
    # Isolate CPUs 2-7 for the DPDK workload and apply the tuned profile
    echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
    tuned-adm profile cpu-partitioning

    # Allow the vfio driver to be used without an IOMMU
    echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
    EOF
    $ cat <<EOF >first-boot
    # Bind the NICs to the vfio-pci driver; the PCI addresses shown here are
    # examples and must match the devices in your VM
    driverctl set-override 0000:06:00.0 vfio-pci
    driverctl set-override 0000:07:00.0 vfio-pci

    # Mount a hugetlbfs with 1 GiB pages for the DPDK application
    mkdir /mnt/huge
    mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
    EOF
  8. Use the virt-customize tool to customize the image generated by the image builder tool:

    $ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel
  9. To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:

    $ cat << EOF > Dockerfile
    FROM scratch
    COPY <uuid>-disk.qcow2 /disk/
    EOF

    where:

    <uuid>-disk.qcow2

    Specifies the name of the custom image in qcow2 format.

  10. Build and tag the container by running the following command:

    $ podman build . -t dpdk-rhel:latest
  11. Push the container disk image to a registry that is accessible from your cluster by running the following command:

    $ podman push dpdk-rhel:latest
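
    Depending on your registry, you might first need to tag the image with the full registry path; for example, with a placeholder registry URL:

    $ podman tag dpdk-rhel:latest <registry_url>/dpdk-rhel:latest
    $ podman push <registry_url>/dpdk-rhel:latest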
  12. Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map.