Cluster checkup framework

OpenShift Virtualization includes the following predefined checkups that can be used for cluster maintenance and troubleshooting:

  • Latency checkup, which verifies network connectivity and measures latency between two virtual machines (VMs) that are attached to a secondary network interface.

    Before you run a latency checkup, you must first create a bridge interface on the cluster nodes to connect the VM’s secondary interface to an interface on the node (see the example policy after this list). If you do not create a bridge interface, the VMs do not start and the job fails.

  • Storage checkup, which verifies if the cluster storage is optimally configured for OpenShift Virtualization.

  • DPDK checkup, which verifies that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss.
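
Creating the bridge interface for the latency checkup is typically done declaratively with the Kubernetes NMState Operator. The following NodeNetworkConfigurationPolicy is a minimal sketch that assumes the NMState Operator is installed and that ens5 is a spare NIC on the worker nodes; the bridge name br0 and the NIC name are illustrative only:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br0-latency-checkup
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      desiredState:
        interfaces:
          - name: br0
            description: Linux bridge for the latency checkup secondary network
            type: linux-bridge
            state: up
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: ens5   # assumed spare NIC; replace with a real interface on your nodes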

The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

About the OpenShift Virtualization cluster checkup framework

A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
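
The per-checkup procedures later in this document show the exact manifests. At a high level, the command-line flow looks like the following sketch, where the file and object names are placeholders that follow the conventions used in those procedures:

    $ oc apply -n <target_namespace> -f <checkup_sa_roles_rolebinding>.yaml
    $ oc apply -n <target_namespace> -f <checkup_config_map>.yaml
    $ oc apply -n <target_namespace> -f <checkup_job>.yaml
    $ oc wait job <checkup_job_name> -n <target_namespace> --for condition=complete --timeout <timeout>
    $ oc get configmap <checkup_config_map_name> -n <target_namespace> -o yaml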

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it.

  • Review the checkup permissions before creating the Role and RoleBinding objects.

Running checkups by using the web console

Use the following procedures the first time you run checkups by using the web console. For additional checkups, click Run checkup on either checkup tab, and select the appropriate checkup from the drop-down menu.

Running a latency checkup by using the web console

Run a latency checkup to verify network connectivity and measure the latency between two virtual machines attached to a secondary network interface.

Prerequisites
  • You must add a NetworkAttachmentDefinition to the namespace.
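
The following NetworkAttachmentDefinition is a minimal sketch of a Linux bridge secondary network; it assumes that a bridge named br0 already exists on the nodes, and the object name blue-network matches the example configuration used later in this document:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: blue-network
      namespace: <target_namespace>
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "blue-network",
          "type": "bridge",
          "bridge": "br0",
          "ipam": {}
        }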

Procedure
  1. Navigate to Virtualization → Checkups in the web console.

  2. Click the Network latency tab.

  3. Click Install permissions.

  4. Click Run checkup.

  5. Enter a name for the checkup in the Name field.

  6. Select a NetworkAttachmentDefinition from the drop-down menu.

  7. Optional: Set a duration for the latency sample in the Sample duration (seconds) field.

  8. Optional: Define a maximum latency time interval by enabling Set maximum desired latency (milliseconds) and defining the time interval.

  9. Optional: Target specific nodes by enabling Select nodes and specifying the Source node and Target node.

  10. Click Run.

You can view the status of the latency checkup in the Checkups list on the Latency checkup tab. Click the name of the checkup for more details.

Running a storage checkup by using the web console

Run a storage checkup to validate that storage is working correctly for virtual machines.

Procedure
  1. Navigate to Virtualization → Checkups in the web console.

  2. Click the Storage tab.

  3. Click Install permissions.

  4. Click Run checkup.

  5. Enter a name for the checkup in the Name field.

  6. Enter a timeout value for the checkup in the Timeout (minutes) field.

  7. Click Run.

You can view the status of the storage checkup in the Checkups list on the Storage tab. Click the name of the checkup for more details.

Running checkups by using the command line

Use the following procedures the first time you run checkups by using the command line.

Running a latency checkup by using the command line

You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.

You run a latency checkup by performing the following steps:

  1. Create a service account, roles, and role bindings to provide cluster access permissions to the latency checkup.

  2. Create a config map to provide the input to run the checkup and to store the results.

  3. Create a job to run the checkup.

  4. Review the results in the config map.

  5. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  6. When you are finished, delete the latency checkup resources.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • The cluster has at least two worker nodes.

  • You configured a network attachment definition for a namespace.
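
    To confirm that the network attachment definition exists in the target namespace before you start, you can list it. This check is optional and is not part of the documented procedure:

    $ oc get network-attachment-definitions -n <target_namespace>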

Procedure
  1. Create a ServiceAccount, Role, and RoleBinding manifest for the latency checkup:

    Example role manifest file
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml (1)
    1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network" (1)
      spec.param.maxDesiredLatencyMilliseconds: "10" (2)
      spec.param.sampleDurationSeconds: "5" (3)
      spec.param.sourceNode: "worker1" (4)
      spec.param.targetNode: "worker2" (5)
    1 The name of the NetworkAttachmentDefinition object.
    2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
    3 Optional: The duration of the latency check, in seconds.
    4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty.
    5 Optional: When specified, latency is measured from the source node to this node.
  4. Apply the config map manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    Example job manifest
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.17.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error. A jsonpath query that prints only the overall result is shown after this procedure.

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
      labels:
        kiagnose/checkup-type: kubevirt-vm-latency
    data:
      spec.timeout: 5m
      spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
      spec.param.networkAttachmentDefinitionName: "blue-network"
      spec.param.maxDesiredLatencyMilliseconds: "10"
      spec.param.sampleDurationSeconds: "5"
      spec.param.sourceNode: "worker1"
      spec.param.targetNode: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.completionTimestamp: "2022-01-01T09:00:07Z"
      status.startTimestamp: "2022-01-01T09:00:00Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000" (1)
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"
    1 The maximum measured latency in nanoseconds.
  9. Optional: To view the detailed job log in case of checkup failure, use the following command:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Optional: If you do not plan to run another checkup, delete the roles manifest:

    $ oc delete -f <latency_sa_roles_rolebinding>.yaml
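
When reviewing the results, if you only need the overall outcome rather than the full config map, a jsonpath query such as the following prints the value of the status.succeeded key on its own (the backslash escapes the dot inside the data key name):

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o jsonpath='{.data.status\.succeeded}'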

Running a storage checkup by using the command line

Use a predefined checkup to verify that the OpenShift Container Platform cluster storage is configured optimally to run OpenShift Virtualization workloads.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • The cluster administrator has created the required cluster-reader permissions for the storage checkup service account and namespace, such as in the following example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubevirt-storage-checkup-clustereader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-reader
    subjects:
    - kind: ServiceAccount
      name: storage-checkup-sa
      namespace: <target_namespace> (1)
    1 The namespace where the checkup is to be run.
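
    For example, after saving this manifest to a file, a cluster administrator could apply it with a command similar to the following (the file name is a placeholder):

    $ oc apply -f <storage_cluster_role_binding>.yaml
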
Procedure
  1. Create a ServiceAccount, Role, and RoleBinding manifest file for the storage checkup:

    Example service account, role, and rolebinding manifest
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: storage-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: storage-checkup-role
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: ["get", "update"]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachines" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "get" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/addvolume", "virtualmachineinstances/removevolume" ]
        verbs: [ "update" ]
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstancemigrations" ]
        verbs: [ "create" ]
      - apiGroups: [ "cdi.kubevirt.io" ]
        resources: [ "datavolumes" ]
        verbs: [ "create", "delete" ]
      - apiGroups: [ "" ]
        resources: [ "persistentvolumeclaims" ]
        verbs: [ "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: storage-checkup-role
    subjects:
      - kind: ServiceAccount
        name: storage-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: storage-checkup-role
  2. Apply the ServiceAccount, Role, and RoleBinding manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
  3. Create a ConfigMap and Job manifest file. The config map contains the input parameters for the checkup job.

    Example input config map and job manifest
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      namespace: <target_namespace>
    data:
      spec.timeout: 10m
      spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization
      spec.param.vmiTimeout: 3m
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: storage-checkup
      namespace: <target_namespace>
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: storage-checkup-sa
          restartPolicy: Never
          containers:
            - name: storage-checkup
              image: quay.io/kiagnose/kubevirt-storage-checkup:main
              imagePullPolicy: Always
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: storage-checkup-config
  4. Apply the ConfigMap and Job manifest file in the target namespace to run the checkup:

    $ oc apply -n <target_namespace> -f <storage_configmap_job>.yaml
  5. Wait for the job to complete:

    $ oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m
  6. Review the results of the checkup by running the following command:

    $ oc get configmap storage-checkup-config -n <target_namespace> -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: storage-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-storage
    data:
      spec.timeout: 10m
      status.succeeded: "true" (1)
      status.failureReason: "" (2)
      status.startTimestamp: "2023-07-31T13:14:38Z" (3)
      status.completionTimestamp: "2023-07-31T13:19:41Z" (4)
      status.result.cnvVersion: 4.17.2 (5)
      status.result.defaultStorageClass: trident-nfs (6)
      status.result.goldenImagesNoDataSource: <data_import_cron_list> (7)
      status.result.goldenImagesNotUpToDate: <data_import_cron_list> (8)
      status.result.ocpVersion: 4.17.0 (9)
      status.result.pvcBound: "true" (10)
      status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> (11)
      status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> (12)
      status.result.storageProfilesWithSmartClone: <storage_profile_list> (13)
      status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> (14)
      status.result.storageProfilesWithRWX: |-
        ocs-storagecluster-ceph-rbd
        ocs-storagecluster-ceph-rbd-virtualization
        ocs-storagecluster-cephfs
        trident-iscsi
        trident-minio
        trident-nfs
        windows-vms
      status.result.vmBootFromGoldenImage: VMI "vmi-under-test-dhkb8" successfully booted
      status.result.vmHotplugVolume: |-
        VMI "vmi-under-test-dhkb8" hotplug volume ready
        VMI "vmi-under-test-dhkb8" hotplug volume removed
      status.result.vmLiveMigration: VMI "vmi-under-test-dhkb8" migration completed
      status.result.vmVolumeClone: 'DV cloneType: "csi-clone"'
      status.result.vmsWithNonVirtRbdStorageClass: <vm_list> (15)
      status.result.vmsWithUnsetEfsStorageClass: <vm_list> (16)
    1 Specifies if the checkup is successful (true) or not (false).
    2 The reason for failure if the checkup fails.
    3 The time when the checkup started, in RFC 3339 time format.
    4 The time when the checkup has completed, in RFC 3339 time format.
    5 The OpenShift Virtualization version.
    6 The default storage class, if one exists (see the note after this procedure).
    7 The list of golden images whose data source is not ready.
    8 The list of golden images whose data import cron is not up-to-date.
    9 The OpenShift Container Platform version.
    10 Specifies if a PVC of 10Mi has been created and bound by the provisioner.
    11 The list of storage profiles using snapshot-based clone but missing VolumeSnapshotClass.
    12 The list of storage profiles with unknown provisioners.
    13 The list of storage profiles with smart clone support (CSI/snapshot).
    14 The list of storage profiles with spec-overridden claimPropertySets.
    15 The list of virtual machines that use the Ceph RBD storage class when the virtualization storage class exists.
    16 The list of virtual machines that use an Elastic File Store (EFS) storage class where the GID and UID are not set in the storage class.
  7. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> storage-checkup
    $ oc delete configmap -n <target_namespace> storage-checkup-config
  8. Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

    $ oc delete -f <storage_sa_roles_rolebinding>.yaml
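
The status.result.defaultStorageClass field in the results reflects the cluster default storage class. If the checkup reports that no default is set, you can confirm this directly; in the output of the following command, the default class is annotated with (default):

    $ oc get storageclass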

Running a DPDK checkup by using the command line

Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.

You run a DPDK checkup by performing the following steps:

  1. Create a service account, role, and role bindings for the DPDK checkup.

  2. Create a config map to provide the input to run the checkup and to store the results.

  3. Create a job to run the checkup.

  4. Review the results in the config map.

  5. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.

  6. When you are finished, delete the DPDK checkup resources.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • The cluster is configured to run DPDK applications.

  • The project is configured to run DPDK applications.
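
    The exact DPDK configuration steps are covered elsewhere in the documentation. As an optional sanity check, and assuming a typical setup that uses the SR-IOV Network Operator and a performance profile, commands similar to the following list the relevant objects:

    $ oc get sriovnetworknodepolicies -n openshift-sriov-network-operator
    $ oc get performanceprofiles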

Procedure
  1. Create a ServiceAccount, Role, and RoleBinding manifest for the DPDK checkup:

    Example service account, role, and rolebinding manifest file
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dpdk-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "get", "update" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kiagnose-configmap-access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-dpdk-checker
    rules:
      - apiGroups: [ "kubevirt.io" ]
        resources: [ "virtualmachineinstances" ]
        verbs: [ "create", "get", "delete" ]
      - apiGroups: [ "subresources.kubevirt.io" ]
        resources: [ "virtualmachineinstances/console" ]
        verbs: [ "get" ]
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        verbs: [ "create", "delete" ]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-dpdk-checker
    subjects:
      - kind: ServiceAccount
        name: dpdk-checkup-sa
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubevirt-dpdk-checker
  2. Apply the ServiceAccount, Role, and RoleBinding manifest:

    $ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
  3. Create a ConfigMap manifest that contains the input parameters for the checkup:

    Example input config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name> (1)
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0" (2)
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0" (3)
    1 The name of the NetworkAttachmentDefinition object.
    2 The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry.
    3 The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
  4. Apply the ConfigMap manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
  5. Create a Job manifest to run the checkup:

    Example job manifest
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: dpdk-checkup
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: dpdk-checkup-sa
          restartPolicy: Never
          containers:
            - name: dpdk-checkup
              image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.17.0
              imagePullPolicy: Always
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: dpdk-checkup-config
                - name: POD_UID
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.uid
  6. Apply the Job manifest:

    $ oc apply -n <target_namespace> -f <dpdk_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
  8. Review the results of the checkup by running the following command:

    $ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
    Example output config map (success)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: "dpdk-network-1"
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
      status.succeeded: "true" (1)
      status.failureReason: "" (2)
      status.startTimestamp: "2023-07-31T13:14:38Z" (3)
      status.completionTimestamp: "2023-07-31T13:19:41Z" (4)
      status.result.trafficGenSentPackets: "480000000" (5)
      status.result.trafficGenOutputErrorPackets: "0" (6)
      status.result.trafficGenInputErrorPackets: "0" (7)
      status.result.trafficGenActualNodeName: worker-dpdk1 (8)
      status.result.vmUnderTestActualNodeName: worker-dpdk2 (9)
      status.result.vmUnderTestReceivedPackets: "480000000" (10)
      status.result.vmUnderTestRxDroppedPackets: "0" (11)
      status.result.vmUnderTestTxDroppedPackets: "0" (12)
    1 Specifies if the checkup is successful (true) or not (false).
    2 The reason for failure if the checkup fails.
    3 The time when the checkup started, in RFC 3339 time format.
    4 The time when the checkup has completed, in RFC 3339 time format.
    5 The number of packets sent from the traffic generator.
    6 The number of error packets sent from the traffic generator.
    7 The number of error packets received by the traffic generator.
    8 The node on which the traffic generator VM was scheduled.
    9 The node on which the VM under test was scheduled.
    10 The number of packets received on the VM under test.
    11 The ingress traffic packets that were dropped by the DPDK application.
    12 The egress traffic packets that were dropped from the DPDK application.
  9. Delete the job and config map that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> dpdk-checkup
    $ oc delete configmap -n <target_namespace> dpdk-checkup-config
  10. Optional: If you do not plan to run another checkup, delete the ServiceAccount, Role, and RoleBinding manifest:

    $ oc delete -f <dpdk_sa_roles_rolebinding>.yaml

DPDK checkup config map parameters

The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:

Table 1. DPDK checkup config map input parameters

  spec.timeout (mandatory)
    The time, in minutes, before the checkup fails.

  spec.param.networkAttachmentDefinitionName (mandatory)
    The name of the NetworkAttachmentDefinition object of the SR-IOV NICs connected.

  spec.param.trafficGenContainerDiskImage (mandatory)
    The container disk image for the traffic generator.

  spec.param.trafficGenTargetNodeName (optional)
    The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic.

  spec.param.trafficGenPacketsPerSecond (optional)
    The number of packets per second, in kilo (k) or million (m). The default value is 8m.

  spec.param.vmUnderTestContainerDiskImage (mandatory)
    The container disk image for the VM under test.

  spec.param.vmUnderTestTargetNodeName (optional)
    The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic.

  spec.param.testDuration (optional)
    The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes.

  spec.param.portBandwidthGbps (optional)
    The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps.

  spec.param.verbose (optional)
    When set to true, it increases the verbosity of the checkup log. The default value is false.
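
As a sketch of how the optional parameters combine with the mandatory ones, the following input config map also pins the traffic generator and the VM under test to specific nodes, lowers the traffic rate, raises the port bandwidth, and enables verbose logging. The node names, rate, and bandwidth values are illustrative only:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dpdk-checkup-config
      labels:
        kiagnose/checkup-type: kubevirt-dpdk
    data:
      spec.timeout: 10m
      spec.param.networkAttachmentDefinitionName: <network_name>
      spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0"
      spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0"
      spec.param.trafficGenTargetNodeName: worker-dpdk1
      spec.param.vmUnderTestTargetNodeName: worker-dpdk2
      spec.param.trafficGenPacketsPerSecond: "6m"
      spec.param.portBandwidthGbps: "25"
      spec.param.verbose: "true"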

Building a container disk image for RHEL virtual machines

You can build a custom Red Hat Enterprise Linux (RHEL) 9 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmUnderTestContainerDiskImage attribute of the DPDK checkup config map.

To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 9 VM that can be used to build custom RHEL images.

Prerequisites
  • The image builder VM must run RHEL 9.4 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory.

  • You have installed the image builder tool and its CLI (composer-cli) on the VM. For more information, see "Additional resources".

  • You have installed the virt-customize tool:

    # dnf install guestfs-tools
  • You have installed the Podman CLI tool (podman).

Procedure
  1. Verify that you can build a RHEL 9.4 image:

    # composer-cli distros list

    To run the composer-cli commands as non-root, add your user to the weldr or root groups:

    # usermod -a -G weldr <user>
    $ newgrp weldr
  2. Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:

    $ cat << EOF > dpdk-vm.toml
    name = "dpdk_image"
    description = "Image to use with the DPDK checkup"
    version = "0.0.1"
    distro = "rhel-9.4"
    
    [[customizations.user]]
    name = "root"
    password = "redhat"
    
    [[packages]]
    name = "dpdk"
    
    [[packages]]
    name = "dpdk-tools"
    
    [[packages]]
    name = "driverctl"
    
    [[packages]]
    name = "tuned-profiles-cpu-partitioning"
    
    [customizations.kernel]
    append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"
    
    [customizations.services]
    disabled = ["NetworkManager-wait-online", "sshd"]
    EOF
  3. Push the blueprint file to the image builder tool by running the following command:

    # composer-cli blueprints push dpdk-vm.toml
  4. Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.

    # composer-cli compose start dpdk_image qcow2
  5. Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the next step.

    # composer-cli compose status
  6. Enter the following command to download the qcow2 image file by specifying its UUID:

    # composer-cli compose image <UUID>
  7. Create the customization scripts by running the following commands:

    $ cat <<EOF >customize-vm
    #!/bin/bash
    
    # Setup hugepages mount
    mkdir -p /mnt/huge
    echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab
    
    # Create vfio-noiommu.conf
    echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
    
    # Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
    sed -i 's/\(--allow-rpcs=[^"]*\)/\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga
    
    # Disable Bracketed-paste mode
    echo "set enable-bracketed-paste off" >> /root/.inputrc
    EOF
  8. Use the virt-customize tool to customize the image generated by the image builder tool:

    $ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
  9. To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:

    $ cat << EOF > Dockerfile
    FROM scratch
    COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
    EOF

    where:

    <UUID>-disk.qcow2

    Specifies the name of the custom image in qcow2 format.

  10. Build and tag the container by running the following command:

    $ podman build . -t dpdk-rhel:latest
  11. Push the container disk image to a registry that is accessible from your cluster by running the following command:

    $ podman push dpdk-rhel:latest
  12. Provide a link to the container disk image in the spec.param.vmUnderTestContainerDiskImage attribute in the DPDK checkup config map.
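
    For example, if you pushed the image as dpdk-rhel:latest to a registry that your cluster can pull from, the corresponding line in the input config map would look similar to the following, where the registry path is a placeholder:

      spec.param.vmUnderTestContainerDiskImage: "<registry_url>/<repository>/dpdk-rhel:latest"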