Learn about the Kernel Module Management (KMM) Operator and how you can use it to deploy out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters.
The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters.
KMM adds a new Module
CRD which describes an out-of-tree kernel module and its associated device plugin.
You can use Module
resources to configure how to load the module, define ModuleLoader
images for kernel versions, and include instructions for building and signing modules for specific kernel versions.
KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime.
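For example, a minimal Module resource might look like the following sketch; the module name, image, and kernel regular expression are placeholders, and the full set of fields is shown in the annotated example later in this section:
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-kmod
  namespace: openshift-kmm
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my-kmod
      kernelMappings:
        - regexp: '^.+$'
          containerImage: "quay.io/example/my-kmod:${KERNEL_FULL_VERSION}"
  selector:
    node-role.kubernetes.io/worker: ""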
As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI or the web console.
The KMM Operator is supported on OpenShift Container Platform 4.12 and later. Installing KMM on version 4.11 does not require specific additional steps. For details on installing KMM on version 4.10 and earlier, see the section "Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform".
As a cluster administrator, you can install the Kernel Module Management (KMM) Operator using the OpenShift Container Platform web console.
Log in to the OpenShift Container Platform web console.
Install the Kernel Module Management Operator:
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Select Kernel Module Management Operator from the list of available Operators, and then click Install.
On the Install Operator page, select the Installation mode as A specific namespace on the cluster.
From the Installed Namespace list, select the openshift-kmm
namespace.
Click Install.
To verify that the KMM Operator installed successfully:
Navigate to the Operators → Installed Operators page.
Ensure that Kernel Module Management Operator is listed in the openshift-kmm project with a Status of InstallSucceeded.
During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
To troubleshoot issues with Operator installation:
Navigate to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Navigate to the Workloads → Pods page and check the logs for pods in the openshift-kmm
project.
As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI.
You have a running OpenShift Container Platform cluster.
You installed the OpenShift CLI (oc
).
You are logged into the OpenShift CLI as a user with cluster-admin
privileges.
Install KMM in the openshift-kmm
namespace:
Create the following Namespace
CR and save the YAML file, for example, kmm-namespace.yaml
:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-kmm
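If you saved the file as kmm-namespace.yaml, you can create the object by running the following command:
$ oc create -f kmm-namespace.yaml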
Create the following OperatorGroup
CR and save the YAML file, for example, kmm-op-group.yaml
:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: kernel-module-management
namespace: openshift-kmm
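If you saved the file as kmm-op-group.yaml, you can create the object by running the following command:
$ oc create -f kmm-op-group.yaml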
Create the following Subscription
CR and save the YAML file, for example, kmm-sub.yaml
:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: kernel-module-management
namespace: openshift-kmm
spec:
channel: release-1.0
installPlanApproval: Automatic
name: kernel-module-management
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: kernel-module-management.v1.0.0
Create the subscription object by running the following command:
$ oc create -f kmm-sub.yaml
To verify that the Operator deployment is successful, run the following command:
$ oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager
NAME READY UP-TO-DATE AVAILABLE AGE
kmm-operator-controller-manager 1/1 1 1 97s
The Operator is available.
The KMM Operator is supported on OpenShift Container Platform 4.12 and later.
For version 4.10 and earlier, you must create a new SecurityContextConstraint
object and bind it to the Operator’s ServiceAccount
.
As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI.
You have a running OpenShift Container Platform cluster.
You installed the OpenShift CLI (oc
).
You are logged into the OpenShift CLI as a user with cluster-admin
privileges.
Install KMM in the openshift-kmm
namespace:
Create the following Namespace
CR and save the YAML file, for example, kmm-namespace.yaml
:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-kmm
Create the following SecurityContextConstraint
object and save the YAML file, for example, kmm-security-constraint.yaml
:
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
- NET_BIND_SERVICE
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
type: MustRunAs
groups: []
kind: SecurityContextConstraints
metadata:
name: kmm-security-constraint
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- ALL
runAsUser:
type: MustRunAsRange
seLinuxContext:
type: MustRunAs
seccompProfiles:
- runtime/default
supplementalGroups:
type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
Bind the SecurityContextConstraint
object to the Operator’s ServiceAccount
by running the following commands:
$ oc apply -f kmm-security-constraint.yaml
$ oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm
Create the following OperatorGroup
CR and save the YAML file, for example, kmm-op-group.yaml
:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: kernel-module-management
namespace: openshift-kmm
Create the following Subscription
CR and save the YAML file, for example, kmm-sub.yaml
:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: kernel-module-management
namespace: openshift-kmm
spec:
channel: release-1.0
installPlanApproval: Automatic
name: kernel-module-management
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: kernel-module-management.v1.0.0
Create the subscription object by running the following command:
$ oc create -f kmm-sub.yaml
To verify that the Operator deployment is successful, run the following command:
$ oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager
NAME READY UP-TO-DATE AVAILABLE AGE
kmm-operator-controller-manager 1/1 1 1 97s
The Operator is available.
For each Module
resource, Kernel Module Management (KMM) can create a number of DaemonSet
resources:
One ModuleLoader DaemonSet
per compatible kernel version running in the cluster.
One device plugin DaemonSet
, if configured.
The module loader daemon set resources run ModuleLoader images to load kernel modules.
A module loader image is an OCI image that contains the .ko
files and both the modprobe
and sleep
binaries.
When the module loader pod is created, the pod runs modprobe
to insert the specified module into the kernel.
It then enters a sleep state until it is terminated.
When that happens, the ExecPreStop
hook runs modprobe -r
to unload the kernel module.
If the .spec.devicePlugin
attribute is configured in a Module
resource, then KMM creates a device plugin
daemon set in the cluster.
That daemon set targets:
Nodes that match the .spec.selector
of the Module
resource.
Nodes with the kernel module loaded (where the module loader pod is in the Ready
condition).
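To see which daemon sets KMM created for a Module, you can list them in the namespace where the Module was created; for example, assuming the default openshift-kmm namespace:
$ oc get daemonsets -n openshift-kmm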
The Module
custom resource definition (CRD) represents a kernel module that can be loaded on all or select nodes in the cluster, through a module loader image.
A Module
custom resource (CR) specifies one or more kernel versions with which it is compatible, and a node selector.
The compatible versions for a Module
resource are listed under .spec.moduleLoader.container.kernelMappings
.
A kernel mapping can either match a literal
version, or use regexp
to match many of them at the same time.
The reconciliation loop for the Module
resource runs the following steps:
List all nodes matching .spec.selector
.
Build a set of all kernel versions running on those nodes.
For each kernel version:
Go through .spec.moduleLoader.container.kernelMappings
and find the appropriate container image name. If the kernel mapping has build
or sign
defined and the container image does not already exist, run the build, the signing job, or both, as needed.
Create a module loader daemon set with the container image determined in the previous step.
If .spec.devicePlugin
is defined, create a device plugin daemon set using the configuration specified under .spec.devicePlugin.container
.
Run garbage collection on:
Existing daemon set resources targeting kernel versions that are not run by any node in the cluster.
Successful build jobs.
Successful signing jobs.
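The set of kernel versions that this loop works from comes from the node objects themselves; you can inspect it with a plain oc query that lists the kernel version reported by each node:
$ oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kernelVersion}{"\n"}{end}'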
Loading kernel modules is a highly sensitive operation. After they are loaded, kernel modules have all possible permissions to do any kind of operation on the node.
Kernel Module Management (KMM) creates a privileged workload to load the kernel modules on nodes.
That workload needs ServiceAccounts
allowed to use the privileged
SecurityContextConstraint
(SCC) resource.
The authorization model for that workload depends on the namespace of the Module
resource, as well as its spec.
If the .spec.moduleLoader.serviceAccountName
or .spec.devicePlugin.serviceAccountName
fields are set, they are always used.
If those fields are not set, then:
If the Module
resource is created in the operator’s namespace (openshift-kmm
by default), then KMM uses its default, powerful ServiceAccounts
to run the daemon sets.
If the Module
resource is created in any other namespace, then KMM runs the daemon sets as the namespace’s default
ServiceAccount
. The Module
resource cannot run a privileged workload unless you manually enable it to use the privileged
SCC.
When setting up RBAC permissions, remember that any user or ServiceAccount allowed to create Module resources in the operator's namespace can cause KMM to run privileged workloads on cluster nodes. Grant this permission only to trusted users.
To allow any ServiceAccount
to use the privileged
SCC and therefore to run module loader or device plugin pods, use the following command:
$ oc adm policy add-scc-to-user privileged -z "${serviceAccountName}" [ -n "${namespace}" ]
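For example, to allow the default ServiceAccount in a hypothetical my-kmod-ns namespace to run these privileged pods:
$ oc adm policy add-scc-to-user privileged -z default -n my-kmod-ns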
The following is an annotated Module
example:
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
name: <my_kmod>
spec:
moduleLoader:
container:
modprobe:
moduleName: <my_kmod> (1)
dirName: /opt (2)
firmwarePath: /firmware (3)
parameters: (4)
- param=1
kernelMappings: (5)
- literal: 6.0.15-300.fc37.x86_64
containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64
- regexp: '^.+\fc37\.x86_64$' (6)
containerImage: "some.other.registry/org/<my_kmod>:${KERNEL_FULL_VERSION}"
- regexp: '^.+$' (7)
containerImage: "some.registry/org/<my_kmod>:${KERNEL_FULL_VERSION}"
build:
buildArgs: (8)
- name: ARG_NAME
value: <some_value>
secrets:
- name: <some_kubernetes_secret> (9)
baseImageRegistryTLS: (10)
insecure: false
insecureSkipTLSVerify: false (11)
dockerfileConfigMap: (12)
name: <my_kmod_dockerfile>
sign:
certsecret:
name: <cert_secret> (13)
keysecret:
name: <key_secret> (14)
filesToSign:
- /opt/lib/modules/${KERNEL_FULL_VERSION}/<my_kmod>.ko
registryTLS: (15)
insecure: false (16)
insecureSkipTLSVerify: false
serviceAccountName: <sa_module_loader> (17)
devicePlugin: (18)
container:
image: some.registry/org/device-plugin:latest (19)
env:
- name: MY_DEVICE_PLUGIN_ENV_VAR
value: SOME_VALUE
volumeMounts: (20)
- mountPath: /some/mountPath
name: <device_plugin_volume>
volumes: (21)
- name: <device_plugin_volume>
configMap:
name: <some_configmap>
serviceAccountName: <sa_device_plugin> (22)
imageReposecret: (23)
name: <secret_name>
selector:
node-role.kubernetes.io/worker: ""
1. Required.
2. Optional.
3. Optional: Copies /firmware/* into /var/lib/firmware/ on the node.
4. Optional.
5. At least one kernel item is required.
6. For each node running a kernel matching the regular expression, KMM creates a DaemonSet resource running the image specified in containerImage with ${KERNEL_FULL_VERSION} replaced with the kernel version.
7. For any other kernel, build the image using the Dockerfile in the <my_kmod_dockerfile> ConfigMap.
8. Optional.
9. Optional: A value for <some_kubernetes_secret> can be obtained from the build environment at /run/secrets/<some_kubernetes_secret>.
10. Optional: Avoid using this parameter. If set to true, the build is allowed to pull the image in the Dockerfile FROM instruction using plain HTTP.
11. Optional: Avoid using this parameter. If set to true, the build skips any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP.
12. Required.
13. Required: A secret holding the public secureboot key with the key 'cert'.
14. Required: A secret holding the private secureboot key with the key 'key'.
15. Optional: Avoid using this parameter. If set to true, KMM is allowed to check whether the container image already exists using plain HTTP.
16. Optional: Avoid using this parameter. If set to true, KMM skips any TLS server certificate validation when checking whether the container image already exists.
17. Optional.
18. Optional.
19. Required if the device plugin section is present.
20. Optional.
21. Optional.
22. Optional.
23. Optional: Used to pull module loader and device plugin images.
Kernel Module Management (KMM) works with purpose-built module loader images. These are standard OCI images that must satisfy the following requirements:
.ko
files must be located in /opt/lib/modules/${KERNEL_VERSION}
.
modprobe
and sleep
binaries must be defined in the $PATH
variable.
If your module loader image contains several kernel modules and if one of the modules depends on another module, it is best practice to run depmod
at the end of the build process to generate dependencies and map files.
You must have a Red Hat subscription to download the kernel-devel package.
To generate modules.dep
and .map
files for a specific kernel version, run depmod -b /opt ${KERNEL_VERSION}
.
If you are building your image on OpenShift Container Platform, consider using the Driver Toolkit (DTK).
For further information, see "Using an entitled build".
apiVersion: v1
kind: ConfigMap
metadata:
name: kmm-ci-dockerfile
data:
dockerfile: |
ARG DTK_AUTO
FROM ${DTK_AUTO} as builder
ARG KERNEL_VERSION
WORKDIR /usr/src
RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"]
WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod
RUN KERNEL_SRC_DIR=/lib/modules/${KERNEL_VERSION}/build make all
FROM registry.redhat.io/ubi8/ubi-minimal
ARG KERNEL_VERSION
RUN microdnf install kmod
COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/${KERNEL_VERSION}/
COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/${KERNEL_VERSION}/
RUN depmod -b /opt ${KERNEL_VERSION}
KMM can build module loader images in the cluster. Follow these guidelines:
Provide build instructions using the build
section of a kernel mapping.
Copy the Dockerfile
for your container image into a ConfigMap
resource, under the dockerfile
key.
Ensure that the ConfigMap
is located in the same namespace as the Module
.
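One way to create such a ConfigMap from a local Dockerfile is the oc create configmap command; the ConfigMap name, file path, and namespace below are placeholders:
$ oc create configmap <my_kmod_dockerfile> --from-file=dockerfile=./Dockerfile -n <module_namespace>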
KMM checks if the image name specified in the containerImage
field exists. If it does, the build is skipped.
Otherwise, KMM creates a Build
resource to build your image. After the image is built, KMM proceeds with the Module
reconciliation. See the following example.
# ...
- regexp: '^.+$'
containerImage: "some.registry/org/<my_kmod>:${KERNEL_FULL_VERSION}"
build:
buildArgs: (1)
- name: ARG_NAME
value: <some_value>
secrets: (2)
- name: <some_kubernetes_secret> (3)
baseImageRegistryTLS:
insecure: false (4)
insecureSkipTLSVerify: false (5)
dockerfileConfigMap: (6)
name: <my_kmod_dockerfile>
registryTLS:
insecure: false (7)
insecureSkipTLSVerify: false (8)
1. Optional.
2. Optional.
3. Mounted in the build pod as /run/secrets/some-kubernetes-secret.
4. Optional: Avoid using this parameter. If set to true, the build is allowed to pull the image in the Dockerfile FROM instruction using plain HTTP.
5. Optional: Avoid using this parameter. If set to true, the build skips any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP.
6. Required.
7. Optional: Avoid using this parameter. If set to true, KMM is allowed to check whether the container image already exists using plain HTTP.
8. Optional: Avoid using this parameter. If set to true, KMM skips any TLS server certificate validation when checking whether the container image already exists.
The Driver Toolkit (DTK) is a convenient base image for building module loader images. It contains the tools and libraries for the OpenShift Container Platform version currently running in the cluster.
Use DTK as the first stage of a multi-stage Dockerfile
.
Build the kernel modules.
Copy the .ko
files into a smaller end-user image such as ubi-minimal
.
To leverage DTK in your in-cluster build, use the DTK_AUTO
build argument.
The value is automatically set by KMM when creating the Build
resource. See the following example.
ARG DTK_AUTO
FROM ${DTK_AUTO} as builder
ARG KERNEL_VERSION
WORKDIR /usr/src
RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"]
WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod
RUN KERNEL_SRC_DIR=/lib/modules/${KERNEL_VERSION}/build make all
FROM registry.redhat.io/ubi8/ubi-minimal
ARG KERNEL_VERSION
RUN microdnf install kmod
COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/${KERNEL_VERSION}/
COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/${KERNEL_VERSION}/
RUN depmod -b /opt ${KERNEL_VERSION}
On a Secure Boot enabled system, all kernel modules (kmods) must be signed with a public/private key pair enrolled into the Machine Owner's Key (MOK) database. Drivers distributed as part of a distribution should already be signed by the distribution's private key, but for kernel modules built out-of-tree, KMM supports signing kernel modules by using the sign
section of the kernel mapping.
For more details on using Secure Boot, see "Generating a public and private key pair".
A public/private key pair in the correct (DER) format.
At least one Secure Boot enabled node with the public key enrolled in its MOK database. A check for this is shown after these prerequisites.
Either a pre-built driver container image, or the source code and Dockerfile
needed to build one in-cluster.
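One way to confirm that a node meets the Secure Boot requirement is to open a debug shell on the node and query mokutil. The node name is a placeholder, and this assumes the mokutil utility is available in the node image:
$ oc debug node/<node_name> -- chroot /host mokutil --sb-state
$ oc debug node/<node_name> -- chroot /host mokutil --list-enrolled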
To use Kernel Module Management (KMM) to sign kernel modules, a certificate and private key are required. For details on how to create these, see Generating a public and private key pair.
For details on how to extract the public and private key pair, see Signing kernel modules with the private key. Use steps 1 through 4 to extract the keys into files.
Create the sb_cert.cer
file that contains the certificate and the sb_cert.priv
file that contains the private key:
$ openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out sb_cert.cer -keyout sb_cert.priv
Add the files by using one of the following methods:
Add the files as secrets directly:
$ oc create secret generic my-signing-key --from-file=key=sb_cert.priv
$ oc create secret generic my-signing-key-pub --from-file=cert=sb_cert.cer
Add the files by base64 encoding them:
$ cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64
$ cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64
Add the encoded text to a YAML file:
apiVersion: v1
kind: Secret
metadata:
name: my-signing-key-pub
namespace: default (1)
type: Opaque
data:
cert: <base64_encoded_secureboot_public_key>
---
apiVersion: v1
kind: Secret
metadata:
name: my-signing-key
namespace: default (1)
type: Opaque
data:
key: <base64_encoded_secureboot_private_key>
1. namespace: Replace default with a valid namespace.
Apply the YAML file:
$ oc apply -f <yaml_filename>
After you have added the keys, you must check them to ensure they are set correctly.
Check to ensure the public key secret is set correctly:
$ oc get secret -o yaml <certificate secret name> | awk '/cert/{print $2; exit}' | base64 -d | openssl x509 -inform der -text
This should display a certificate with a Serial Number, Issuer, Subject, and more.
Check to ensure the private key secret is set correctly:
$ oc get secret -o yaml <private key secret name> | awk '/key/{print $2; exit}' | base64 -d
This should display the key enclosed in the -----BEGIN PRIVATE KEY-----
and -----END PRIVATE KEY-----
lines.
Use this procedure if you have a pre-built image, such as an image either distributed by a hardware vendor or built elsewhere.
The following YAML file adds the public/private key-pair as secrets with the required key names - key
for the private key, cert
for the public key. The cluster then pulls down the unsignedImage
image, opens it, signs the kernel modules listed in filesToSign
, adds them back, and pushes the resulting image as containerImage
.
Kernel Module Management (KMM) should then deploy the DaemonSet that loads the signed kmods onto all nodes that match the selector. The driver containers should run successfully on any node that has the public key in its MOK database, and on any node that is not Secure Boot enabled, because such nodes ignore the signature. They should fail to load on any node that is Secure Boot enabled but does not have that key in its MOK database.
The keysecret
and certsecret
secrets have been created.
Apply the YAML file:
---
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
name: example-module
spec:
moduleLoader:
serviceAccountName: default
container:
modprobe: (1)
moduleName: '<your module name>'
kernelMappings:
# the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp
- regexp: '^.*\.x86_64$'
# the container to produce containing the signed kmods
containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed>
sign:
# the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster)
unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> >
keysecret: # a secret holding the private secureboot key with the key 'key'
name: <private key secret name>
certsecret: # a secret holding the public secureboot key with the key 'cert'
name: <certificate secret name>
filesToSign: # full path within the unsignedImage container to the kmod(s) to sign
- /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko
imageReposecret:
# the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry
name: repo-pull-secret
selector:
kubernetes.io/arch: amd64
1. modprobe: The name of the kmod to load.
Use this procedure if you have source code and must build your image first.
The following YAML file builds a new container image using the source code from the repository. The image produced is saved back in the registry with a temporary name, and this temporary image is then signed using the parameters in the sign
section.
The temporary image name is based on the final image name and is set to be <containerImage>:<tag>-<namespace>_<module name>_kmm_unsigned
.
For example, using the following YAML file, Kernel Module Management (KMM) builds an image named example.org/repository/minimal-driver:final-default_example-module_kmm_unsigned
that contains the build with unsigned kmods and pushes it to the registry. KMM then creates a second image named example.org/repository/minimal-driver:final
that contains the signed kmods. It is this second image that is loaded by the DaemonSet
object and that deploys the kmods to the cluster nodes.
After it is signed, the temporary image can be safely deleted from the registry. It will be rebuilt, if needed.
The keysecret
and certsecret
secrets have been created.
Apply the YAML file:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: example-module-dockerfile
namespace: default (1)
data:
Dockerfile: |
ARG DTK_AUTO
ARG KERNEL_VERSION
FROM ${DTK_AUTO} as builder
WORKDIR /build/
RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git
WORKDIR kernel-module-management/ci/kmm-kmod/
RUN make
FROM registry.access.redhat.com/ubi8/ubi:latest
ARG KERNEL_VERSION
RUN yum -y install kmod && yum clean all
RUN mkdir -p /opt/lib/modules/${KERNEL_VERSION}
COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/${KERNEL_VERSION}/
RUN /usr/sbin/depmod -b /opt
---
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
name: example-module
namespace: default (1)
spec:
moduleLoader:
serviceAccountName: default (2)
container:
modprobe:
moduleName: simple_kmod
kernelMappings:
- regexp: '^.*\.x86_64$'
containerImage: < the name of the final driver container to produce>
build:
dockerfileConfigMap:
name: example-module-dockerfile
sign:
keysecret:
name: <private key secret name>
certsecret:
name: <certificate secret name>
filesToSign:
- /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko
imageReposecret: (3)
name: repo-pull-secret
selector: # top-level selector
kubernetes.io/arch: amd64
1. namespace: Replace default with a valid namespace.
2. serviceAccountName: The default serviceAccountName does not have the required permissions to run a module that is privileged. For information on creating a service account, see "Creating service accounts" in the "Additional resources" of this section.
3. imageReposecret: Used as imagePullSecrets in the DaemonSet object and to pull and push images for the build and sign features.
For information on creating a service account, see Creating service accounts.
If the kmods in your driver container are not signed or are signed with the wrong key, then the container can enter a PostStartHookError
or CrashLoopBackOff
status. You can verify by running the oc describe
command on your container, which displays the following message in this scenario:
modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available
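For example, assuming the module loader pod runs in the openshift-kmm namespace (the pod name is a placeholder):
$ oc describe pod/<module_loader_pod_name> -n openshift-kmm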
Kernel modules sometimes need to load firmware files from the file system. KMM supports copying firmware files from the ModuleLoader image to the node’s file system.
The contents of .spec.moduleLoader.container.modprobe.firmwarePath
are copied into the /var/lib/firmware
path on the node before running the modprobe
command to insert the kernel module.
All files and empty directories are removed from that location before running the modprobe -r
command to unload the kernel module, when the pod is terminated.
On OpenShift Container Platform nodes, the set of default lookup paths for firmware does not include the /var/lib/firmware
path.
Use the Machine Config Operator to create a MachineConfig
custom resource (CR) that contains the /var/lib/firmware
path:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker (1)
name: 99-worker-kernel-args-firmware-path
spec:
kernelArguments:
- 'firmware_class.path=/var/lib/firmware'
1. You can configure the label based on your needs. In the case of single-node OpenShift, use either control-plane or master objects.
By applying the MachineConfig
CR, the nodes are automatically rebooted.
In addition to building the kernel module itself, include the binary firmware in the builder image:
FROM registry.redhat.io/ubi8/ubi-minimal as builder
# Build the kmod
RUN ["mkdir", "/firmware"]
RUN ["curl", "-o", "/firmware/firmware.bin", "https://artifacts.example.com/firmware.bin"]
FROM registry.redhat.io/ubi8/ubi-minimal
# Copy the kmod, install modprobe, run depmod
COPY --from=builder /firmware /firmware
Set .spec.moduleLoader.container.modprobe.firmwarePath
in the Module
custom resource (CR):
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
name: my-kmod
spec:
moduleLoader:
container:
modprobe:
moduleName: my-kmod # Required
firmwarePath: /firmware (1)
1. Optional: Copies /firmware/* into /var/lib/firmware/ on the node.
When troubleshooting KMM installation issues, you can monitor logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage.
The oc adm must-gather
command is the preferred way to collect a support bundle and provide debugging information to Red Hat
Support. Collect specific information by running the command with the appropriate arguments as described in the following sections.
Gather the data for the KMM Operator controller manager:
Set the MUST_GATHER_IMAGE
variable:
$ export MUST_GATHER_IMAGE=$(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}')
Run the must-gather
tool:
$ oc adm must-gather --image="${MUST_GATHER_IMAGE}" -- /usr/bin/gather
View the Operator logs:
$ oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager
I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s
I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0228 09:36:40.769483 1 main.go:234] kmm/setup "msg"="starting manager"
I0228 09:36:40.769907 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics"
I0228 09:36:40.770025 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe"
I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io...
I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io
I0228 09:36:40.784876 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1beta1.Module"
I0228 09:36:40.784925 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.DaemonSet"
I0228 09:36:40.784968 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Build"
I0228 09:36:40.785001 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Job"
I0228 09:36:40.785025 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Node"
I0228 09:36:40.785039 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module"
I0228 09:36:40.785458 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.Pod"
I0228 09:36:40.786947 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.PreflightValidation"
I0228 09:36:40.787406 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Build"
I0228 09:36:40.787474 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Job"
I0228 09:36:40.787488 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.Module"
I0228 09:36:40.787603 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" "source"="kind source: *v1.Node"
I0228 09:36:40.787634 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node"
I0228 09:36:40.787680 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation"
I0228 09:36:40.785607 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream"
I0228 09:36:40.787822 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidationOCP"
I0228 09:36:40.787853 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream"
I0228 09:36:40.787879 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidation"
I0228 09:36:40.787905 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP"
I0228 09:36:40.786489 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod"
Gather the data for the KMM Operator hub controller manager:
Set the MUST_GATHER_IMAGE
variable:
$ export MUST_GATHER_IMAGE=$(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}')
Run the must-gather
tool:
$ oc adm must-gather --image="${MUST_GATHER_IMAGE}" -- /usr/bin/gather -u
View the Operator logs:
$ oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager
I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s
I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup "msg"="Adding controller" "name"="ManagedClusterModule"
I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup "msg"="starting manager"
I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io...
I0417 11:34:12.378078 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics"
I0417 11:34:12.378222 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe"
I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io
I0417 11:34:12.396334 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1beta1.ManagedClusterModule"
I0417 11:34:12.396403 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManifestWork"
I0417 11:34:12.396430 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Build"
I0417 11:34:12.396469 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Job"
I0417 11:34:12.396522 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManagedCluster"
I0417 11:34:12.396543 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule"
I0417 11:34:12.397175 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream"
I0417 11:34:12.397221 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream"
I0417 11:34:12.498335 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="local-cluster"
I0417 11:34:12.498570 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="local-cluster"
I0417 11:34:12.498629 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="local-cluster"
I0417 11:34:12.498687 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="sno1-0"
I0417 11:34:12.498750 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="sno1-0"
I0417 11:34:12.498801 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="sno1-0"
I0417 11:34:12.501947 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "worker count"=1
I0417 11:34:12.501948 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "worker count"=1
I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub "msg"="registered imagestream info mapping" "ImageStream"={"name":"driver-toolkit","namespace":"openshift"} "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "dtkImage"="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934" "name"="driver-toolkit" "namespace"="openshift" "osImageVersion"="412.86.202302211547-0" "reconcileID"="e709ff0a-5664-4007-8270-49b5dff8bae9"
In hub and spoke scenarios, many spoke clusters are connected to a central, powerful hub cluster. Kernel Module Management (KMM) depends on Red Hat Advanced Cluster Management (RHACM) to operate in hub and spoke environments.
KMM supports hub and spoke environments by decoupling its features. A ManagedClusterModule
Custom Resource Definition (CRD) is provided to wrap the existing Module
CRD and extend it to select spoke clusters. Also provided is KMM-Hub, a new standalone controller that builds images and signs modules on the hub cluster.
In hub and spoke setups, spokes are focused, resource-constrained clusters that are centrally managed by a hub cluster. Spokes run the single-cluster edition of KMM, with resource-intensive features disabled. To adapt KMM to this environment, you should reduce the workload running on the spokes to a minimum, while the hub takes care of the expensive tasks.
Building kernel module images and signing the .ko
files should run on the hub. The scheduling of the ModuleLoader and device plugin DaemonSets
can only happen on the spokes.
The KMM project provides KMM-Hub, an edition of KMM dedicated to hub clusters. KMM-Hub monitors all kernel versions running on the spokes and determines the nodes on the cluster that should receive a kernel module.
KMM-Hub runs all compute-intensive tasks such as image builds and kmod signing, and prepares the trimmed-down Module
to be transferred to the spokes through RHACM.
KMM-Hub cannot be used to load kernel modules on the hub cluster. Install the regular edition of KMM to load kernel modules.
You can use one of the following methods to install KMM-Hub:
Using the Operator Lifecycle Manager (OLM)
Creating KMM resources
Use the Operators section of the OpenShift console to install KMM-Hub.
If you want to install KMM-Hub programmatically, you can use the following resources to create
the Namespace
, OperatorGroup
and Subscription
resources:
---
apiVersion: v1
kind: Namespace
metadata:
name: openshift-kmm-hub
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: kernel-module-management-hub
namespace: openshift-kmm-hub
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: kernel-module-management-hub
namespace: openshift-kmm-hub
spec:
channel: stable
installPlanApproval: Automatic
name: kernel-module-management-hub
source: redhat-operators
sourceNamespace: openshift-marketplace
Use the ManagedClusterModule
Custom Resource Definition (CRD) to configure the deployment of kernel modules on spoke clusters.
This CRD is cluster-scoped, wraps a Module
spec and adds the following additional fields:
apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1
kind: ManagedClusterModule
metadata:
name: <my-mcm>
# No namespace, because this resource is cluster-scoped.
spec:
moduleSpec: (1)
selector: (2)
node-wants-my-mcm: 'true'
spokeNamespace: <some-namespace> (3)
selector: (4)
wants-my-mcm: 'true'
1. moduleSpec: Contains moduleLoader and devicePlugin sections, similar to a Module resource.
2. Selects nodes within the ManagedCluster.
3. Specifies in which namespace the Module should be created.
4. Selects ManagedCluster objects.
If build or signing instructions are present in .spec.moduleSpec
, the corresponding build and signing pods run on the hub cluster in the operator's namespace.
When the .spec.selector matches
one or more ManagedCluster
resources, then KMM-Hub creates a ManifestWork
resource in the corresponding namespace(s). ManifestWork
contains a trimmed-down Module
resource, with kernel mappings preserved but all build
and sign
subsections removed. containerImage
fields that contain image names ending with a tag are replaced with their digest equivalent.
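To inspect what KMM-Hub generated for a particular spoke, you can list the ManifestWork objects in that spoke's cluster namespace on the hub. This assumes RHACM is installed and <spoke_cluster_name> is the namespace of the managed cluster:
$ oc get manifestworks -n <spoke_cluster_name>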
After installing KMM on the spoke, no further action is required. Create a ManagedClusterModule
object from the hub to deploy kernel modules on spoke clusters.
You can install KMM on the spoke clusters through an RHACM Policy
object.
In addition to installing KMM from OperatorHub and running it in a lightweight spoke mode,
the Policy
configures additional RBAC required for the RHACM agent to be able to manage Module
resources.
Use the following RHACM policy to install KMM on spoke clusters:
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
name: install-kmm
spec:
remediationAction: enforce
disabled: false
policy-templates:
- objectDefinition:
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
name: install-kmm
spec:
severity: high
object-templates:
- complianceType: mustonlyhave
objectDefinition:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-kmm
- complianceType: mustonlyhave
objectDefinition:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: kmm
namespace: openshift-kmm
spec:
upgradeStrategy: Default
- complianceType: mustonlyhave
objectDefinition:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: kernel-module-management
namespace: openshift-kmm
spec:
channel: stable
config:
env:
- name: KMM_MANAGED
value: "1"
installPlanApproval: Automatic
name: kernel-module-management
source: redhat-operators
sourceNamespace: openshift-marketplace
- complianceType: mustonlyhave
objectDefinition:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kmm-module-manager
rules:
- apiGroups: [kmm.sigs.x-k8s.io]
resources: [modules]
verbs: [create, delete, get, list, patch, update, watch]
- complianceType: mustonlyhave
objectDefinition:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: klusterlet-kmm
subjects:
- kind: ServiceAccount
name: klusterlet-work-sa
namespace: open-cluster-management-agent
roleRef:
kind: ClusterRole
name: kmm-module-manager
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
name: all-managed-clusters
spec:
clusterSelector: (1)
matchExpressions: []
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
name: install-kmm
placementRef:
apiGroup: apps.open-cluster-management.io
kind: PlacementRule
name: all-managed-clusters
subjects:
- apiGroup: policy.open-cluster-management.io
kind: Policy
name: install-kmm
1. The spec.clusterSelector field can be customized to target select clusters only.
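To deploy the policy, save the resources above to a file and apply it from the hub cluster; the file name is an example:
$ oc apply -f install-kmm-policy.yaml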