Machine configuration overview

There are times when you need to make changes to the operating systems running on OKD nodes. This can include changing settings for network time service, adding kernel arguments, or configuring journaling in a specific way.

Aside from a few specialized features, most changes to operating systems on OKD nodes can be made by creating what are referred to as MachineConfig objects that are managed by the Machine Config Operator. For example, you can use the Machine Config Operator (MCO) and machine configs to manage updates to systemd, CRI-O and kubelet, the kernel, NetworkManager, and other system features.

Tasks in this section describe how to use features of the Machine Config Operator to configure operating system features on OKD nodes.

NetworkManager stores new network configurations in /etc/NetworkManager/system-connections/ in a key file format.

Previously, NetworkManager stored new network configurations in /etc/sysconfig/network-scripts/ in the ifcfg format. Starting with RHEL 9.0, RHEL stores new network configurations in /etc/NetworkManager/system-connections/ in a key file format. Connection configurations stored in /etc/sysconfig/network-scripts/ in the old format continue to work uninterrupted. Modifications to existing profiles continue to update the older files.

About the Machine Config Operator

OKD 4 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Fedora CoreOS (FCOS) on cluster nodes, OKD provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades.

OKD employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include:

  • The machine-config-controller, which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates.

  • The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to the configuration defined by machine configs and as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OKD and FCOS updates together.

  • The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster.

The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OKD configuration.

When you perform node management operations, you create or modify a KubeletConfig custom resource (CR).

When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect.

You can mitigate the disruption caused by some machine config changes by using a node disruption policy. See Understanding node restart behaviors after machine config changes.

Alternatively, you can prevent the nodes from automatically rebooting after machine configuration changes before making the changes. Pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused field to false and the nodes have rebooted into the new configuration.
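For example, you can pause and later resume the autoreboot behavior for the worker pool with commands similar to the following; substitute the name of your machine config pool as needed:

$ oc patch machineconfigpool worker --type merge --patch '{"spec":{"paused":true}}'

$ oc patch machineconfigpool worker --type merge --patch '{"spec":{"paused":false}}'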

The following modifications do not trigger a node reboot:

  • When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:

    • Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config.

    • Changes to the global pull secret or pull secret in the openshift-config namespace.

    • Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator.

  • When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet, ImageTagMirrorSet, or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:

    • The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror.

    • The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry.

    • The addition of items to the unqualified-search-registries list.
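For illustration, a hypothetical ImageDigestMirrorSet similar to the following sketch adds a mirror that is used for digest pulls only; because the resulting entries in /etc/containers/registries.conf carry the pull-from-mirror = "digest-only" parameter, applying it should not drain the nodes. The source and mirror host names are placeholders:

apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-digest-mirrors
spec:
  imageDigestMirrors:
  - mirrors:
    - mirror.example.com/example-repo
    source: registry.example.com/example-repo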

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.

Machine config overview

The Machine Config Operator (MCO) manages updates to systemd, CRI-O and kubelet, the kernel, NetworkManager, and other system features. It also offers a MachineConfig CRD that can write configuration files onto the host (see machine-config-operator). Understanding what the MCO does and how it interacts with other components is critical to making advanced, system-level changes to an OKD cluster. Here are some things you should know about the MCO, machine configs, and how they are used:

  • Machine configs are processed in lexicographically increasing order by name. The render controller uses the first machine config in the list as the base and appends the rest to the base machine config.

  • A machine config can make a specific change to a file or service on the operating system of each machine in a pool of OKD nodes.

  • MCO applies changes to operating systems in pools of machines. All OKD clusters start with worker and control plane node pools. By adding more role labels, you can configure custom pools of nodes. For example, you can set up a custom pool of worker nodes that includes particular hardware features needed by an application. However, examples in this section focus on changes to the default pool types.

    A node can have multiple labels applied that indicate its type, such as master or worker; however, it can be a member of only a single machine config pool.

  • After a machine config change, the MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are upgraded by age, with the oldest nodes updated first. At any one time, the MCO updates no more nodes than the number specified by the maxUnavailable field on the machine config pool.

  • Some machine configuration must be in place before OKD is installed to disk. In most cases, this can be accomplished by creating a machine config that is injected directly into the OKD installer process, instead of running as a postinstallation machine config. In other cases, you might need to do bare metal installation where you pass kernel arguments at OKD installer startup, to do such things as setting per-node individual IP addresses or advanced disk partitioning.

  • The MCO manages items that are set in machine configs. Manual changes that you make to your systems are not overwritten by the MCO, unless the MCO is explicitly told to manage a conflicting file. In other words, the MCO makes only the specific updates that you request; it does not claim control over the whole node.

  • Manual changes to nodes are strongly discouraged. If you need to decommission a node and start a new one, those direct changes would be lost.

  • The MCO is supported for writing to files only in the /etc and /var directories. Some other directories are also writable because they are symbolic links into one of those areas; the /opt and /usr/local directories are examples.

  • Ignition is the configuration format used in MachineConfigs. See the Ignition Configuration Specification v3.2.0 for details.

  • Although Ignition config settings can be delivered directly at OKD installation time, and are formatted in the same way that MCO delivers Ignition configs, MCO has no way of seeing what those original Ignition configs are. Therefore, you should wrap Ignition config settings into a machine config before deploying them.

  • When a file managed by the MCO changes outside of the MCO, the Machine Config Daemon (MCD) sets the node as degraded. The MCD does not overwrite the offending file, however, and the node should continue to operate in a degraded state.

  • A key reason for using a machine config is that it will be applied when you spin up new nodes for a pool in your OKD cluster. The machine-api-operator provisions a new machine and MCO configures it.

MCO uses Ignition as the configuration format. OKD 4.6 moved from Ignition config specification version 2 to version 3.

What can you change with machine configs?

The kinds of components that MCO can change include:

  • config: Create Ignition config objects (see the Ignition configuration specification) to do things like modify files, systemd services, and other features on OKD machines, including:

    • Configuration files: Create or overwrite files in the /var or /etc directory.

    • systemd units: Create and set the status of a systemd service or add to an existing systemd service by dropping in additional settings.

    • users and groups: Change SSH keys in the passwd section postinstallation.

      • Changing SSH keys by using a machine config is supported only for the core user.

      • Adding new users by using a machine config is not supported.

  • kernelArguments: Add arguments to the kernel command line when OKD nodes boot.

  • kernelType: Optionally identify a non-standard kernel to use instead of the standard kernel. Use realtime to use the RT kernel (for RAN). This is only supported on select platforms. Use the 64k-pages parameter to enable the 64k page size kernel. This setting is exclusive to machines with 64-bit ARM architectures.

  • extensions: Extend FCOS features by adding selected pre-packaged software. For this feature, available extensions include usbguard and kernel modules.

  • Custom resources (for ContainerRuntime and Kubelet): Outside of machine configs, MCO manages two special custom resources for modifying CRI-O container runtime settings (ContainerRuntime CR) and the Kubelet service (Kubelet CR).
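As an illustration of several of these component types together, a hypothetical machine config for the worker pool might write a configuration file, add a kernel argument, and enable the usbguard extension; the file path, file contents, and kernel argument below are placeholders:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/example-tuning.conf
        mode: 420
        overwrite: true
        contents:
          source: data:,example%20setting%3Dtrue
  kernelArguments:
  - loglevel=7
  extensions:
  - usbguard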

The MCO is not the only Operator that can change operating system components on OKD nodes. Other Operators can modify operating system-level features as well. One example is the Node Tuning Operator, which allows you to do node-level tuning through Tuned daemon profiles.

MCO configuration tasks that can be done after installation are included in the following procedures. See descriptions of FCOS bare metal installation for system configuration tasks that must be done during or before OKD installation. By default, many of the changes you make with the MCO require a reboot.

The following modifications do not trigger a node reboot:

  • When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:

    • Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config.

    • Changes to the global pull secret or pull secret in the openshift-config namespace.

    • Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator.

  • When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet, ImageTagMirrorSet, or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:

    • The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror.

    • The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry.

    • The addition of items to the unqualified-search-registries list.

In other cases, you can mitigate the disruption to your workload when the MCO makes changes by using node disruption policies. For information, see Understanding node restart behaviors after machine config changes.

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated. For more information on configuration drift, see Understanding configuration drift detection.

Node configuration management with machine config pools

Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads.

By default, there are two MCPs created by the cluster when it is installed: master and worker. Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP updates.

For worker nodes, you can create additional MCPs, or custom pools, to manage nodes with custom use cases that extend outside of the default node types. Custom MCPs for the control plane nodes are not supported.

Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO.

A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, such as worker,infra, it is managed by the infra custom pool, not the worker pool. Custom pools take priority when selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool.

It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster.
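A custom infra pool along these lines is typically defined with a MachineConfigPool object similar to the following sketch, which applies machine configs labeled worker or infra to nodes that carry the node-role.kubernetes.io/infra label:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values:
      - worker
      - infra
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""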

Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. Which MCP manages an infra node does not affect how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads.
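For example, you might label and taint an infra node with commands similar to the following; the taint value and effect shown here are one common choice, not a requirement:

$ oc label node <node_name> node-role.kubernetes.io/infra=

$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule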

The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes.

There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift. The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but it cannot be updated.

Understanding the Machine Config Operator node drain behavior

When you use a machine config to change a system feature, such as adding new config files, modifying systemd units or kernel arguments, or updating SSH keys, the Machine Config Operator (MCO) applies those changes and ensures that each node is in the desired configuration state.

After you make the changes, the MCO generates a new rendered machine config. In the majority of cases, when applying the new rendered machine config, the Operator performs the following steps on each affected node until all of the affected nodes have the updated configuration:

  1. Cordon. The MCO marks the node as not schedulable for additional workloads.

  2. Drain. The MCO terminates all running workloads on the node, causing the workloads to be rescheduled onto other nodes.

  3. Apply. The MCO writes the new configuration to the nodes as needed.

  4. Reboot. The MCO restarts the node.

  5. Uncordon. The MCO marks the node as schedulable for workloads.

Throughout this process, the MCO maintains the required number of available nodes, based on the maxUnavailable value set in the machine config pool.
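For example, to allow up to two worker nodes to be updated at the same time, you could patch the pool with a command similar to the following; the default value is 1:

$ oc patch machineconfigpool worker --type merge --patch '{"spec":{"maxUnavailable":2}}'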

There are conditions which can prevent the MCO from draining a node. If the MCO fails to drain a node, the Operator will be unable to reboot the node, preventing any changes made to the node through a machine config. For more information and mitigation steps, see the MCCDrainError runbook.

If the MCO drains pods on the master node, note the following conditions:

  • In single-node OpenShift clusters, the MCO skips the drain operation.

  • The MCO does not drain static pods in order to prevent interference with services, such as etcd.

In certain cases the nodes are not drained. For more information, see "About the Machine Config Operator."

There are ways to mitigate the disruption caused by drain and reboot cycles by using node disruption policies or disabling control plane reboots. For more information, see "Understanding node restart behaviors after machine config changes" and "Disabling the Machine Config Operator from automatically rebooting."

Understanding configuration drift detection

There might be situations when the on-disk state of a node differs from what is configured in the machine config. This is known as configuration drift. For example, a cluster admin might manually modify a file, a systemd unit file, or a file permission that was configured through a machine config. This causes configuration drift. Configuration drift can cause problems between nodes in a Machine Config Pool or when the machine configs are updated.

The Machine Config Operator (MCO) uses the Machine Config Daemon (MCD) to check nodes for configuration drift on a regular basis. If detected, the MCO sets the node and the machine config pool (MCP) to Degraded and reports the error. A degraded node is online and operational, but it cannot be updated.

The MCD performs configuration drift detection upon each of the following conditions:

  • When a node boots.

  • After any of the files (Ignition files and systemd drop-in units) specified in the machine config are modified outside of the machine config.

  • Before a new machine config is applied.

    If you apply a new machine config to the nodes, the MCD temporarily shuts down configuration drift detection. This shutdown is needed because the new machine config necessarily differs from the machine config on the nodes. After the new machine config is applied, the MCD restarts detecting configuration drift using the new machine config.

When performing configuration drift detection, the MCD validates that the file contents and permissions fully match what the currently-applied machine config specifies. Typically, the MCD detects configuration drift in less than a second after the detection is triggered.

If the MCD detects configuration drift, the MCD performs the following tasks:

  • Emits an error to the console logs

  • Emits a Kubernetes event

  • Stops further detection on the node

  • Sets the node and MCP to degraded

You can check if you have a degraded node by listing the MCPs:

$ oc get mcp worker

If you have a degraded MCP, the DEGRADEDMACHINECOUNT field is non-zero, similar to the following output:

Example output
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-404caf3180818d8ac1f50c32f14b57c3   False     True       True       2              1                   1                     1                      5h51m

You can determine if the problem is caused by configuration drift by examining the machine config pool:

$ oc describe mcp worker
Example output
 ...
    Last Transition Time:  2021-12-20T18:54:00Z
    Message:               Node ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: "content mismatch for file \"/etc/mco-test-file\"" (1)
    Reason:                1 nodes are reporting degraded status on sync
    Status:                True
    Type:                  NodeDegraded (2)
 ...
1 This message shows that a node’s /etc/mco-test-file file, which was added by the machine config, has changed outside of the machine config.
2 The state of the node is NodeDegraded.

Or, if you know which node is degraded, examine that node:

$ oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
Example output
 ...

Annotations:        cloud.network.openshift.io/egress-ipconfig: [{"interface":"nic0","ifaddr":{"ipv4":"10.0.128.0/17"},"capacity":{"ip":10}}]
                    csi.volume.kubernetes.io/nodeid:
                      {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4"}
                    machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4
                    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
                    machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9
                    machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9
                    machineconfiguration.openshift.io/reason: content mismatch for file "/etc/mco-test-file" (1)
                    machineconfiguration.openshift.io/state: Degraded (2)
 ...
1 The error message indicating that configuration drift was detected between the node and the listed machine config. Here the error message indicates that the contents of the /etc/mco-test-file file, which was added by the machine config, have changed outside of the machine config.
2 The state of the node is Degraded.

You can correct configuration drift and return the node to the Ready state by performing one of the following remediations:

  • Ensure that the contents and file permissions of the files on the node match what is configured in the machine config. You can manually rewrite the file contents or change the file permissions.

  • Generate a force file on the degraded node. The force file causes the MCD to bypass the usual configuration drift detection and reapply the current machine config, as shown in the example that follows.

    Generating a force file on a node causes that node to reboot.
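A sketch of creating the force file from a debug shell, assuming the /run/machine-config-daemon-force path watched by the MCD:

$ oc debug node/<node_name> -- chroot /host touch /run/machine-config-daemon-force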

Checking machine config pool status

To see the status of the Machine Config Operator (MCO), its sub-components, and the resources it manages, use the following oc commands:

Procedure
  1. To see the number of MCO-managed nodes available on your cluster for each machine config pool (MCP), run the following command:

    $ oc get machineconfigpool
    Example output
    NAME      CONFIG                    UPDATED  UPDATING   DEGRADED  MACHINECOUNT  READYMACHINECOUNT  UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT  AGE
    master    rendered-master-06c9c4…   True     False      False     3             3                  3                   0                     4h42m
    worker    rendered-worker-f4b64…    False    True       False     3             2                  2                   0                     4h42m

    where:

    UPDATED

    The True status indicates that the MCO has applied the current machine config to the nodes in that MCP. The current machine config is specified in the CONFIG field of the oc get machineconfigpool output. The False status indicates that a node in the MCP is updating.

    UPDATING

    The True status indicates that the MCO is applying the desired machine config, as specified in the MachineConfigPool custom resource, to at least one of the nodes in that MCP. The desired machine config is the new, edited machine config. Nodes that are updating might not be available for scheduling. The False status indicates that all nodes in the MCP are updated.

    DEGRADED

    A True status indicates the MCO is blocked from applying the current or desired machine config to at least one of the nodes in that MCP, or the configuration is failing. Nodes that are degraded might not be available for scheduling. A False status indicates that all nodes in the MCP are ready.

    MACHINECOUNT

    Indicates the total number of machines in that MCP.

    READYMACHINECOUNT

    Indicates the total number of machines in that MCP that are ready for scheduling.

    UPDATEDMACHINECOUNT

    Indicates the total number of machines in that MCP that have the current machine config.

    DEGRADEDMACHINECOUNT

    Indicates the total number of machines in that MCP that are marked as degraded or unreconcilable.

    In the previous output, there are three control plane (master) nodes and three worker nodes. The control plane MCP and the associated nodes are updated to the current machine config. The nodes in the worker MCP are being updated to the desired machine config. Two of the nodes in the worker MCP are updated and one is still updating, as indicated by the UPDATEDMACHINECOUNT being 2. There are no issues, as indicated by the DEGRADEDMACHINECOUNT being 0 and DEGRADED being False.

    While the nodes in the MCP are updating, the machine config listed under CONFIG is the current machine config, which the MCP is being updated from. When the update is complete, the listed machine config is the desired machine config, which the MCP was updated to.

    If a node is being cordoned, that node is not included in the READYMACHINECOUNT, but is included in the MACHINECOUNT. Also, the MCP status is set to UPDATING. Because the node has the current machine config, it is counted in the UPDATEDMACHINECOUNT total:

    Example output
    NAME      CONFIG                    UPDATED  UPDATING   DEGRADED  MACHINECOUNT  READYMACHINECOUNT  UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT  AGE
    master    rendered-master-06c9c4…   True     False      False     3             3                  3                   0                     4h42m
    worker    rendered-worker-c1b41a…   False    True       False     3             2                  3                   0                     4h42m
  2. To check the status of the nodes in an MCP by examining the MachineConfigPool custom resource, run the following command:

    $ oc describe mcp worker
    Example output
    ...
      Degraded Machine Count:     0
      Machine Count:              3
      Observed Generation:        2
      Ready Machine Count:        3
      Unavailable Machine Count:  0
      Updated Machine Count:      3
    Events:                       <none>

    If a node is being cordoned, the node is not included in the Ready Machine Count. It is included in the Unavailable Machine Count:

    Example output
    ...
      Degraded Machine Count:     0
      Machine Count:              3
      Observed Generation:        2
      Ready Machine Count:        2
      Unavailable Machine Count:  1
      Updated Machine Count:      3
  3. To see each existing MachineConfig object, run the following command:

    $ oc get machineconfigs
    Example output
    NAME                             GENERATEDBYCONTROLLER          IGNITIONVERSION  AGE
    00-master                        2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    00-worker                        2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    01-master-container-runtime      2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    01-master-kubelet                2c9371fbb673b97a6fe8b1c52…     3.2.0            5h18m
    ...
    rendered-master-dde...           2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m
    rendered-worker-fde...           2c9371fbb673b97a6fe8b1c52...   3.2.0            5h18m

    Note that the MachineConfig objects listed as rendered are not meant to be changed or deleted.

  4. To view the contents of a particular machine config (in this case, 01-master-kubelet), run the following command:

    $ oc describe machineconfigs 01-master-kubelet

    The output from the command shows that this MachineConfig object contains both configuration files (cloud.conf and kubelet.conf) and a systemd service (Kubernetes Kubelet):

    Example output
    Name:         01-master-kubelet
    ...
    Spec:
      Config:
        Ignition:
          Version:  3.2.0
        Storage:
          Files:
            Contents:
              Source:   data:,
            Mode:       420
            Overwrite:  true
            Path:       /etc/kubernetes/cloud.conf
            Contents:
              Source:   data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous...
            Mode:       420
            Overwrite:  true
            Path:       /etc/kubernetes/kubelet.conf
        Systemd:
          Units:
            Contents:  [Unit]
    Description=Kubernetes Kubelet
    Wants=rpc-statd.service network-online.target crio.service
    After=network-online.target crio.service
    
    ExecStart=/usr/bin/hyperkube \
        kubelet \
          --config=/etc/kubernetes/kubelet.conf \ ...

If something goes wrong with a machine config that you apply, you can always back out that change. For example, if you had run oc create -f ./myconfig.yaml to apply a machine config, you could remove that machine config by running the following command:

$ oc delete -f ./myconfig.yaml

If that was the only problem, the nodes in the affected pool should return to a non-degraded state. Deleting the machine config causes the rendered configuration to roll back to its previously rendered state.

If you add your own machine configs to your cluster, you can use the commands shown in the previous example to check their status and the related status of the pool to which they are applied.

About checking machine config node status

If you make changes to the default control plane or worker nodes in your cluster, for example by using a machine config or kubelet config, you can get detailed information about the progress of the node updates by using the machine config nodes custom resource. Any change that results in a new machine config triggers the node update process.

You can view this detailed information for nodes in the control plane and worker machine config pools that were created upon OKD installation. You cannot view this information for nodes in custom machine config pools.

The MachineConfigNode custom resource allows you to monitor the progress of individual node updates as they move through the upgrade phases. This information can be helpful with troubleshooting if one of the nodes has an issue during the update. The custom resource reports where in the update process the node is at the moment, the phases that have completed, and the phases that are remaining.

The machine config node custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The node update process consists of the following phases and subphases that are tracked by the machine config node custom resource, explained with more detail later in this section:

  • Update Prepared. The MCO stops the configuration drift monitoring process and verifies that the newly-created machine config can be applied to a node.

    • UpdateCompatible

  • Update Executed. The MCO cordons and drains the node and applies the new machine config to the node’s files and operating system, as needed. It contains the following sub-phases:

    • Cordoned

    • Drained

    • AppliedFilesAndOS

  • PinnedImageSetsProgressing. The MCO is performing the steps needed to pin and pre-load container images.

  • PinnedImageSetsDegraded. The pinned image process failed. You can view the reason for the failure by using the oc describe machineconfignode command, as described later in this section.

  • Update Post-update action. The MCO reboots the node or reloads CRI-O, as needed.

    • RebootedNode

    • ReloadedCRIO

  • Update Complete. The MCO uncordons the nodes, updates the node state to the cluster, and resumes producing node metrics.

    • Updated

    • Uncordoned

  • Resumed. The MCO restarts the config drift monitor process and the node returns to operational state.

As the update moves through these phases, you can query the MachineConfigNode custom resource, which reports one of the following conditions for each phase:

  • True. The phase is complete on that node or the MCO has started that phase on that node.

  • False. The phase is either being executed or will not be executed on that node.

  • Unknown. The phase is either being executed on that node or has an error. If the phase has an error, you can use the oc describe machineconfignodes command for more information, as described later in this section.
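One way to find nodes that are reporting an Unknown condition, and therefore might warrant a closer look with oc describe machineconfignode, is a jq filter similar to the following sketch:

$ oc get machineconfignodes -o json | jq -r '.items[] | select(any(.status.conditions[]?; .status == "Unknown")) | .metadata.name'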

For example, consider a cluster with a newly-created machine config:

$ oc get machineconfig
Example output
# ...
rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   c00e2c941bc6e236b50e0bf3988e6c790cf2bbb2   3.4.0             6d15h
rendered-master-a386c2d1550b927d274054124f58be68   c00e2c941bc6e236b50e0bf3988e6c790cf2bbb2   3.4.0             7m26s
# ...
rendered-worker-01f27f752eb84eba917450e43636b210   c00e2c941bc6e236b50e0bf3988e6c790cf2bbb2   3.4.0             6d15h (1)
rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   c00e2c941bc6e236b50e0bf3988e6c790cf2bbb2   3.4.0             7m26s (2)
# ...
1 The current machine config for the control plane and worker nodes.
2 A newly-created machine config that is being applied to the control plane and worker nodes.

You can watch as the nodes are updated with the new machine config:

$ oc get machineconfignodes
Example output
NAME                                       POOLNAME   DESIREDCONFIG                                      CURRENTCONFIG                                      UPDATED
ci-ln-ds73n5t-72292-9xsm9-master-0         master     rendered-master-a386c2d1550b927d274054124f58be68   rendered-master-a386c2d1550b927d274054124f58be68   True
ci-ln-ds73n5t-72292-9xsm9-master-1         master     rendered-master-a386c2d1550b927d274054124f58be68   rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   False
ci-ln-ds73n5t-72292-9xsm9-master-2         master     rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   True
ci-ln-ds73n5t-72292-9xsm9-worker-a-2d8tz   worker     rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   True (1)
ci-ln-ds73n5t-72292-9xsm9-worker-b-gw5sd   worker     rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   rendered-worker-01f27f752eb84eba917450e43636b210   False (2)
ci-ln-ds73n5t-72292-9xsm9-worker-c-t227w   worker     rendered-worker-01f27f752eb84eba917450e43636b210   rendered-worker-01f27f752eb84eba917450e43636b210   True (3)
1 This node has been updated. The new machine config, rendered-worker-f351f6947f15cd0380514f4b1c89f8f2, is shown as the desired and current machine configs.
2 This node is currently being updated to the new machine config. The previous and new machine configs are shown as the desired and current machine configs, respectively.
3 This node has not yet been updated to the new machine config. The previous machine config is shown as the desired and current machine configs.
Table 1. Basic machine config node fields
Field Meaning

NAME

The name of the node.

POOLNAME

The name of the machine config pool associated with that node.

DESIREDCONFIG

The name of the new machine config that the node updates to.

CURRENTCONFIG

The name of the current machine configuration on that node.

UPDATED

Indicates if the node has been updated with one of the following conditions:

  • If False, the node is being updated to the new machine configuration shown in the DESIREDCONFIG field.

  • If True, and the CURRENTCONFIG matches the new machine configuration shown in the DESIREDCONFIG field, the node has been updated.

  • If True, and the CURRENTCONFIG matches the old machine configuration that is still shown in the DESIREDCONFIG field, that node has not yet started updating to the new machine configuration.

You can use the -o wide flag to display additional information about the updates:

$ oc get machineconfignodes -o wide
Example output
NAME                                       POOLNAME   DESIREDCONFIG                                      CURRENTCONFIG                                      UPDATED   UPDATEPREPARED   UPDATEEXECUTED   UPDATEPOSTACTIONCOMPLETE   UPDATECOMPLETE   RESUMED   UPDATECOMPATIBLE   UPDATEDFILESANDOS   CORDONEDNODE   DRAINEDNODE   REBOOTEDNODE   RELOADEDCRIO   UNCORDONEDNODE
ci-ln-ds73n5t-72292-9xsm9-master-0         master     rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   True      False            False            False                      False            False     False              False               False          False         False          False          False
ci-ln-ds73n5t-72292-9xsm9-master-1         master     rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   True      False            False            False                      False            False     False              False               False          False         False          False          False
ci-ln-ds73n5t-72292-9xsm9-master-2         master     rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   rendered-master-23cf200e4ee97daa6e39fdce24c9fb67   True      False            False            False                      False            False     False              False               False          False         False          False          False
ci-ln-ds73n5t-72292-9xsm9-worker-a-2d8tz   worker     rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   True      False            False            False                      False            False     False              False               False          False         False          False          False
ci-ln-ds73n5t-72292-9xsm9-worker-b-gw5sd   worker     rendered-worker-f351f6947f15cd0380514f4b1c89f8f2   rendered-worker-01f27f752eb84eba917450e43636b210   False     True             True             Unknown                    False            False     True               True                True           True          Unknown        False          False
ci-ln-ds73n5t-72292-9xsm9-worker-c-t227w   worker     rendered-worker-01f27f752eb84eba917450e43636b210   rendered-worker-01f27f752eb84eba917450e43636b210   True      False            False            False                      False            False     False              False               False          False         False          False          False

In addition to the fields defined in the previous table, the -o wide output displays the following fields:

Table 2. Machine config node fields in the -o wide output
Phase Name Definition

UPDATEPREPARED

Indicates if the MCO is preparing to update the node.

UPDATEEXECUTED

Indicates if the MCO has completed the body of the update on the node.

UPDATEPOSTACTIONCOMPLETE

Indicates if the MCO has executed the post-update actions on the node.

UPDATECOMPLETE

Indicates if the MCO has completed the update on the node.

RESUMED

Indicates if the node has resumed normal processes.

UPDATECOMPATIBLE

Indicates if the MCO has determined it can execute the update on the node.

UPDATEDFILESANDOS

Indicates if the MCO has updated the node files and operating system.

CORDONEDNODE

Indicates if the MCO has marked the node as not schedulable.

DRAINEDNODE

Indicates if the MCO has drained the node.

REBOOTEDNODE

Indicates if the MCO has restarted the node.

RELOADEDCRIO

Indicates if the MCO has restarted the CRI-O service.

UNCORDONEDNODE

Indicates if the MCO has marked the node as schedulable.

For more details on the update status, you can use the oc describe machineconfignode command:

$ oc describe machineconfignode/<machine_config_node_name>
Example output
Name:         <machine_config_node_name> (1)
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  machineconfiguration.openshift.io/v1alpha1
Kind:         MachineConfigNode
Metadata:
  Creation Timestamp:  2023-10-17T13:08:58Z
  Generation:          1
  Resource Version:    49443
  UID:                 4bd758ab-2187-413c-ac42-882e61761b1d
Spec:
  Node Ref:
    Name:         <node_name>
  Pool:
    Name:         worker
  ConfigVersion:
    Desired: rendered-worker-f351f6947f15cd0380514f4b1c89f8f2 (2)
Status:
  Conditions:
    Last Transition Time:  2025-01-14T17:01:16Z
    Message:               Node ci-ln-ds73n5t-72292-9xsm9-worker-b-gw5sd needs an update
    Reason:                Updated
    Status:                False
    Type:                  Updated
    Last Transition Time:  2025-01-14T17:01:18Z
    Message:               Update is Compatible.
    Reason:                UpdateCompatible
    Status:                True
    Type:                  UpdatePrepared
    Last Transition Time:  2025-01-14T17:04:08Z
    Message:               Updated the Files and OS on disk as a part of the in progress phase
    Reason:                AppliedFilesAndOS
    Status:                True
    Type:                  UpdateExecuted
    Last Transition Time:  2025-01-14T17:04:08Z
    Message:               Node will reboot into config rendered-worker-db01b33f959e5645a721da50a6db1fbb
    Reason:                RebootedNode
    Status:                Unknown
    Type:                  UpdatePostActionComplete
    Last Transition Time:  2025-01-14T16:04:27Z
    Message:               Action during update to rendered-worker-f351f6947f15cd0380514f4b1c89f8f2: UnCordoned Node as part of completing upgrade phase
    Reason:                Uncordoned
    Status:                False
    Type:                  UpdateComplete
    Last Transition Time:  2025-01-14T16:04:27Z
    Message:               Action during update to rendered-worker-f351f6947f15cd0380514f4b1c89f8f2: In desired config rendered-worker-01f27f752eb84eba917450e43636b210. Resumed normal operations.
    Reason:                Resumed
    Status:                False
    Type:                  Resumed
    Last Transition Time:  2025-01-14T17:01:18Z
    Message:               Update Compatible. Post Cfg Actions []: Drain Required: true
    Reason:                UpdatePreparedUpdateCompatible
    Status:                True
    Type:                  UpdateCompatible
    Last Transition Time:  2025-01-14T17:03:57Z
    Message:               Drained node. The drain is complete as the desired drainer matches current drainer: drain-rendered-worker-db01b33f959e5645a721da50a6db1fbb
    Reason:                UpdateExecutedDrained
    Status:                True
    Type:                  Drained
    Last Transition Time:  2025-01-14T17:04:08Z
    Message:               Applied files and new OS config to node. OS did not need an update. SSH Keys did not need an update
    Reason:                UpdateExecutedAppliedFilesAndOS
    Status:                True
    Type:                  AppliedFilesAndOS
    Last Transition Time:  2025-01-14T17:01:23Z
    Message:               Cordoned node. The node is reporting Unschedulable = true
    Reason:                UpdateExecutedCordoned
    Status:                True
    Type:                  Cordoned
    Last Transition Time:  2025-01-14T17:04:08Z
    Message:               Upgrade requires a reboot. Currently doing this as the post update action.
    Reason:                UpdatePostActionCompleteRebootedNode
    Status:                Unknown
    Type:                  RebootedNode
    Last Transition Time:  2025-01-14T15:30:57Z
    Message:               This node has not yet entered the ReloadedCRIO phase
    Reason:                NotYetOccured
    Status:                False
    Type:                  ReloadedCRIO
    Last Transition Time:  2025-01-14T16:04:27Z
    Message:               Action during update to rendered-worker-f351f6947f15cd0380514f4b1c89f8f2: UnCordoned node. The node is reporting Unschedulable = false
    Reason:                UpdateCompleteUncordoned
    Status:                False
    Type:                  Uncordoned
    Last Transition Time:  2025-01-14T16:04:07Z
    Message:               All is good
    Reason:                AsExpected
    Status:                False
    Type:                  PinnedImageSetsDegraded
    Last Transition Time:  2025-01-14T16:04:07Z
    Message:               All pinned image sets complete
    Reason:                AsExpected
    Status:                False
    Type:                  PinnedImageSetsProgressing
  Config Version:
    Current:            rendered-worker-01f27f752eb84eba917450e43636b210 (3)
    Desired:            rendered-worker-f351f6947f15cd0380514f4b1c89f8f2
  Observed Generation:  6
# ...
1 The MachineConfigNode object name.
2 The new machine configuration. This field updates after the MCO validates the machine config in the UPDATEPREPARED phase, then the status adds the new configuration.
3 The current machine config on the node.

Checking machine config node status

During the update of a machine config pool (MCP), you can monitor the progress of all control plane and worker nodes in your cluster by using the oc get machineconfignodes and oc describe machineconfignodes commands. These commands provide information that can be helpful if issues arise during the update and you need to troubleshoot a node.

You cannot use these commands with custom machine config pools.

For more information on the meaning of these fields, see "About checking machine config node status."

Prerequisites
  • You enabled the required Technology Preview features for your cluster by editing the FeatureGate CR named cluster:

    $ oc edit featuregate cluster
    Example FeatureGate CR
    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster
    spec:
      featureSet: TechPreviewNoUpgrade

    Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters.

    After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.

Procedure
  • View the update status of all nodes in the cluster, including the current and desired machine configurations, by running the following command:

    $ oc get machineconfignodes
    Example output
    NAME                                       POOLNAME   DESIREDCONFIG                                      CURRENTCONFIG                                      UPDATED
    ci-ln-mdb23yt-72292-kzdsg-master-0         master     rendered-master-f21b093d20f68a7c06f922ed3ea5fbc8   rendered-master-1abc053eec29e6c945670f39d6dc8afa   False
    ci-ln-mdb23yt-72292-kzdsg-master-1         master     rendered-master-1abc053eec29e6c945670f39d6dc8afa   rendered-master-1abc053eec29e6c945670f39d6dc8afa   True
    ci-ln-mdb23yt-72292-kzdsg-master-2         master     rendered-master-1abc053eec29e6c945670f39d6dc8afa   rendered-master-1abc053eec29e6c945670f39d6dc8afa   True
    ci-ln-mdb23yt-72292-kzdsg-worker-a-gfqjr   worker     rendered-worker-d0130cd74e9e576d7ba78ce166272bfb   rendered-worker-8f61bf839898a4487c3b5263a430e94a   False
    ci-ln-mdb23yt-72292-kzdsg-worker-b-gknq4   worker     rendered-worker-8f61bf839898a4487c3b5263a430e94a   rendered-worker-8f61bf839898a4487c3b5263a430e94a   True
    ci-ln-mdb23yt-72292-kzdsg-worker-c-mffrx   worker     rendered-worker-8f61bf839898a4487c3b5263a430e94a   rendered-worker-8f61bf839898a4487c3b5263a430e94a   True
  • View all of the machine config node status fields for the nodes in your cluster by running the following command:

    $ oc get machineconfignodes -o wide
    Example output
    NAME                                       POOLNAME   DESIREDCONFIG                                      CURRENTCONFIG                                      UPDATED   UPDATEPREPARED   UPDATEEXECUTED   UPDATEPOSTACTIONCOMPLETE   UPDATECOMPLETE   RESUMED   UPDATECOMPATIBLE   UPDATEDFILESANDOS   CORDONEDNODE   DRAINEDNODE   REBOOTEDNODE   RELOADEDCRIO   UNCORDONEDNODE
    ci-ln-g6dr34b-72292-g9btv-master-0         master     rendered-master-d4e122320b351cdbe1df59ddb63ddcfc   rendered-master-6f2064fcb36d2a914de5b0c660dc49ff   False     True             Unknown          False                      False            False     True               Unknown             False          False         False          False          False
    ci-ln-g6dr34b-72292-g9btv-master-1         master     rendered-master-6f2064fcb36d2a914de5b0c660dc49ff   rendered-master-6f2064fcb36d2a914de5b0c660dc49ff   True      False            False            False                      False            False     False              False               False          False         False          False          False
    ci-ln-g6dr34b-72292-g9btv-master-2         master     rendered-master-6f2064fcb36d2a914de5b0c660dc49ff   rendered-master-6f2064fcb36d2a914de5b0c660dc49ff   True      False            False            False                      False            False     False              False               False          False         False          False          False
    ci-ln-g6dr34b-72292-g9btv-worker-a-sjh5r   worker     rendered-worker-671b88c8c569fa3f60dc1a27cf9c91f2   rendered-worker-d5534cb730e5e108905fc285c2a42b6c   False     True             Unknown          False                      False            False     True               Unknown             False          False         False          False          False
    ci-ln-g6dr34b-72292-g9btv-worker-b-xthbz   worker     rendered-worker-d5534cb730e5e108905fc285c2a42b6c   rendered-worker-d5534cb730e5e108905fc285c2a42b6c   True      False            False            False                      False            False     False              False               False          False         False          False          False
    ci-ln-g6dr34b-72292-g9btv-worker-c-gnpd6   worker     rendered-worker-d5534cb730e5e108905fc285c2a42b6c   rendered-worker-d5534cb730e5e108905fc285c2a42b6c   True      False            False            False                      False            False     False              False               False          False         False          False          False
  • Check the update status of nodes in a specific machine config pool by running the following command, replacing <pool_name> with the name of the machine config pool:

    $ oc get machineconfignodes $(oc get machineconfignodes -o json | jq -r '.items[]|select(.spec.pool.name=="<pool_name>")|.metadata.name')
    Example output
    NAME                                       POOLNAME   DESIREDCONFIG                                      CURRENTCONFIG                                      UPDATED
    ci-ln-g6dr34b-72292-g9btv-worker-a-sjh5r   worker     rendered-worker-d5534cb730e5e108905fc285c2a42b6c   rendered-worker-d5534cb730e5e108905fc285c2a42b6c   True
    ci-ln-g6dr34b-72292-g9btv-worker-b-xthbz   worker     rendered-worker-d5534cb730e5e108905fc285c2a42b6c   rendered-worker-faf6b50218a8bbce21f1370866283de5   False
    ci-ln-g6dr34b-72292-g9btv-worker-c-gnpd6   worker     rendered-worker-faf6b50218a8bbce21f1370866283de5   rendered-worker-faf6b50218a8bbce21f1370866283de5   True
  • Check the update status of an individual node by running the following command:

    $ oc describe machineconfignode/<node_name>
    Example output
    Name:         <node_name>
    Namespace:
    Labels:       <none>
    Annotations:  <none>
    API Version:  machineconfiguration.openshift.io/v1alpha1
    Kind:         MachineConfigNode
    Metadata:
      Creation Timestamp:  2023-10-17T13:08:58Z
      Generation:          1
      Resource Version:    49443
      UID:                 4bd758ab-2187-413c-ac42-882e61761b1d
    Spec:
      Node Ref:
        Name:         <node_name>
      Pool:
        Name:         master
      ConfigVersion:
        Desired: rendered-worker-823ff8dc2b33bf444709ed7cd2b9855b
    Status:
    # ...
        Message:               Drained node. The drain is complete as the desired drainer matches current drainer: drain-rendered-worker-01f27f752eb84eba917450e43636b210
        Reason:                UpdateExecutedDrained
        Status:                True
        Type:                  Drained
        Last Transition Time:  2025-01-14T15:45:55Z
    # ...
      Config Version:
        Current:            rendered-master-8110974a5cea69dff5b263237b58abd8
        Desired:            rendered-worker-823ff8dc2b33bf444709ed7cd2b9855b
      Observed Generation:  6
    # ...
Additional resources

Understanding Machine Config Operator certificates

Machine Config Operator certificates are used to secure connections between the Red Hat Enterprise Linux CoreOS (RHCOS) nodes and the Machine Config Server. For more information, see Machine Config Operator certificates.

Viewing and interacting with certificates

The following certificates are handled in the cluster by the Machine Config Controller (MCC) and can be found in the ControllerConfig resource:

  • /etc/kubernetes/kubelet-ca.crt

  • /etc/kubernetes/static-pod-resources/configmaps/cloud-config/ca-bundle.pem

  • /etc/pki/ca-trust/source/anchors/openshift-config-user-ca-bundle.crt

The MCC also handles the image registry certificates and its associated user bundle certificate.

You can get information about the listed certificates, including the underlying bundle that each certificate comes from, and the signing and subject data.

Prerequisites
  • This procedure contains optional steps that require that the python-yq RPM package is installed.

Procedure
  • Get detailed certificate information by running the following command:

    $ oc get controllerconfig/machine-config-controller -o yaml | yq -y '.status.controllerCertificates'
    Example output
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2034-10-23T13:13:02Z'
      notBefore: '2024-10-25T13:13:02Z'
      signer: CN=admin-kubeconfig-signer,OU=openshift
      subject: CN=admin-kubeconfig-signer,OU=openshift
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2024-10-26T13:13:05Z'
      notBefore: '2024-10-25T13:27:14Z'
      signer: CN=kubelet-signer,OU=openshift
      subject: CN=kube-csr-signer_@1729862835
    - bundleFile: KubeAPIServerServingCAData
      notAfter: '2024-10-26T13:13:05Z'
      notBefore: '2024-10-25T13:13:05Z'
      signer: CN=kubelet-signer,OU=openshift
      subject: CN=kubelet-signer,OU=openshift
    # ...
  • Get a simpler version of the information found in the ControllerConfig resource by checking the machine config pool status using the following command:

    $ oc get mcp master -o yaml | yq -y '.status.certExpirys'
    Example output
    - bundle: KubeAPIServerServingCAData
      expiry: '2034-10-23T13:13:02Z'
      subject: CN=admin-kubeconfig-signer,OU=openshift
    - bundle: KubeAPIServerServingCAData
      expiry: '2024-10-26T13:13:05Z'
      subject: CN=kube-csr-signer_@1729862835
    - bundle: KubeAPIServerServingCAData
      expiry: '2024-10-26T13:13:05Z'
      subject: CN=kubelet-signer,OU=openshift
    - bundle: KubeAPIServerServingCAData
      expiry: '2025-10-25T13:13:05Z'
      subject: CN=kube-apiserver-to-kubelet-signer,OU=openshift
    # ...

    This method is meant for OKD applications that already consume machine config pool information.

  • Check which image registry certificates are on the nodes:

    1. Log in to a node:

      $ oc debug node/<node_name>
    2. Set /host as the root directory within the debug shell:

      sh-5.1# chroot /host
    3. Look at the contents of the /etc/docker/certs.d directory:

      sh-5.1# ls /etc/docker/certs.d
      Example output
      image-registry.openshift-image-registry.svc.cluster.local:5000
      image-registry.openshift-image-registry.svc:5000