Cluster tasks | Post-installation configuration | OKD 4.11

After installing OKD, you can further expand and customize your cluster to your requirements.

Available cluster customizations

You complete most of the cluster configuration and customization after you deploy your OKD cluster. A number of configuration resources are available.

If you install your cluster on IBM Z, not all features and functions are available.

You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.

For current documentation of the settings that you control by using these resources, use the oc explain command, for example: oc explain builds --api-version=config.openshift.io/v1.

Cluster configuration resources

All cluster configuration resources are globally scoped (not namespaced) and named cluster.

Resource name Description

apiserver.config.openshift.io

Provides API server configuration such as certificates and certificate authorities.

authentication.config.openshift.io

Controls the identity provider and authentication configuration for the cluster.

build.config.openshift.io

Controls default and enforced configuration for all builds on the cluster.

console.config.openshift.io

Configures the behavior of the web console interface, including the logout behavior.

featuregate.config.openshift.io

Enables FeatureGates so that you can use Tech Preview features.

image.config.openshift.io

Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details).

ingress.config.openshift.io

Configuration details related to routing such as the default domain for routes.

oauth.config.openshift.io

Configures identity providers and other behavior related to internal OAuth server flows.

project.config.openshift.io

Configures how projects are created including the project template.

proxy.config.openshift.io

Defines proxies to be used by components needing external network access. Note: not all components currently consume this value.

scheduler.config.openshift.io

Configures scheduler behavior such as profiles and default node selectors.
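For example, you can inspect or edit any of these resources by name. The following commands are a sketch; substitute any resource name from the table:

  $ oc get apiserver.config.openshift.io cluster -o yaml
  $ oc edit image.config.openshift.io cluster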

Operator configuration resources

These configuration resources are cluster-scoped instances, named cluster, which control the behavior of a specific component as owned by a particular Operator.

Resource name Description

consoles.operator.openshift.io

Controls console appearance such as branding customizations

config.imageregistry.operator.openshift.io

Configures OpenShift image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type.

config.samples.operator.openshift.io

Configures the Samples Operator to control which example image streams and templates are installed on the cluster.

Additional configuration resources

These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.

alertmanager.monitoring.coreos.com (instance name: main, namespace: openshift-monitoring)

Controls the Alertmanager deployment parameters.

ingresscontroller.operator.openshift.io (instance name: default, namespace: openshift-ingress-operator)

Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement.

Informational resources

You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.

clusterversion.config.openshift.io (instance name: version)

In OKD 4.11, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster.

dns.config.openshift.io (instance name: cluster)

You cannot modify the DNS settings for your cluster. You can view the DNS Operator status.

infrastructure.config.openshift.io (instance name: cluster)

Configuration details that allow the cluster to interact with its cloud provider.

network.config.openshift.io (instance name: cluster)

You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation.
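For example, you can view the informational resources without modifying them by running commands such as the following. This is a sketch; the output varies by cluster:

  $ oc get clusterversion version
  $ oc get dns.config.openshift.io cluster -o yaml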

Updating the global cluster pull secret

You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret.

This procedure is required when users use a separate registry to store images rather than the registry that was used during installation.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

Procedure
  1. Optional: To append a new pull secret to the existing pull secret, complete the following steps:

    1. Enter the following command to download the pull secret:

      $ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> (1)
      1 Provide the path to the pull secret file.
    2. Enter the following command to add the new pull secret:

      $ oc registry login --registry="<registry>" \ (1)
      --auth-basic="<username>:<password>" \ (2)
      --to=<pull_secret_location> (3)
      
      1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>".
      2 Provide the credentials of the new registry.
      3 Provide the path to the pull secret file.

      Alternatively, you can perform a manual update to the pull secret file, as shown in the sketch at the end of this procedure.

  2. Enter the following command to update the global pull secret for your cluster:

    $ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> (1)
    1 Provide the path to the new pull secret file.

    This update is rolled out to all nodes, which can take some time depending on the size of your cluster.

    As of OKD 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
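    If you perform the manual update mentioned in the optional step, the pull secret file uses the standard .dockerconfigjson layout. The following is a minimal sketch; the registry host, credentials, and email address are placeholders:

    {
      "auths": {
        "registry.example.com": {
          "auth": "<base64_encoded_username:password>",
          "email": "you@example.com"
        }
      }
    }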

Adjust worker nodes

If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new machine sets, scaling them up, and then scaling the original machine set down before removing it.

Understanding the difference between machine sets and the machine config pool

MachineSet objects describe OKD nodes with respect to the cloud or machine provider.

The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades.

The MachineConfigPool object allows users to configure how upgrades are rolled out to the OKD nodes in the machine config pool.

The NodeSelector object can be replaced with a reference to the MachineSet object.

Scaling a machine set manually

To add or remove an instance of a machine in a machine set, you can manually scale the machine set.

This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.

Prerequisites
  • Install an OKD cluster and the oc command line.

  • Log in to oc as a user with cluster-admin permission.

Procedure
  1. View the machine sets that are in the cluster:

    $ oc get machinesets -n openshift-machine-api

    The machine sets are listed in the form of <clusterid>-worker-<aws-region-az>.

  2. View the machines that are in the cluster:

    $ oc get machine -n openshift-machine-api
  3. Set the annotation on the machine that you want to delete:

    $ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true"
  4. Scale the machine set by running one of the following commands:

    $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api

    Or:

    $ oc edit machineset <machineset> -n openshift-machine-api

    You can alternatively apply the following YAML to scale the machine set:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      name: <machineset>
      namespace: openshift-machine-api
    spec:
      replicas: 2

    You can scale the machine set up or down. It takes several minutes for the new machines to be available.

    By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not succeed. If the drain operation fails, the machine controller cannot proceed with removing the machine.

    You can skip draining the node by adding the machine.openshift.io/exclude-node-draining annotation to the specific machine.
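    For example, the following is a sketch of setting that annotation, assuming that the presence of the annotation is sufficient and the machine name is a placeholder:

    $ oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""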

Verification
  • Verify the deletion of the intended machine:

    $ oc get machines

The machine set deletion policy

Random, Newest, and Oldest are the three supported deletion options. The default is Random, meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set:

spec:
  deletePolicy: <delete_policy>
  replicas: <desired_replica_count>
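For example, the following is a minimal sketch of setting the deletion policy on an existing machine set with oc patch; the machine set name is a placeholder:

$ oc patch machineset <machineset> -n openshift-machine-api --type merge --patch '{"spec":{"deletePolicy":"Newest"}}'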

Specific machines can also be prioritized for deletion by adding the annotation machine.openshift.io/cluster-api-delete-machine=true to the machine of interest, regardless of the deletion policy.

By default, the OKD router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods.

Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption.

Creating default cluster-wide node selectors

You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes.

With cluster-wide node selectors, when you create a pod in that cluster, OKD adds the default node selectors to the pod and schedules the pod on nodes with matching labels.

You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.

You can add additional key/value pairs to a pod, but you cannot add a different value for a default key.
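For example, assuming the default node selector is type=user-node,region=east, a pod such as the following sketch is scheduled with both the default selector and its own additional key. The disktype label and the image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd  # additional key/value pair; merged with the default node selector
  containers:
  - name: app
    image: registry.example.com/example/app:latest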

Procedure

To add a default cluster-wide node selector:

  1. Edit the Scheduler Operator CR to add the default cluster-wide node selectors:

    $ oc edit scheduler cluster
    Example Scheduler Operator CR with a node selector
    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      name: cluster
    ...
    spec:
      defaultNodeSelector: type=user-node,region=east (1)
      mastersSchedulable: false
    1 Add a node selector with the appropriate <key>:<value> pairs.

    After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy.

  2. Add labels to a node by using a machine set or editing the node directly:

    • Use a machine set to add labels to nodes managed by the machine set when a node is created:

      1. Run the following command to add labels to a MachineSet object:

        $ oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>":"<value>","<key>":"<value>"}}]'  -n openshift-machine-api (1)
        1 Add a <key>/<value> pair for each label.

        For example:

        $ oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]'  -n openshift-machine-api

        You can alternatively apply the following YAML to add labels to a machine set:

        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        metadata:
          name: <machineset>
          namespace: openshift-machine-api
        spec:
          template:
            spec:
              metadata:
                labels:
                  region: "east"
                  type: "user-node"
      2. Verify that the labels are added to the MachineSet object by using the oc edit command:

        For example:

        $ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api
        Example MachineSet object
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
          ...
        spec:
          ...
          template:
            metadata:
          ...
            spec:
              metadata:
                labels:
                  region: east
                  type: user-node
          ...
      3. Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes:

        For example:

        $ oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
        $ oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api
      4. When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command:

        $ oc get nodes -l <key>=<value>

        For example:

        $ oc get nodes -l type=user-node
        Example output
        NAME                                       STATUS   ROLES    AGE   VERSION
        ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp   Ready    worker   61s   v1.24.0
    • Add labels directly to a node:

      1. Edit the Node object for the node:

        $ oc label nodes <name> <key>=<value>

        For example, to label a node:

        $ oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east

        You can alternatively apply the following YAML to add labels to a node:

        kind: Node
        apiVersion: v1
        metadata:
          name: <node_name>
          labels:
            type: "user-node"
            region: "east"
      2. Verify that the labels are added to the node using the oc get command:

        $ oc get nodes -l <key>=<value>,<key>=<value>

        For example:

        $ oc get nodes -l type=user-node,region=east
        Example output
        NAME                                       STATUS   ROLES    AGE   VERSION
        ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49   Ready    worker   17m   v1.24.0

Improving cluster stability in high latency environments using worker latency profiles

If the cluster administrator has performed latency tests for platform verification, they might discover that the cluster needs adjustment to remain stable in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters that affect how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner.

The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OKD cluster. The Kubernetes Controller Manager (kube controller) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is:

  1. The node controller on the control plane updates the node health to Unhealthy and marks the node's Ready condition as Unknown.

  2. In response, the scheduler stops scheduling pods to that node.

  3. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default.

This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy.

To avoid this problem, you can use worker latency profiles to adjust the frequency that the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal.

These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency, so you do not need to find the best values experimentally.

You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network.

Understanding worker latency profiles

Worker latency profiles are carefully tuned sets of four parameters: node-status-update-frequency, node-monitor-grace-period, default-not-ready-toleration-seconds, and default-unreachable-toleration-seconds. These parameters use values that allow you to control the reaction of the cluster to latency issues without needing to determine the best values by manual methods.

Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability.

All worker latency profiles configure the following parameters:

node-status-update-frequency

Specifies how often the kubelet posts node status to the API server.

node-monitor-grace-period

Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node.

default-not-ready-toleration-seconds

Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node.

default-unreachable-toleration-seconds

Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node.
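As context, the two toleration values correspond to default tolerations that are added to pods, similar to the following sketch. It is shown with the 300-second defaults; actual pod specifications can differ:

tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300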

The following Operators monitor the changes to the worker latency profiles and respond accordingly:

  • The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes.

  • The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes.

  • The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes.

While the default configuration works in most cases, OKD offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections:

Default worker latency profile

With the Default profile, each Kubelet updates its status every 10 seconds (node-status-update-frequency). The Kubernetes Controller Manager checks the status reported by the Kubelet every 5 seconds.

The Kubernetes Controller Manager waits 40 seconds (node-monitor-grace-period) for a status update from the Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node.

If a pod on that node has a toleration with tolerationSeconds for the NoExecute taint, the pod is evicted according to that value. If the pod has no such toleration, it is evicted in 300 seconds (the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server).

Profile: Default

  Component                        Parameter                                 Value
  kubelet                          node-status-update-frequency              10s
  Kubernetes Controller Manager    node-monitor-grace-period                 40s
  Kubernetes API Server Operator   default-not-ready-toleration-seconds      300s
  Kubernetes API Server Operator   default-unreachable-toleration-seconds    300s

Medium worker latency profile

Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual.

The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter.

The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts.

Profile: MediumUpdateAverageReaction

  Component                        Parameter                                 Value
  kubelet                          node-status-update-frequency              20s
  Kubernetes Controller Manager    node-monitor-grace-period                 2m
  Kubernetes API Server Operator   default-not-ready-toleration-seconds      60s
  Kubernetes API Server Operator   default-unreachable-toleration-seconds    60s

Low worker latency profile

Use the LowUpdateSlowReaction profile if the network latency is extremely high.

The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter.

The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts.

Profile: LowUpdateSlowReaction

  Component                        Parameter                                 Value
  kubelet                          node-status-update-frequency              1m
  Kubernetes Controller Manager    node-monitor-grace-period                 5m
  Kubernetes API Server Operator   default-not-ready-toleration-seconds      60s
  Kubernetes API Server Operator   default-unreachable-toleration-seconds    60s

Using and changing worker latency profiles

To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases.

You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction. Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default.

You can also configure worker latency profiles upon installing an OKD cluster.
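Before changing profiles, you can check which profile, if any, is currently set. The following command is a sketch; the output is empty when no profile has been set:

$ oc get nodes.config/cluster -o jsonpath='{.spec.workerLatencyProfile}{"\n"}'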

Procedure

To move from the default worker latency profile:

  1. Move to the medium worker latency profile:

    1. Edit the node.config object:

      $ oc edit nodes.config/cluster
    2. Add spec.workerLatencyProfile: MediumUpdateAverageReaction:

      Example node.config object
      apiVersion: config.openshift.io/v1
      kind: Node
      metadata:
        annotations:
          include.release.openshift.io/ibm-cloud-managed: "true"
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
          release.openshift.io/create-only: "true"
        creationTimestamp: "2022-07-08T16:02:51Z"
        generation: 1
        name: cluster
        ownerReferences:
        - apiVersion: config.openshift.io/v1
          kind: ClusterVersion
          name: version
          uid: 36282574-bf9f-409e-a6cd-3032939293eb
        resourceVersion: "1865"
        uid: 0c0f7a4c-4307-4187-b591-6155695ac85b
      spec:
        workerLatencyProfile: MediumUpdateAverageReaction (1)
      
      # ...
      1 Specifies the medium worker latency policy.

      Scheduling on each worker node is disabled as the change is being applied.

  2. Optional: Move to the low worker latency profile:

    1. Edit the node.config object:

      $ oc edit nodes.config/cluster
    2. Change the spec.workerLatencyProfile value to LowUpdateSlowReaction:

      Example node.config object
      apiVersion: config.openshift.io/v1
      kind: Node
      metadata:
        annotations:
          include.release.openshift.io/ibm-cloud-managed: "true"
          include.release.openshift.io/self-managed-high-availability: "true"
          include.release.openshift.io/single-node-developer: "true"
          release.openshift.io/create-only: "true"
        creationTimestamp: "2022-07-08T16:02:51Z"
        generation: 1
        name: cluster
        ownerReferences:
        - apiVersion: config.openshift.io/v1
          kind: ClusterVersion
          name: version
          uid: 36282574-bf9f-409e-a6cd-3032939293eb
        resourceVersion: "1865"
        uid: 0c0f7a4c-4307-4187-b591-6155695ac85b
      spec:
        workerLatencyProfile: LowUpdateSlowReaction (1)
      
      # ...
      1 Specifies use of the low worker latency policy.

Scheduling on each worker node is disabled as the change is being applied.

Verification
  • When all nodes return to the Ready condition, you can use the following command to check the Kubernetes Controller Manager and confirm that the profile was applied:

    $ oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5
    Example output
    # ...
        - lastTransitionTime: "2022-07-11T19:47:10Z"
          reason: ProfileUpdated
          status: "False"
          type: WorkerLatencyProfileProgressing
        - lastTransitionTime: "2022-07-11T19:47:10Z" (1)
          message: all static pod revision(s) have updated latency profile
          reason: ProfileUpdated
          status: "True"
          type: WorkerLatencyProfileComplete
        - lastTransitionTime: "2022-07-11T19:20:11Z"
          reason: AsExpected
          status: "False"
          type: WorkerLatencyProfileDegraded
        - lastTransitionTime: "2022-07-11T19:20:36Z"
          status: "False"
    # ...
    1 Specifies that the profile is applied and active.

To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value.

Creating infrastructure machine sets for production environments

You can create a machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

For information on infrastructure nodes and which components can run on infrastructure nodes, see Creating infrastructure machine sets.

To create an infrastructure node, you can use a machine set, assign a label to the nodes, or use a machine config pool.

For sample machine sets that you can use with these procedures, see Creating machine sets for different clouds.

Applying a specific node selector to all infrastructure components causes OKD to schedule those workloads on nodes with that label.
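The machine set examples in the following procedures include the cluster infrastructure ID in resource names. The following command is a sketch of one way to retrieve that ID:

$ oc get infrastructure.config.openshift.io cluster -o jsonpath='{.status.infrastructureName}{"\n"}'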

Creating a machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites
  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure
  1. Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api
      Example output
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml
      Example output
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        name: <infrastructure_id>-<role> (2)
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: (3)
              ...
      1 The cluster infrastructure ID.
      2 A default node label.

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml
Verification
  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api
    Example output
    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.

Creating an infrastructure node

See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.

Requirements of the cluster dictate that infrastructure nodes, also called infra nodes, be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application nodes, also called app nodes, through labeling.

Procedure
  1. Add a label to the worker node that you want to act as application node:

    $ oc label node <node-name> node-role.kubernetes.io/app=""
  2. Add a label to the worker nodes that you want to act as infrastructure nodes:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""
  3. Check to see if applicable nodes now have the infra role and app roles:

    $ oc get nodes
  4. Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.

    If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.

    However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod’s label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.

    You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts; see the sketch after this procedure.

    1. Edit the Scheduler object:

      $ oc edit scheduler cluster
    2. Add the defaultNodeSelector field with the appropriate node selector:

      apiVersion: config.openshift.io/v1
      kind: Scheduler
      metadata:
        name: cluster
      spec:
        defaultNodeSelector: topology.kubernetes.io/region=us-east-1 (1)
      # ...
      1 This example node selector deploys pods on nodes in the us-east-1 region by default.
    3. Save the file to apply the changes.

You can now move infrastructure resources to the newly labeled infra nodes.
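As an alternative to the cluster-wide default node selector, you can set a project node selector per namespace with the openshift.io/node-selector annotation. The following is a sketch; the project name is a placeholder:

$ oc annotate namespace <project_name> openshift.io/node-selector="node-role.kubernetes.io/infra="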

Additional resources
  • For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see Project node selectors.

Creating a machine config pool for infrastructure machines

If you need infrastructure machines to have dedicated configurations, you must create an infra pool.

Procedure
  1. Add a label to the node you want to assign as the infra node with a specific label:

    $ oc label node <node_name> <label>
    $ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
  2. Create a machine config pool that contains both the worker role and your custom role as machine config selector:

    $ cat infra.mcp.yaml
    Example output
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: infra
    spec:
      machineConfigSelector:
        matchExpressions:
          - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: "" (2)
    1 Add the worker role and your custom role.
    2 Add the label you added to the node as a nodeSelector.

    Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.

  3. After you have the YAML file, you can create the machine config pool:

    $ oc create -f infra.mcp.yaml
  4. Check the machine configs to ensure that the infrastructure configuration rendered successfully:

    $ oc get machineconfig
    Example output
    NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
    00-master                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    00-worker                                                   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-master-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-master-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-worker-container-runtime                                 365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-worker-kubelet                                           365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-master-ssh                                                                                          3.2.0             31d
    99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries   365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-worker-ssh                                                                                          3.2.0             31d
    rendered-infra-4e48906dca84ee702959c71a53ee80e7             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             23m
    rendered-master-072d4b2da7f88162636902b074e9e28e            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-master-3e88ec72aed3886dec061df60d16d1af            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
    rendered-master-419bee7de96134963a15fdf9dd473b25            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
    rendered-master-53f5c91c7661708adce18739cc0f40fb            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
    rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
    rendered-master-dc7f874ec77fc4b969674204332da037            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-2640531be11ba43c61d72e82dc634ce6            5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-4e48906dca84ee702959c71a53ee80e7            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
    rendered-worker-4f110718fe88e5f349987854a1147755            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
    rendered-worker-afc758e194d6188677eb837842d3b379            02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
    rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d

    You should see a new machine config, with the rendered-infra-* prefix.

  5. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.

    After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.

    1. Create a machine config:

      $ cat infra.mc.yaml
      Example output
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        name: 51-infra
        labels:
          machineconfiguration.openshift.io/role: infra (1)
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
            - path: /etc/infratest
              mode: 0644
              contents:
                source: data:,infra
      1 Add the custom role label so that the machine config is applied to the nodes in the infra machine config pool.
    2. Apply the machine config to the infra-labeled nodes:

      $ oc create -f infra.mc.yaml
  6. Confirm that your new machine config pool is available:

    $ oc get mcp
    Example output
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
    master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
    worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m

    In this example, a worker node was changed to an infra node.


Assigning machine set resources to infrastructure nodes

After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.

However, when an infra node is assigned the worker role, there is a chance that user workloads can get assigned inadvertently to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.

Binding infrastructure node workloads using taints and tolerations

If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.

It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that the node does not count toward the total number of subscriptions.

Prerequisites
  • Configure additional MachineSet objects in your OKD cluster.

Procedure
  1. Add a taint to the infra node to prevent scheduling user workloads on it:

    1. Determine if the node has the taint:

      $ oc describe nodes <node_name>
      Sample output
      oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Name:               ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Roles:              worker
       ...
      Taints:             node-role.kubernetes.io/infra:NoSchedule
       ...

      This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.

    2. If the node does not have the taint, add one to prevent scheduling user workloads on it:

      $ oc adm taint nodes <node_name> <key>=<value>:<effect>

      For example:

      $ oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute

      You can alternatively apply the following YAML to add the taint:

      kind: Node
      apiVersion: v1
      metadata:
        name: <node_name>
        labels:
          ...
      spec:
        taints:
          - key: node-role.kubernetes.io/infra
            effect: NoExecute
            value: reserved
        ...

      This example places a taint on node1 that has the key node-role.kubernetes.io/infra, the value reserved, and the taint effect NoExecute. A node with the NoExecute effect schedules only pods that tolerate the taint and evicts existing pods that do not tolerate the taint.

      If a descheduler is used, pods violating node taints could be evicted from the cluster.

  2. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:

    tolerations:
      - effect: NoExecute (1)
        key: node-role.kubernetes.io/infra (2)
        operator: Equal (3)
        value: reserved (4)
    1 Specify the effect that you added to the node.
    2 Specify the key that you added to the node.
    3 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra and the value reserved to be on the node.
    4 Specify the value of the key-value pair taint that you added to the node.

    This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.

    Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.

  3. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.


Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

Moving the router

You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.

Prerequisites
  • Configure additional machine sets in your OKD cluster.

Procedure
  1. View the IngressController custom resource for the router Operator:

    $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

    The command output resembles the following text:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-04-18T12:35:39Z
      finalizers:
      - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
      generation: 1
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "11341"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: 79509e05-61d6-11e9-bc55-02ce4781844a
    spec: {}
    status:
      availableReplicas: 2
      conditions:
      - lastTransitionTime: 2019-04-18T12:36:15Z
        status: "True"
        type: Available
      domain: apps.<cluster>.example.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

    $ oc edit ingresscontroller default -n openshift-ingress-operator
      spec:
        nodePlacement:
          nodeSelector: (1)
            matchLabels:
              node-role.kubernetes.io/infra: ""
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/infra
            value: reserved
          - effect: NoExecute
            key: node-role.kubernetes.io/infra
            value: reserved
    1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
  3. Confirm that the router pod is running on the infra node.

    1. View the list of router pods and note the node name of the running pod:

      $ oc get pod -n openshift-ingress -o wide
      Example output
      NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
      router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
      router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

    2. View the node status of the running pod:

      $ oc get node <node_name> (1)
      1 Specify the <node_name> that you obtained from the pod list.
      Example output
      NAME                          STATUS  ROLES         AGE   VERSION
      ip-10-0-217-226.ec2.internal  Ready   infra,worker  17h   v1.24.0

      Because the role list includes infra, the pod is running on the correct node.

Moving the default registry

You configure the registry Operator to deploy its pods to different nodes.

Prerequisites
  • Configure additional machine sets in your OKD cluster.

Procedure
  1. View the config/instance object:

    $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
    Example output
    apiVersion: imageregistry.operator.openshift.io/v1
    kind: Config
    metadata:
      creationTimestamp: 2019-02-05T13:52:05Z
      finalizers:
      - imageregistry.operator.openshift.io/finalizer
      generation: 1
      name: cluster
      resourceVersion: "56174"
      selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
      uid: 36fd3724-294d-11e9-a524-12ffeee2931b
    spec:
      httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
      logging: 2
      managementState: Managed
      proxy: {}
      replicas: 1
      requests:
        read: {}
        write: {}
      storage:
        s3:
          bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
          region: us-east-1
    status:
    ...
  2. Edit the config/instance object:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              namespaces:
              - openshift-image-registry
              topologyKey: kubernetes.io/hostname
            weight: 100
      logLevel: Normal
      managementState: Managed
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        value: reserved
      - effect: NoExecute
        key: node-role.kubernetes.io/infra
        value: reserved
    1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
  3. Verify the registry pod has been moved to the infrastructure node.

    1. Run the following command to identify the node where the registry pod is located:

      $ oc get pods -o wide -n openshift-image-registry
    2. Confirm the node has the label you specified:

      $ oc describe node <node_name>

      Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.

Moving the monitoring solution

The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.

Procedure
  1. Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:

    $ oc edit configmap cluster-monitoring-config -n openshift-monitoring
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |+
        alertmanagerMain:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        prometheusK8s:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        prometheusOperator:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        k8sPrometheusAdapter:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        kubeStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        telemeterClient:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        openshiftStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
        thanosQuerier:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          tolerations:
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoSchedule
          - key: node-role.kubernetes.io/infra
            value: reserved
            effect: NoExecute
    1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
  2. Watch the monitoring pods move to the new machines:

    $ watch 'oc get pod -n openshift-monitoring -o wide'
  3. If a component has not moved to the infra node, delete the pod with this component:

    $ oc delete pod -n openshift-monitoring <pod>

    The component from the deleted pod is re-created on the infra node.
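    To confirm where the monitoring components landed, you can filter pods by node. The following command is a sketch; the node name is a placeholder:

    $ oc get pods -n openshift-monitoring -o wide --field-selector spec.nodeName=<infra_node_name>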

Moving logging resources

You can configure the Red Hat OpenShift Logging Operator to deploy the pods for logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

Prerequisites
  • You have installed the Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator.

Procedure
  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    Example ClusterLogging CR
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    spec:
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/infra
            value: reserved
          - effect: NoExecute
            key: node-role.kubernetes.io/infra
            value: reserved
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/infra
            value: reserved
          - effect: NoExecute
            key: node-role.kubernetes.io/infra
            value: reserved
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    # ...
    1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
    Example output
    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes
    Example output
    NAME                                         STATUS   ROLES          AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master         60m   v1.24.0
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master         60m   v1.24.0
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker         51m   v1.24.0
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker         51m   v1.24.0
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker         51m   v1.24.0
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master         60m   v1.24.0
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra          51m   v1.24.0

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
    Example output
    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    # ...
    spec:
    # ...
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    1 Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods
    Example output
    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    collector-42dzz                                 1/1     Running       0          28m
    collector-d74rq                                 1/1     Running       0          28m
    collector-m5vr9                                 1/1     Running       0          28m
    collector-nkxl7                                 1/1     Running       0          28m
    collector-pdvqb                                 1/1     Running       0          28m
    collector-tflh6                                 1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide
    Example output
    NAME                      READY   STATUS        RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running       0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
  • After a few moments, the original Kibana pod is removed.

    $ oc get pods
    Example output
    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    collector-42dzz                                 1/1     Running   0          29m
    collector-d74rq                                 1/1     Running   0          29m
    collector-m5vr9                                 1/1     Running   0          29m
    collector-nkxl7                                 1/1     Running   0          29m
    collector-pdvqb                                 1/1     Running   0          29m
    collector-tflh6                                 1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s

About the cluster autoscaler

The cluster autoscaler adjusts the size of an OKD cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace.

The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources.

Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.

Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply:

  • The node utilization is less than the node utilization level threshold for the cluster. The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5, which corresponds to 50% utilization.

  • The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes.

  • The node does not have the scale down disabled annotation.

If the following types of pods are present on a node, the cluster autoscaler will not remove the node:

  • Pods with restrictive pod disruption budgets (PDBs).

  • Kube-system pods that do not run on the node by default.

  • Kube-system pods that do not have a PDB or have a PDB that is too restrictive.

  • Pods that are not backed by a controller object such as a deployment, replica set, or stateful set.

  • Pods with local storage.

  • Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.

  • Pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation, unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation.

For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62.

If you configure the cluster autoscaler, additional usage restrictions apply:

  • Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods.

  • Specify requests for your pods.

  • If you have to prevent pods from being deleted too quickly, configure appropriate PDBs.

  • Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.

  • Do not run additional node group autoscalers, especially the ones offered by your cloud provider.

The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment’s or replica set’s number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes.

The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available.

Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.
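
For example, you can define a PriorityClass with a value below the cutoff so that pods using it never trigger a scale-up. The following is a minimal sketch, assuming a podPriorityThreshold of -10 as in the sample ClusterAutoscaler definition below; the class name and description are illustrative:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-autoscaling  # illustrative name
value: -15  # lower than the podPriorityThreshold of -10
globalDefault: false
description: "Pods that use this class run only on spare capacity and do not cause the cluster autoscaler to add nodes."

Pods reference this class through their spec.priorityClassName field.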

Cluster autoscaling is supported on platforms that have the Machine API available.

Cluster autoscaler resource definition

This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler.

apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
  name: "default"
spec:
  podPriorityThreshold: -10 (1)
  resourceLimits:
    maxNodesTotal: 24 (2)
    cores:
      min: 8 (3)
      max: 128 (4)
    memory:
      min: 4 (5)
      max: 256 (6)
    gpus:
      - type: nvidia.com/gpu (7)
        min: 0 (8)
        max: 16 (9)
      - type: amd.com/gpu
        min: 0
        max: 4
  scaleDown: (10)
    enabled: true (11)
    delayAfterAdd: 10m (12)
    delayAfterDelete: 5m (13)
    delayAfterFailure: 30s (14)
    unneededTime: 5m (15)
    utilizationThreshold: "0.4" (16)
1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod.
2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources.
3 Specify the minimum number of cores to deploy in the cluster.
4 Specify the maximum number of cores to deploy in the cluster.
5 Specify the minimum amount of memory, in GiB, in the cluster.
6 Specify the maximum amount of memory, in GiB, in the cluster.
7 Optional: Specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types.
8 Specify the minimum number of GPUs to deploy in the cluster.
9 Specify the maximum number of GPUs to deploy in the cluster.
10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns, us, ms, s, m, and h.
11 Specify whether the cluster autoscaler can remove unnecessary nodes.
12 Optional: Specify the period to wait before deleting a node after a node has recently been added. If you do not specify a value, the default value of 10m is used.
13 Optional: Specify the period to wait before deleting a node after a node has recently been deleted. If you do not specify a value, the default value of 0s is used.
14 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used.
15 Optional: Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used.
16 Optional: Specify the node utilization level below which an unnecessary node is eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1". If you do not specify a value, the cluster autoscaler uses a default value of "0.5", which corresponds to 50% utilization. This value must be expressed as a string.

When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges.

The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes.

Deploying a cluster autoscaler

To deploy a cluster autoscaler, you create an instance of the ClusterAutoscaler resource.

Procedure
  1. Create a YAML file for a ClusterAutoscaler resource that contains the custom resource definition.

  2. Create the custom resource in the cluster by running the following command:

    $ oc create -f <filename>.yaml (1)
    1 <filename> is the name of the custom resource file.
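
    Optionally, confirm that the resource was created; a minimal check, assuming the default name from the sample definition:

    $ oc get clusterautoscaler default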

About the machine autoscaler

The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OKD cluster. You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler creates more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set that they target.

You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster.
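
After you create a MachineAutoscaler resource, you can inspect the annotations that it sets on the targeted machine set. The following is a minimal sketch; the machine set name is a placeholder and the values shown are illustrative:

$ oc get machineset <machineset_name> -n openshift-machine-api -o yaml

The output should contain annotations similar to the following, which the cluster autoscaler reads to determine the scaling range:

metadata:
  annotations:
    machine.openshift.io/cluster-api-autoscaler-node-group-min-size: "1"
    machine.openshift.io/cluster-api-autoscaler-node-group-max-size: "12"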

Machine autoscaler resource definition

This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler.

apiVersion: "autoscaling.openshift.io/v1beta1"
kind: "MachineAutoscaler"
metadata:
  name: "worker-us-east-1a" (1)
  namespace: "openshift-machine-api"
spec:
  minReplicas: 1 (2)
  maxReplicas: 12 (3)
  scaleTargetRef: (4)
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet (5)
    name: worker-us-east-1a (6)
1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region>.
2 Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, OpenStack, or vSphere, this value can be set to 0. For other providers, do not set this value to 0.

You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use.

Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OKD installation process for installer-provisioned infrastructure.

3 Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines.
4 In this section, provide values that describe the existing machine set to scale.
5 The kind parameter value is always MachineSet.
6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value.

Deploying a machine autoscaler

To deploy a machine autoscaler, you create an instance of the MachineAutoscaler resource.

Procedure
  1. Create a YAML file for a MachineAutoscaler resource that contains the custom resource definition.

  2. Create the custom resource in the cluster by running the following command:

    $ oc create -f <filename>.yaml (1)
    1 <filename> is the name of the custom resource file.

Configuring Linux cgroup

By default, OKD uses Linux control group version 2 (cgroup v2) in your cluster. You can switch to Linux control group version 1 (cgroup v1), if needed.

cgroup v2 is the next version of the kernel control group and offers multiple improvements. However, it can have some unwanted effects on your nodes.

You can configure which cgroup version your cluster uses by creating a MachineConfig object that sets the appropriate kernel argument, as shown in the following procedure. Enabling the other version of cgroup in OKD disables the current cgroup controllers and hierarchies in your cluster.

Prerequisites
  • Have administrative privilege to a working OKD cluster.

Procedure
  1. Create a MachineConfig object file that identifies the kernel argument (for example, worker-cgroup-v1.yaml)

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker (1)
      name: 99-worker-cgroup-v1 (2)
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
        - systemd.unified_cgroup_hierarchy=0 (3)
    1 Applies the new kernel argument only to worker nodes.
    2 Applies a name to the machine config.
    3 Configures cgroup v1 on the associated nodes.
  2. Create the new machine config:

    $ oc create -f worker-cgroup-v1.yaml
  3. Check to see that the new machine config was added:

    $ oc get MachineConfig
    Example output
    NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-worker-cgroup-v1                                                                           3.2.0             105s
    99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-master-ssh                                                                                 3.2.0             40m
    99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    99-worker-ssh                                                                                 3.2.0             40m
    rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
    rendered-master-c5e92d98103061c4818cfcefcf462770   60746a843e7ef8855ae00f2ffcb655c53e0e8296   3.2.0             115s
    rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
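
    You can also watch the change roll out through the worker machine config pool; a minimal check:

    $ oc get machineconfigpool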
  4. Check the nodes:

    $ oc get nodes
    Example output
    NAME                           STATUS                     ROLES    AGE   VERSION
    ip-10-0-136-161.ec2.internal   Ready                      worker   28m   v1.25.0
    ip-10-0-136-243.ec2.internal   Ready                      master   34m   v1.25.0
    ip-10-0-141-105.ec2.internal   Ready,SchedulingDisabled   worker   28m   v1.25.0
    ip-10-0-142-249.ec2.internal   Ready                      master   34m   v1.25.0
    ip-10-0-153-11.ec2.internal    Ready                      worker   28m   v1.25.0
    ip-10-0-153-150.ec2.internal   Ready                      master   34m   v1.25.0

    You can see that scheduling on each worker node is disabled as the change is being applied.

  5. Check that the file system type of /sys/fs/cgroup on the node has changed from cgroup2fs to tmpfs, which indicates that cgroup v1 is enabled:

    $ stat -c %T -f /sys/fs/cgroup
    Example output
    tmpfs
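
    If you are not already in a shell on the node, you can run the same check from a debug session; a minimal sketch, assuming a node name of <node_name>:

    $ oc debug node/<node_name> -- chroot /host stat -c %T -f /sys/fs/cgroup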

Enabling Technology Preview features using FeatureGates

You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the FeatureGate custom resource (CR).

Understanding feature gates

You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OKD features that are not enabled by default.

You can activate the following feature set by using the FeatureGate CR:

  • TechPreviewNoUpgrade. This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these tech preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Enabling this feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters.

    Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.

    The following Technology Preview features are enabled by this feature set:

    • Microsoft Azure File CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage.

    • CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for:

      • Amazon Web Services (AWS) Elastic Block Storage (EBS)

      • Google Compute Engine Persistent Disk

      • Azure File

      • VMware vSphere

    • Cluster Cloud Controller Manager Operator. Enables the Cluster Cloud Controller Manager Operator rather than the in-tree cloud controller. Available as a Technology Preview for:

      • Alibaba Cloud

      • Amazon Web Services (AWS)

      • Google Cloud Platform (GCP)

      • IBM Cloud

      • Microsoft Azure

      • OpenStack

      • VMware vSphere

    • Shared resource CSI driver

    • CSI volume support for the OKD build system

    • Swap memory on nodes

    • Cluster API. Enables the integrated upstream Cluster API in OKD with the ClusterAPIEnabled feature gate. Available as a Technology Preview for:

      • Amazon Web Services (AWS)

      • Google Cloud Platform (GCP)

    • Managing alerting rules for core platform monitoring

Enabling feature sets using the web console

You can use the OKD web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).

Procedure

To enable feature sets:

  1. In the OKD web console, switch to the Administration → Custom Resource Definitions page.

  2. On the Custom Resource Definitions page, click FeatureGate.

  3. On the Custom Resource Definition Details page, click the Instances tab.

  4. Click the cluster feature gate, then click the YAML tab.

  5. Edit the cluster instance to add specific feature sets:

    Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.

    Sample Feature Gate custom resource
    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster (1)
    # ...
    spec:
      featureSet: TechPreviewNoUpgrade (2)
    1 The name of the FeatureGate CR must be cluster.
    2 Add the feature set that you want to enable:
    • TechPreviewNoUpgrade enables specific Technology Preview features.

    After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.

Verification

You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.

  1. From the Administrator perspective in the web console, navigate to Compute → Nodes.

  2. Select a node.

  3. In the Node details page, click Terminal.

  4. In the terminal window, change your root directory to /host:

    sh-4.2# chroot /host
  5. View the kubelet.conf file:

    sh-4.2# cat /etc/kubernetes/kubelet.conf
    Sample output
    # ...
    featureGates:
      InsightsOperatorPullingSCA: true
      LegacyNodeRoleBehavior: false
    # ...

    The features that are listed as true are enabled on your cluster.

    The features listed vary depending upon the OKD version.

Enabling feature sets using the CLI

You can use the OpenShift CLI (oc) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR).

Prerequisites
  • You have installed the OpenShift CLI (oc).

Procedure

To enable feature sets:

  1. Edit the FeatureGate CR named cluster:

    $ oc edit featuregate cluster

    Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters.

    Sample FeatureGate custom resource
    apiVersion: config.openshift.io/v1
    kind: FeatureGate
    metadata:
      name: cluster (1)
    # ...
    spec:
      featureSet: TechPreviewNoUpgrade (2)
    1 The name of the FeatureGate CR must be cluster.
    2 Add the feature set that you want to enable:
    • TechPreviewNoUpgrade enables specific Technology Preview features.

    After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied.
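
    Instead of editing the CR interactively, you can set the feature set with a single command; the change is applied as soon as the patch succeeds. A minimal sketch, assuming the FeatureGate CR named cluster:

    $ oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'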

Verification

You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state.

  1. From the Administrator perspective in the web console, navigate to Compute → Nodes.

  2. Select a node.

  3. In the Node details page, click Terminal.

  4. In the terminal window, change your root directory to /host:

    sh-4.2# chroot /host
  5. View the kubelet.conf file:

    sh-4.2# cat /etc/kubernetes/kubelet.conf
    Sample output
    # ...
    featureGates:
      InsightsOperatorPullingSCA: true
      LegacyNodeRoleBehavior: false
    # ...

    The features that are listed as true are enabled on your cluster.

    The features listed vary depending upon the OKD version.

etcd tasks

Back up etcd, enable or disable etcd encryption, or defragment etcd data.

About etcd encryption

By default, etcd data is not encrypted in OKD. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.

When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:

  • Secrets

  • Config maps

  • Routes

  • OAuth access tokens

  • OAuth authorize tokens

When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys to restore from an etcd backup.

Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted.

If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a previous state of etcd from the respective etcd snapshot.

Enabling etcd encryption

You can enable etcd encryption to encrypt sensitive resources in your cluster.

Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted.

After you enable etcd encryption, several changes can occur:

  • The etcd encryption might affect the memory consumption of a few resources.

  • You might notice a transient effect on backup performance because the leader must serve the backup.

  • Disk I/O can increase on the node that receives the backup state.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

Procedure
  1. Modify the APIServer object:

    $ oc edit apiserver
  2. Set the encryption field type to aescbc:

    spec:
      encryption:
        type: aescbc (1)
    1 The aescbc type means that AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption.
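
    Alternatively, you can make the same change without opening an editor; the patch applies immediately. A minimal sketch, assuming the default APIServer object named cluster:

    $ oc patch apiserver cluster --type merge -p '{"spec":{"encryption":{"type":"aescbc"}}}'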
  3. Save the file to apply the changes.

    The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd encryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      EncryptionCompleted
      All resources encrypted: routes.route.openshift.io

      If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      EncryptionCompleted
      All resources encrypted: secrets, configmaps

      If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.

    3. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted:

      $ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      EncryptionCompleted
      All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io

      If the output shows EncryptionInProgress, encryption is still in progress. Wait a few minutes and try again.

Disabling etcd encryption

You can disable encryption of etcd data in your cluster.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

Procedure
  1. Modify the APIServer object:

    $ oc edit apiserver
  2. Set the encryption field type to identity:

    spec:
      encryption:
        type: identity (1)
    1 The identity type is the default value and means that no encryption is performed.
  3. Save the file to apply the changes.

    The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd decryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      DecryptionCompleted
      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      DecryptionCompleted
      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.

    3. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted:

      $ oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      DecryptionCompleted
      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, decryption is still in progress. Wait a few minutes and try again.

Backing up etcd data

Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.

Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have checked whether the cluster-wide proxy is enabled.

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
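
    For example, a minimal check that prints only the proxy spec fields:

    $ oc get proxy cluster -o jsonpath='{.spec}{"\n"}'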

Procedure
  1. Start a debug session for a control plane node:

    $ oc debug node/<node_name>
  2. Change your root directory to /host:

    sh-4.2# chroot /host
  3. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
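
    For example, a minimal sketch with placeholder values; substitute the values from your cluster proxy configuration:

    sh-4.4# export HTTP_PROXY=http://<proxy_host>:<port>
    sh-4.4# export HTTPS_PROXY=http://<proxy_host>:<port>
    sh-4.4# export NO_PROXY=<excluded_hosts>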

  4. Run the cluster-backup.sh script and pass in the location to save the backup to.

    The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.

    sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
    Example script output
    found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6
    found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
    found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
    found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3
    ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1
    etcdctl version: 3.4.14
    API version: 3.4
    {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"}
    {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
    {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
    {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
    {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459}
    {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"}
    Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db
    {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336}
    snapshot db and kube resources are successfully saved to /home/core/assets/backup

    In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:

    • snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.

    • static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

      If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.

      Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.

Defragmenting etcd data

For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.

Monitor these key metrics:

  • etcd_server_quota_backend_bytes, which is the current quota limit

  • etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction

  • etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation

Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.

History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

Defragmentation occurs automatically, but you can also trigger it manually.

Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user.

Automatic defragmentation

The etcd Operator automatically defragments disks. No manual intervention is needed.

Verify that the defragmentation process is successful by viewing one of these logs:

  • etcd logs

  • cluster-etcd-operator pod

  • operator status error log

Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.

Example log output for successful defragmentation
etcd member has been defragmented: <member_name>, memberID: <member_id>
Example log output for unsuccessful defragmentation
failed defrag on member: <member_name>, memberID: <member_id>: <error_message>

Manual defragmentation

A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:

  • When etcd uses more than 50% of its available space for more than 10 minutes

  • When etcd is actively using less than 50% of its total database size for more than 10 minutes

You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the following PromQL expression:

(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024

Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

Follow this procedure to defragment etcd data on each etcd member.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

Procedure
  1. Determine which etcd member is the leader, because the leader should be defragmented last.

    1. Get the list of etcd pods:

      $ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
      Example output
      etcd-ip-10-0-159-225.example.redhat.com                3/3     Running     0          175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>           <none>
      etcd-ip-10-0-191-37.example.redhat.com                 3/3     Running     0          173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>           <none>
      etcd-ip-10-0-199-170.example.redhat.com                3/3     Running     0          176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>           <none>
    2. Choose a pod and run the following command to determine which etcd member is the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
      Example output
      Defaulting container name to etcdctl.
      Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.

  2. Defragment an etcd member.

    1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
    2. Unset the ETCDCTL_ENDPOINTS environment variable:

      sh-4.4# unset ETCDCTL_ENDPOINTS
    3. Defragment the etcd member:

      sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
      Example output
      Finished defragmenting etcd member[https://localhost:2379]

      If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

    4. Verify that the database size was reduced:

      sh-4.4# etcdctl endpoint status -w table --cluster
      Example output
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        | (1)
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

    5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

      Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

  3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

    1. Check if there are any NOSPACE alarms:

      sh-4.4# etcdctl alarm list
      Example output
      memberID:12345678912345678912 alarm:NOSPACE
    2. Clear the alarms:

      sh-4.4# etcdctl alarm disarm

Restoring to a previous cluster state

You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • A healthy control plane host to use as the recovery host.

  • SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery control plane machines one by one.

Procedure
  1. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.

  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host.

    kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

  3. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.

  4. Stop the static pods on any other control plane nodes.

    You do not need to stop the static pods on the recovery host.

    1. Access a control plane host that is not the recovery host.

    2. Move the existing etcd pod file out of the kubelet manifest directory by running:

      $ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
    3. Verify that the etcd pods are stopped by using:

      $ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

      If the output of this command is not empty, wait a few minutes and check again.

    4. Move the existing kube-apiserver file out of the kubelet manifest directory by running:

      $ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
    5. Verify that the kube-apiserver containers are stopped by running:

      $ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"

      If the output of this command is not empty, wait a few minutes and check again.

    6. Move the existing kube-controller-manager file out of the kubelet manifest directory by using:

      $ sudo mv /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp
    7. Verify that the kube-controller-manager containers are stopped by running:

      $ sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard"

      If the output of this command is not empty, wait a few minutes and check again.

    8. Move the existing kube-scheduler file out of the kubelet manifest directory by using:

      $ sudo mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp
    9. Verify that the kube-scheduler containers are stopped by using:

      $ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard"

      If the output of this command is not empty, wait a few minutes and check again.

    10. Move the etcd data directory to a different location with the following example:

      $ sudo mv /var/lib/etcd/ /tmp
    11. If the /etc/kubernetes/manifests/keepalived.yaml file exists, follow these steps:

      1. Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory:

        $ sudo mv /etc/kubernetes/manifests/keepalived.yaml /tmp
      2. Verify that any containers managed by the keepalived daemon are stopped:

        $ sudo crictl ps --name keepalived

        The output of this command should be empty. If it is not empty, wait a few minutes and check again.

      3. Check if the control plane has any Virtual IPs (VIPs) assigned to it:

        $ ip -o address | egrep '<api_vip>|<ingress_vip>'
      4. For each reported VIP, run the following command to remove it:

        $ sudo ip address del <reported_vip> dev <reported_vip_device>
    12. Repeat this step on each of the other control plane hosts that is not the recovery host.

  5. Access the recovery control plane host.

  6. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP:

    $ ip -o address | grep <api_vip>

    The output includes the VIP address if it is assigned to the node. This command returns an empty string if the VIP is not set or is configured incorrectly.

  7. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.

  8. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

    $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
    Example script output
    ...stopping kube-scheduler-pod.yaml
    ...stopping kube-controller-manager-pod.yaml
    ...stopping etcd-pod.yaml
    ...stopping kube-apiserver-pod.yaml
    Waiting for container etcd to stop
    .complete
    Waiting for container etcdctl to stop
    .............................complete
    Waiting for container etcd-metrics to stop
    complete
    Waiting for container kube-controller-manager to stop
    complete
    Waiting for container kube-apiserver to stop
    ..........................................................................................complete
    Waiting for container kube-scheduler to stop
    complete
    Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
    starting restore-etcd static pod
    starting kube-apiserver-pod.yaml
    static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
    starting kube-controller-manager-pod.yaml
    static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
    starting kube-scheduler-pod.yaml
    static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml

    The cluster-restore.sh script must show that etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods are stopped and then started at the end of the restore process.

    The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.

  9. Check the nodes to ensure they are in the Ready state.

    1. Run the following command:

      $ oc get nodes -w
      Sample output
      NAME                STATUS  ROLES          AGE     VERSION
      host-172-25-75-28   Ready   master         3d20h   v1.24.0
      host-172-25-75-38   Ready   infra,worker   3d20h   v1.24.0
      host-172-25-75-40   Ready   master         3d20h   v1.24.0
      host-172-25-75-65   Ready   master         3d20h   v1.24.0
      host-172-25-75-74   Ready   infra,worker   3d20h   v1.24.0
      host-172-25-75-79   Ready   worker         3d20h   v1.24.0
      host-172-25-75-86   Ready   worker         3d20h   v1.24.0
      host-172-25-75-98   Ready   infra,worker   3d20h   v1.24.0

      It can take several minutes for all nodes to report their state.

    2. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.

      $  ssh -i <ssh-key-path> core@<master-hostname>
      Sample pki directory
      sh-4.4# pwd
      /var/lib/kubelet/pki
      sh-4.4# ls
      kubelet-client-2022-04-28-11-24-09.pem  kubelet-server-2022-04-28-11-24-15.pem
      kubelet-client-current.pem              kubelet-server-current.pem
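      For example, one way to remove the PEM files after you log in to a node is the following command. This is a sketch only; confirm that you are operating on the /var/lib/kubelet/pki directory before deleting files:

      sh-4.4# rm -f /var/lib/kubelet/pki/*.pem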
  10. Restart the kubelet service on all control plane hosts.

    1. From the recovery host, run:

      $ sudo systemctl restart kubelet.service
    2. Repeat this step on all other control plane hosts.

  11. Approve the pending Certificate Signing Requests (CSRs):

    Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.

    1. Get the list of current CSRs by running:

      $ oc get csr
      Example output
      NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
      csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
      csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
      csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
      csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
      ...
      1 A pending kubelet service CSR (for user-provisioned installations).
      2 A pending node-bootstrapper CSR.

  1. Review the details of a CSR to verify that it is valid by running:

    $ oc describe csr <csr_name> (1)
    1 <csr_name> is the name of a CSR from the list of current CSRs.
  2. Approve each valid node-bootstrapper CSR by running:

    $ oc adm certificate approve <csr_name>
  3. For user-provisioned installations, approve each valid kubelet service CSR by running:

    $ oc adm certificate approve <csr_name>
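    If many CSRs are pending, you can approve them in bulk. The following one-liner is a convenience sketch rather than part of the official procedure; review the output of oc get csr first, because it approves every CSR that has no status yet:

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve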
    1. Verify that the single member control plane has started successfully.

  4. From the recovery host, verify that the etcd container is running by using:

    $ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
    Example output
    3ad41b7908e32       36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009                                                         About a minute ago   Running             etcd                                          0                   7c05f8af362f0
  5. From the recovery host, verify that the etcd pod is running by using:

    $ oc -n openshift-etcd get pods -l k8s-app=etcd
    Example output
    NAME                                             READY   STATUS      RESTARTS   AGE
    etcd-ip-10-0-143-125.ec2.internal                1/1     Running     1          2m47s

    If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.

    1. If you are using the OVN-Kubernetes network plugin, you must restart the ovnkube-control-plane pods.

  6. Delete all of the ovnkube-control-plane pods by running:

    $ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane
  7. Verify that all of the ovnkube-control-plane pods were redeployed by using:

    $ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane
    1. Verify that the Cluster Network Operator (CNO) redeploys the OVN-Kubernetes control plane and that it no longer references the non-recovery controller IP addresses. To verify this result, regularly check the output of the following command. Wait until it returns an empty result before you proceed to restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts in the next step.

      $ oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'

      It can take at least 5-10 minutes for the OVN-Kubernetes control plane to be redeployed and the previous command to return empty output.

    2. Restart the Open Virtual Network (OVN) Kubernetes pods on all the hosts.

      Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.

      Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail.
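      For example, you can list the webhook configurations, save one to a file before deleting it, or patch its failurePolicy. The webhook configuration name and the index in the patch path are placeholders in this sketch:

      $ oc get validatingwebhookconfigurations,mutatingwebhookconfigurations
      $ oc get validatingwebhookconfiguration <webhook_configuration_name> -o yaml > webhook-backup.yaml
      $ oc patch validatingwebhookconfiguration <webhook_configuration_name> --type=json -p '[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'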

  8. Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run:

    $ sudo rm -f /var/lib/ovn/etc/*.db
  9. Delete all OVN-Kubernetes control plane pods by running the following command:

    $ oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes
  10. Ensure that any OVN-Kubernetes control plane pods are deployed again and are in a Running state by running the following command:

    $ oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes
    Example output
    NAME                   READY   STATUS    RESTARTS   AGE
    ovnkube-master-nb24h   4/4     Running   0          48s
  11. Delete all of the ovnkube-node pods so that they are re-created, by running the following command:

    $ oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete $p -n openshift-ovn-kubernetes ; done
  12. Ensure that all the ovnkube-node pods are deployed again and are in a Running state by running the following command:

    $ oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node
    1. Delete and re-create the other non-recovery control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.

      • If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".

        Do not delete and re-create the machine for the recovery host.

      • If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:

        Do not delete and re-create the machine for the recovery host.

        For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".

  13. Obtain the machine for one of the lost control plane hosts.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc get machines -n openshift-machine-api -o wide

    Example output:

    NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
    clustername-8qw5l-master-0                  Running   m4.xlarge   us-east-1   us-east-1a   3h37m   ip-10-0-131-183.ec2.internal   aws:///us-east-1a/i-0ec2782f8287dfb7e   stopped (1)
    clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
    clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba  running
    clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
    clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
    clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
    1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.
  14. Save the machine configuration to a file on your file system by running:

    $ oc get machine clustername-8qw5l-master-0 \ (1)
        -n openshift-machine-api \
        -o yaml \
        > new-master-machine.yaml
    1 Specify the name of the control plane machine for the lost control plane host.
  15. Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.

    1. Remove the entire status section:

      status:
        addresses:
        - address: 10.0.131.183
          type: InternalIP
        - address: ip-10-0-131-183.ec2.internal
          type: InternalDNS
        - address: ip-10-0-131-183.ec2.internal
          type: Hostname
        lastUpdated: "2020-04-20T17:44:29Z"
        nodeRef:
          kind: Node
          name: ip-10-0-131-183.ec2.internal
          uid: acca4411-af0d-4387-b73e-52b2484295ad
        phase: Running
        providerStatus:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          conditions:
          - lastProbeTime: "2020-04-20T16:53:50Z"
            lastTransitionTime: "2020-04-20T16:53:50Z"
            message: machine successfully created
            reason: MachineCreationSucceeded
            status: "True"
            type: MachineCreation
          instanceId: i-0fdb85790d76d0c3f
          instanceState: stopped
          kind: AWSMachineProviderStatus
    2. Change the metadata.name field to a new name.

      It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3:

      apiVersion: machine.openshift.io/v1beta1
      kind: Machine
      metadata:
        ...
        name: clustername-8qw5l-master-3
        ...
    3. Remove the spec.providerID field:

      providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
    4. Remove the metadata.annotations and metadata.generation fields:

      annotations:
        machine.openshift.io/instance-state: running
      ...
      generation: 2
    5. Remove the metadata.resourceVersion and metadata.uid fields:

      resourceVersion: "13291"
      uid: a282eb70-40a2-4e89-8009-d05dd420d31a
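    If you prefer to script these edits instead of editing the file by hand, a jq sketch similar to the following produces an equivalent manifest. It assumes that jq is installed on your workstation and reuses the machine names from this example; because JSON is valid YAML, the resulting file can still be used with oc apply in a later step:

    $ oc get machine clustername-8qw5l-master-0 -n openshift-machine-api -o json \
        | jq 'del(.status, .spec.providerID, .metadata.annotations, .metadata.generation, .metadata.resourceVersion, .metadata.uid) | .metadata.name = "clustername-8qw5l-master-3"' \
        > new-master-machine.yaml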
  16. Delete the machine of the lost control plane host by running:

    $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
    1 Specify the name of the control plane machine for the lost control plane host.
  17. Verify that the machine was deleted by running:

    $ oc get machines -n openshift-machine-api -o wide

    Example output:

    NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
    clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
    clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal   aws:///us-east-1c/i-02626f1dba9ed5bba  running
    clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
    clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
    clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
  18. Create a machine by using the new-master-machine.yaml file by running:

    $ oc apply -f new-master-machine.yaml
  19. Verify that the new machine has been created by running:

    $ oc get machines -n openshift-machine-api -o wide

    Example output:

    NAME                                        PHASE          TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
    clustername-8qw5l-master-1                  Running        m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
    clustername-8qw5l-master-2                  Running        m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba  running
    clustername-8qw5l-master-3                  Provisioning   m4.xlarge   us-east-1   us-east-1a   85s     ip-10-0-173-171.ec2.internal    aws:///us-east-1a/i-015b0888fe17bc2c8  running (1)
    clustername-8qw5l-worker-us-east-1a-wbtgd   Running        m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
    clustername-8qw5l-worker-us-east-1b-lrdxb   Running        m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
    clustername-8qw5l-worker-us-east-1c-pkg26   Running        m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
    1 The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

    It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.

  20. Repeat these steps for each lost control plane host that is not the recovery host.

    1. Turn off the quorum guard by entering:

      $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

      This command ensures that you can successfully re-create secrets and roll out the static pods.

    2. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running:

      $ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig
    3. Force etcd redeployment.

      In the same terminal window where you exported the recovery kubeconfig file, run:

      $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
      1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

      When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

    4. Turn the quorum guard back on by entering:

      $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
    5. You can verify that the unsupportedConfigOverrides section is removed from the object by running:

      $ oc get etcd/cluster -oyaml
    6. Verify all nodes are updated to the latest revision.

      In a terminal that has access to the cluster as a cluster-admin user, run:

      $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 (1)
      
      1 In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    7. After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

      In a terminal that has access to the cluster as a cluster-admin user, run the commands in the following steps.

  21. Force a new rollout for kube-apiserver:

    $ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

    Verify all nodes are updated to the latest revision.

    $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    AllNodesAtLatestRevision
    3 nodes are at revision 7 (1)
    
    1 In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

  22. Force a new rollout for the Kubernetes controller manager by running the following command:

    $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

    Verify all nodes are updated to the latest revision by running:

    $ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    AllNodesAtLatestRevision
    3 nodes are at revision 7 (1)
    
    1 In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

  23. Force a new rollout for the kube-scheduler by running:

    $ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

    Verify all nodes are updated to the latest revision by using:

    $ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    AllNodesAtLatestRevision
    3 nodes are at revision 7 (1)
    
    1 In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    1. Verify that all control plane hosts have started and joined the cluster.

      In a terminal that has access to the cluster as a cluster-admin user, run the following command:

      $ oc -n openshift-etcd get pods -l k8s-app=etcd
      Example output
      etcd-ip-10-0-143-125.ec2.internal                2/2     Running     0          9h
      etcd-ip-10-0-154-194.ec2.internal                2/2     Running     0          9h
      etcd-ip-10-0-173-171.ec2.internal                2/2     Running     0          9h

To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores kube-apiserver information. This includes OKD components such as routers, Operators, and third-party components.

On completion of the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
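For example, one way to restart the OAuth server pods is to delete them in the openshift-authentication namespace so that their deployment re-creates them. This is a sketch only, not a required step:

$ oc delete pods -n openshift-authentication --all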

Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates rather than OAuth tokens. You can authenticate with this file by issuing the following command:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Issue the following command to display your authenticated user name:

$ oc whoami

Issues and workarounds for restoring a persistent storage state

If your OKD cluster uses persistent storage of any form, some of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.

The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.

The following are some example scenarios that produce an out-of-date status:

  • MySQL database is running in a pod backed up by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.

  • Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.

  • Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.

  • A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

    To fix this problem, an administrator must:

    1. Manually remove the PVs with invalid devices.

    2. Remove symlinks from respective nodes.

    3. Delete the LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).

Pod disruption budgets

Understand and configure pod disruption budgets.

Understanding how to use pod disruption budgets to specify the number of pods that must be up

A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance.

PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures).

A PodDisruptionBudget object’s configuration consists of the following key parts:

  • A label selector, which is a label query over a set of pods.

  • An availability level, which specifies the minimum number of pods that must be available simultaneously, either:

    • minAvailable is the number of pods that must always be available, even during a disruption.

    • maxUnavailable is the number of pods that can be unavailable during a disruption.

Available refers to the number of pods that have the condition Ready=True. Ready=True means that the pod is able to serve requests and should be added to the load balancing pools of all matching services.

A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained.

You can check for pod disruption budgets across all projects with the following:

$ oc get poddisruptionbudget --all-namespaces
Example output
NAMESPACE                              NAME                                    MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
openshift-apiserver                    openshift-apiserver-pdb                 N/A             1                 1                     121m
openshift-cloud-controller-manager     aws-cloud-controller-manager            1               N/A               1                     125m
openshift-cloud-credential-operator    pod-identity-webhook                    1               N/A               1                     117m
openshift-cluster-csi-drivers          aws-ebs-csi-driver-controller-pdb       N/A             1                 1                     121m
openshift-cluster-storage-operator     csi-snapshot-controller-pdb             N/A             1                 1                     122m
openshift-cluster-storage-operator     csi-snapshot-webhook-pdb                N/A             1                 1                     122m
openshift-console                      console                                 N/A             1                 1                     116m
#...

The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted.

Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements.

Specifying the number of pods that must be up with pod disruption budgets

You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time.

Procedure

To configure a pod disruption budget:

  1. Create a YAML file with an object definition similar to one of the following:

    apiVersion: policy/v1 (1)
    kind: PodDisruptionBudget
    metadata:
      name: my-pdb
    spec:
      minAvailable: 2  (2)
      selector:  (3)
        matchLabels:
          name: my-pod
    1 PodDisruptionBudget is part of the policy/v1 API group.
    2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
    3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector: {}, to select all pods in the project.

    Or:

    apiVersion: policy/v1 (1)
    kind: PodDisruptionBudget
    metadata:
      name: my-pdb
    spec:
      maxUnavailable: 25% (2)
      selector: (3)
        matchLabels:
          name: my-pod
    1 PodDisruptionBudget is part of the policy/v1 API group.
    2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20%.
    3 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector: {}, to select all pods in the project.
  2. Run the following command to add the object to the project:

    $ oc create -f </path/to/file> -n <project_name>

Rotating or removing cloud provider credentials

After installing OKD, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.

To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

Rotating cloud provider credentials with the Cloud Credential Operator utility

The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud.

Rotating API keys for IBM Cloud

You can rotate API keys for your existing service IDs and update the corresponding secrets.

Prerequisites
  • You have configured the ccoctl binary.

  • You have existing service IDs in a live OKD cluster installed on IBM Cloud.

Procedure
  • Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets:

    $ ccoctl ibmcloud refresh-keys \
        --kubeconfig <openshift_kubeconfig_file> \ (1)
        --credentials-requests-dir <path_to_credential_requests_directory> \ (2)
        --name <name> (3)
    
    1 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig.
    2 The directory where the credential requests are stored.
    3 The name of the OKD cluster.

    If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
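    For example, the same invocation with Technology Preview support enabled looks like the following. The placeholder values are the same assumptions as in the previous example:

    $ ccoctl ibmcloud refresh-keys \
        --kubeconfig <openshift_kubeconfig_file> \
        --credentials-requests-dir <path_to_credential_requests_directory> \
        --name <name> \
        --enable-tech-preview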

Rotating cloud provider credentials manually

If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials.

The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential.

Prerequisites
  • Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using:

    • For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported.

    • For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), OpenStack, oVirt, and VMware vSphere are supported.

  • You have changed the credentials that are used to interface with your cloud provider.

  • The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster.

Procedure
  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.

  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform Secret name

    AWS

    aws-creds

    Azure

    azure-credentials

    GCP

    gcp-credentials

    OpenStack

    openstack-credentials

    oVirt

    ovirt-credentials

    VMware vSphere

    vsphere-creds

  3. Click the Options menu in the same row as the secret and select Edit Secret.

  4. Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials.

  5. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save.

  6. If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials.

    If the vSphere CSI Driver Operator is enabled, this step is not required.

    To apply the updated vSphere credentials, log in to the OKD CLI as a user with the cluster-admin role and run the following command:

    $ oc patch kubecontrollermanager cluster \
      -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date )"'"}}' \
      --type=merge

    While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true. To view the status, run the following command:

    $ oc get co kube-controller-manager
  7. If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects.

    1. Log in to the OKD CLI as a user with the cluster-admin role.

    2. Get the names and namespaces of all referenced component secrets:

      $ oc -n openshift-cloud-credential-operator get CredentialsRequest \
        -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef'

      where <provider_spec> is the corresponding value for your cloud provider:

      • AWS: AWSProviderSpec

      • GCP: GCPProviderSpec

      Partial example output for AWS
      {
        "name": "ebs-cloud-credentials",
        "namespace": "openshift-cluster-csi-drivers"
      }
      {
        "name": "cloud-credential-operator-iam-ro-creds",
        "namespace": "openshift-cloud-credential-operator"
      }
    3. Delete each of the referenced component secrets:

      $ oc delete secret <secret_name> \(1)
        -n <secret_namespace> (2)
      1 Specify the name of a secret.
      2 Specify the namespace that contains the secret.
      Example deletion of an AWS secret
      $ oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers

      You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones.

Verification

To verify that the credentials have changed:

  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.

  2. Verify that the contents of the Value field or fields have changed.

Additional resources

Removing cloud provider credentials

After installing an OKD cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades.

Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

Prerequisites
  • Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP.

Procedure
  1. In the Administrator perspective of the web console, navigate to Workloads → Secrets.

  2. In the table on the Secrets page, find the root secret for your cloud provider.

    Platform Secret name

    AWS

    aws-creds

    GCP

    gcp-credentials

  3. Click the Options menu in the same row as the secret and select Delete Secret.
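Alternatively, you can delete the root secret from the CLI. The following is a sketch of an equivalent command; the secret name depends on your platform, for example aws-creds on AWS or gcp-credentials on GCP:

$ oc delete secret aws-creds -n kube-system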

Configuring image streams for a disconnected cluster

After installing OKD in a disconnected environment, configure the image streams for the Cluster Samples Operator and the must-gather image stream.

Cluster Samples Operator assistance for mirroring

During installation, OKD creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.

The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.

During a disconnected installation of OKD, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.

The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take to allow the Cluster Samples Operator's objects to reach the services that they require.

You can use this config map as a reference for which images need to be mirrored for your image streams to import.
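For example, you can inspect the config map to see the populating image for each image stream tag. The specific key mentioned here is illustrative only; the actual keys depend on the samples available to your cluster:

$ oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml

A key such as httpd_2.4 would map to the image that populates the httpd:2.4 image stream tag.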

  • While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.

  • Mirror the samples you want to the mirrored registry using the new config map as your guide.

  • Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.

  • Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.

  • Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored.

Using Cluster Samples Operator image streams with alternate or mirrored registries

Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io. Mirroring will not apply to these image streams.

The cli, installer, must-gather, and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure.

The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you must have a mirrored registry.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

  • Create a pull secret for your mirror registry.

Procedure
  1. Access the images of a specific image stream to mirror, for example:

    $ oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io
  2. Mirror images from registry.redhat.io associated with any image streams you need in the restricted network environment into one of the defined mirrors, for example:

    $ oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest ${MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest
  3. Create the cluster’s image configuration object:

    $ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
  4. Add the required trusted CAs for the mirror in the cluster’s image configuration object:

    $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
  5. Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration:

    $ oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator

    This is required because the image stream import process does not use the mirror or search mechanism at this time.

  6. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object.

    The Cluster Samples Operator issues alerts if image stream imports are failing, whether the Operator is still periodically retrying them or does not appear to be retrying them at all.

    Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams.
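    For example, one way to update these fields non-interactively is with oc patch. This is a sketch only; the skipped image stream name is a placeholder and the registry hostname must match your mirror:

    $ oc patch configs.samples.operator.openshift.io cluster --type=merge \
        -p '{"spec":{"samplesRegistry":"<mirror_registry_hostname>","skippedImagestreams":["<image_stream_name>"]}}'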

Preparing your cluster to gather support data

Clusters using a restricted network must import the default must-gather image to gather debugging data for Red Hat support. The must-gather image is not imported by default, and clusters on a restricted network do not have access to the internet to pull the latest image from a remote repository.

Procedure
  1. If you have not added your mirror registry’s trusted CA to your cluster’s image configuration object as part of the Cluster Samples Operator configuration, perform the following steps:

    1. Create the cluster’s image configuration object:

      $ oc create configmap registry-config --from-file=${MIRROR_ADDR_HOSTNAME}..5000=$path/ca.crt -n openshift-config
    2. Add the required trusted CAs for the mirror in the cluster’s image configuration object:

      $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge
  2. Import the default must-gather image from your installation payload:

    $ oc import-image is/must-gather -n openshift

When running the oc adm must-gather command, use the --image flag and point to the payload image, as in the following example:

$ oc adm must-gather --image=$(oc adm release info --image-for must-gather)

Configuring periodic importing of Cluster Samples Operator image stream tags

You can ensure that you always have access to the latest versions of the Cluster Samples Operator images by periodically importing the image stream tags when new versions become available.

Procedure
  1. Fetch all the imagestreams in the openshift namespace by running the following command:

    $ oc get imagestreams -nopenshift
  2. Fetch the tags for every imagestream in the openshift namespace by running the following command:

    $ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift

    For example:

    $ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
    Example output
    1.11	registry.access.redhat.com/ubi8/openjdk-17:1.11
    1.12	registry.access.redhat.com/ubi8/openjdk-17:1.12
  3. Schedule periodic importing of images for each tag present in the image stream by running the following command:

    $ oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift

    For example:

    $ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift
    $ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift

    This command causes OKD to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default.

  4. Verify the scheduling status of the periodic import by running the following command:

    $ oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift

    For example:

    $ oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -nopenshift
    Example output
    Tag: 1.11	Scheduled: true
    Tag: 1.12	Scheduled: true