Disaster recovery

The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OKD cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.

Disaster recovery requires you to have at least one healthy control plane host.

Quorum restoration

You can use the quorum-restore.sh script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the OKD API becomes read-only. After quorum is restored, the OKD API returns to read/write mode.

Restoring etcd quorum for high availability clusters

You can use the quorum-restore.sh script to restore etcd quorum on clusters that are offline due to quorum loss. When quorum is lost, the OKD API becomes read-only. After quorum is restored, the OKD API returns to read/write mode.

The quorum-restore.sh script instantly brings back a new single-member etcd cluster based on its local data directory and marks all other members as invalid by retiring the previous cluster identifier. No prior backup is required to restore the control plane.

For high availability (HA) clusters, quorum requires a simple majority of members: the minimum is two members for a three-node HA cluster and three members for four-node and five-node HA clusters. To avoid a cluster split, you must therefore shut down etcd on two hosts of a three-node HA cluster, or on three hosts of a four-node or five-node HA cluster. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service.
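As a quick reference, the quorum size for an n-member etcd cluster is floor(n/2) + 1. The following shell sketch (illustrative only, not part of any restore procedure) prints the quorum size for common control plane sizes:

$ for n in 3 4 5; do echo "members=$n quorum=$(( n / 2 + 1 ))"; done
Example output
members=3 quorum=2
members=4 quorum=3
members=5 quorum=3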

You might experience data loss if the host that runs the restoration does not have all data replicated to it.

Quorum restoration should not be used to decrease the number of nodes outside of the restoration process. Decreasing the number of nodes results in an unsupported cluster configuration.

Prerequisites
  • You have SSH access to the node used to restore quorum.

Procedure
  1. Select a control plane host to use as the recovery host. You run the restore operation on this host.

    1. List the running etcd pods by running the following command:

      $ oc get pods -n openshift-etcd -l app=etcd --field-selector="status.phase==Running"
    2. Choose a pod and run the following command to obtain its IP address:

      $ oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl endpoint status -w table

      Note the IP address of a member that is not a learner and has the highest Raft index.

    3. Run the following command and note the node name that corresponds to the IP address of the chosen etcd member:

      $ oc get nodes -o jsonpath='{range .items[*]}[{.metadata.name},{.status.addresses[?(@.type=="InternalIP")].address}]{end}'
  2. Using SSH, connect to the chosen recovery node and run the following command to restore etcd quorum:

    $ sudo -E /usr/local/bin/quorum-restore.sh

    After a few minutes, the nodes that went down are automatically synchronized with the node on which you ran the recovery script, and any remaining online nodes automatically rejoin the new etcd cluster created by the quorum-restore.sh script.

  3. Exit the SSH session.

  4. Return to a three-node configuration if any nodes are offline by repeating the following steps for each offline node to delete and re-create it. After the machines are re-created, a new revision is forced and etcd automatically scales up.

    • If you use a user-provisioned bare-metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".

      Do not delete and re-create the machine for the recovery host.

    • If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:

      Do not delete and re-create the machine for the recovery host.

      For bare-metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".

      1. Obtain the machine for one of the offline nodes.

        In a terminal that has access to the cluster as a cluster-admin user, run the following command:

        $ oc get machines -n openshift-machine-api -o wide
        Example output:
        NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
        clustername-8qw5l-master-0                  Running   m4.xlarge   us-east-1   us-east-1a   3h37m   ip-10-0-131-183.ec2.internal   aws:///us-east-1a/i-0ec2782f8287dfb7e   stopped (1)
        clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
        clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba  running
        clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
        clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
        clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
        1 This is the control plane machine for the offline node, ip-10-0-131-183.ec2.internal.
      2. Delete the machine of the offline node by running:

        $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
        1 Specify the name of the control plane machine for the offline node.

        A new machine is automatically provisioned after deleting the machine of the offline node.

  5. Verify that a new machine has been created by running:

    $ oc get machines -n openshift-machine-api -o wide
    Example output:
    NAME                                        PHASE          TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
    clustername-8qw5l-master-1                  Running        m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-143-125.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
    clustername-8qw5l-master-2                  Running        m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-154-194.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba  running
    clustername-8qw5l-master-3                  Provisioning   m4.xlarge   us-east-1   us-east-1a   85s     ip-10-0-173-171.ec2.internal    aws:///us-east-1a/i-015b0888fe17bc2c8  running (1)
    clustername-8qw5l-worker-us-east-1a-wbtgd   Running        m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
    clustername-8qw5l-worker-us-east-1b-lrdxb   Running        m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
    clustername-8qw5l-worker-us-east-1c-pkg26   Running        m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
    1 The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

    It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically synchronize when the machine or node returns to a healthy state.

    1. Repeat these steps for each node that is offline.

  6. Wait until the control plane recovers by running the following command:

    $ oc adm wait-for-stable-cluster

    It can take up to 15 minutes for the control plane to recover.
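    After the control plane recovers, you can optionally confirm that all etcd members have rejoined by listing the members from any running etcd pod. This is a supplementary check that mirrors the endpoint status command earlier in this procedure; <etcd-pod> is a placeholder for any running etcd pod name:

    $ oc exec -n openshift-etcd <etcd-pod> -c etcdctl -- etcdctl member list -w table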

Troubleshooting
  • If you see no progress rolling out the etcd static pods, you can force redeployment from the etcd cluster Operator by running the following command:

    $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
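    After forcing a redeployment, you can optionally watch the etcd static pods roll out by using the same label selector as earlier in this procedure (press Ctrl+C to stop watching):

    $ oc get pods -n openshift-etcd -l app=etcd -w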

If you have a majority of your control plane nodes still available and have an etcd quorum, replace a single unhealthy etcd member.

Restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up the etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".

If applicable, you might also need to recover from expired control plane certificates.

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort.

Before performing a restore, see "About restoring to a previous cluster state" for more information on the impact to the cluster.

About restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up the etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".

You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:

  • The cluster has lost the majority of control plane hosts (quorum loss).

  • An administrator has deleted something critical and must restore to recover the cluster.

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort.

If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.

Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OKD Operators, including the network Operator.

When the content in etcd does not match the actual content on disk, it can cause Operator churn: the Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd can get stuck, and manual actions might be required to resolve the issues.

In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.

Restoring to a previous cluster state for a single node

You can use a saved etcd backup to restore a previous cluster state on a single node.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.2 cluster must use an etcd backup that was taken from 4.2.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • You have SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
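    Before starting, you can list the backup directory to confirm that it contains both files in the expected name formats (the example shows the placeholder names from the prerequisite):

    $ ls <etcd_backup_directory>
    snapshot_<datetimestamp>.db  static_kuberesources_<datetimestamp>.tar.gz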

Procedure
  1. Use SSH to connect to the single node and copy the etcd backup to the /home/core directory by running the following command:

    $ cp -r <etcd_backup_directory> /home/core
  2. Run the following command on the single node to restore the cluster from a previous backup:

    $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd_backup_directory>
  3. Exit the SSH session.

  4. Monitor the recovery progress of the control plane by running the following command:

    $ oc adm wait-for-stable-cluster

    It can take up to 15 minutes for the control plane to recover.

Restoring to a previous cluster state for more than one node

You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

For high availability (HA) clusters, quorum requires a simple majority of members: the minimum is two members for a three-node HA cluster and three members for four-node and five-node HA clusters. To avoid a cluster split, you must therefore shut down etcd on two hosts of a three-node HA cluster, or on three hosts of a four-node or five-node HA cluster. If you start a new cluster from backup on your recovery host, the other etcd members might still be able to form quorum and continue service.

If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. For OKD on a single node, see "Restoring to a previous cluster state for a single node".

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.2 cluster must use an etcd backup that was taken from 4.2.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • A healthy control plane host to use as the recovery host.

  • You have SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and re-create the other non-recovery control plane machines one by one.

Procedure
  1. Select a control plane host to use as the recovery host. This is the host that you run the restore operation on.

  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host.

    kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

  3. Using SSH, connect to each control plane node and run the following command to disable etcd:

    $ sudo -E /usr/local/bin/disable-etcd.sh
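    Optionally, confirm on each node that the etcd containers have stopped. This check mirrors the one used later in the manual restore procedure, and its output should be empty once etcd has stopped:

    $ sudo crictl ps | grep etcd | grep -E -v "operator|guard"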
  4. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
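    For example, from the machine where the backup is stored, you might copy the directory with scp; the directory name and host name here are placeholders for your environment:

    $ scp -r <etcd_backup_directory> core@<recovery_host>:/home/core/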

  5. Use SSH to connect to the recovery host and restore the cluster from a previous backup by running the following command:

    $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory>
  6. Exit the SSH session.

  7. After the API responds, turn off the etcd Operator quorum guard by running the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
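    You can optionally confirm that the override was applied; the output should show useUnsupportedUnsafeNonHANonProductionUnstableEtcd set to true:

    $ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'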
  8. Monitor the recovery progress of the control plane by running the following command:

    $ oc adm wait-for-stable-cluster

    It can take up to 15 minutes for the control plane to recover.

  9. After the cluster recovers, re-enable the quorum guard by running the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
Troubleshooting

If you see no progress rolling out the etcd static pods, you can force redeployment from the cluster-etcd-operator by running the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge

Restoring a cluster manually from an etcd backup

The restore procedure described in the section "Restoring to a previous cluster state":

  • Requires the complete re-creation of two control plane nodes, which might be a complex procedure for clusters installed with the user-provisioned infrastructure (UPI) installation method, because a UPI installation does not create any Machine or ControlPlaneMachineSet resources for the control plane nodes.

  • Uses the script /usr/local/bin/cluster-restore.sh, which starts a new single-member etcd cluster and then scales it to three members.

In contrast, this procedure:

  • Does not require recreating any control plane nodes.

  • Directly starts a three-member etcd cluster.

If the cluster uses a MachineSet for the control plane, it is suggested that you use the procedure in "Restoring to a previous cluster state" for a simpler etcd recovery.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role; for example, the kubeadmin user.

  • SSH access to all control plane hosts, with a host user allowed to become root; for example, the default core host user.

  • A backup directory containing both a previous etcd snapshot and the resources for the static pods from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

Procedure
  1. Use SSH to connect to each of the control plane nodes.

    The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended that you open an SSH connection to each control plane host you are accessing in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

  2. Copy the etcd backup directory to each control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/assets directory of each control plane host. You might need to create the assets directory if it does not already exist.

  3. Stop the static pods on all of the control plane nodes, one host at a time.

    1. Move the existing Kubernetes API Server static pod manifest out of the kubelet manifest directory.

      $ mkdir -p /root/manifests-backup
      $ mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /root/manifests-backup/
    2. Verify that the Kubernetes API Server containers have stopped with the command:

      $ crictl ps | grep kube-apiserver | grep -E -v "operator|guard"

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    3. If the Kubernetes API Server containers are still running, terminate them manually with the following command:

      $ crictl stop <container_id>
    4. Repeat the same steps for kube-controller-manager-pod.yaml, kube-scheduler-pod.yaml and finally etcd-pod.yaml.

      1. Stop the kube-controller-manager pod with the following command:

        $ mv /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /root/manifests-backup/
      2. Check if the containers are stopped using the following command:

        $ crictl ps | grep kube-controller-manager | grep -E -v "operator|guard"
      3. Stop the kube-scheduler pod using the following command:

        $ mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /root/manifests-backup/
      4. Check if the containers are stopped using the following command:

        $ crictl ps | grep kube-scheduler | grep -E -v "operator|guard"
      5. Stop the etcd pod using the following command:

        $ mv /etc/kubernetes/manifests/etcd-pod.yaml /root/manifests-backup/
      6. Check if the containers are stopped using the following command:

        $ crictl ps | grep etcd | grep -E -v "operator|guard"
  4. On each control plane host, save the current etcd data by moving it into the backup folder:

    $ mkdir /home/core/assets/old-member-data
    $ mv /var/lib/etcd/member /home/core/assets/old-member-data

    This data will be useful in case the etcd backup restore does not work and the etcd cluster must be restored to the current state.

  5. Find the correct etcd parameters for each control plane host.

    1. The value for <ETCD_NAME> is unique for each control plane host and is equal to the value of the ETCD_NAME variable in the /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml manifest on that control plane host. It can be found with the following command:

      RESTORE_ETCD_POD_YAML="/etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml"
      cat $RESTORE_ETCD_POD_YAML | \
        grep -A 1 $(cat $RESTORE_ETCD_POD_YAML | grep 'export ETCD_NAME' | grep -Eo 'NODE_.+_ETCD_NAME') | \
        grep -Po '(?<=value: ").+(?=")'
    2. The value for <UUID> can be generated on a control plane host with the following command:

      $ uuidgen

      The value for <UUID> must be generated only once. After generating a UUID on one control plane host, do not generate it again on the others; the same UUID is used in the next steps on all control plane hosts.

    3. The value for ETCD_NODE_PEER_URL should be set like the following example:

      https://<IP_CURRENT_HOST>:2380

      You can find the correct IP address from the <ETCD_NAME> of the specific control plane host with the following command:

      $ echo <ETCD_NAME> | \
        sed -E 's/[.-]/_/g' | \
        xargs -I {} grep {} /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/etcd-scripts/etcd.env | \
        grep "IP" | grep -Po '(?<=").+(?=")'
    4. The value for <ETCD_INITIAL_CLUSTER> should be set like the following, where <ETCD_NAME_n> is the <ETCD_NAME> of each control plane host.

      The port used must be 2380, not 2379. Port 2379 is used for etcd client traffic (database management operations) and is configured directly in the etcd start command in the container.

      Example output
      <ETCD_NAME_0>=<ETCD_NODE_PEER_URL_0>,<ETCD_NAME_1>=<ETCD_NODE_PEER_URL_1>,<ETCD_NAME_2>=<ETCD_NODE_PEER_URL_2> (1)
      1 Specifies the ETCD_NODE_PEER_URL values from each control plane host.

      The <ETCD_INITIAL_CLUSTER> value remains the same across all control plane hosts; the same value is required in the next steps on every control plane host.
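      For example, with three control plane hosts whose <ETCD_NAME> values are master-0, master-1, and master-2 (hypothetical names) and whose peer IP addresses are 10.0.91.5, 10.0.90.221, and 10.0.94.214 (the addresses used in the example logs later in this procedure), the value would be:

      master-0=https://10.0.91.5:2380,master-1=https://10.0.90.221:2380,master-2=https://10.0.94.214:2380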

  6. Regenerate the etcd database from the backup.

    This operation must be executed on each control plane host.

    1. Copy the etcd backup to the /var/lib/etcd directory with the following command:

      $ cp /home/core/assets/backup/<snapshot_yyyy-mm-dd_hhmmss>.db /var/lib/etcd
    2. Identify the correct etcdctl image before proceeding. Retrieve the image from the backup of the pod manifest, and then start an etcdctl container with that image, by running the following commands:

      $ jq -r '.spec.containers[]|select(.name=="etcdctl")|.image' /root/manifests-backup/etcd-pod.yaml
      $ podman run --rm -it --entrypoint="/bin/bash" -v /var/lib/etcd:/var/lib/etcd:z <image-hash>
    3. Check that the version of the etcdctl tool matches the version of the etcd server where the backup was created:

      $ etcdctl version
    4. Run the following command to regenerate the etcd database, using the correct values for the current host:

      $ ETCDCTL_API=3 /usr/bin/etcdctl snapshot restore /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db \
        --name "<ETCD_NAME>" \
        --initial-cluster="<ETCD_INITIAL_CLUSTER>" \
        --initial-cluster-token "openshift-etcd-<UUID>" \
        --initial-advertise-peer-urls "<ETCD_NODE_PEER_URL>" \
        --data-dir="/var/lib/etcd/restore-<UUID>" \
        --skip-hash-check=true

      The quotes are mandatory when regenerating the etcd database.

  7. Record the values printed in the added member logs; for example:

    Example output
    2022-06-28T19:52:43Z    info    membership/cluster.go:421   added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "56cd73b614699e7", "added-peer-peer-urls": ["https://10.0.91.5:2380"], "added-peer-is-learner": false}
    2022-06-28T19:52:43Z    info    membership/cluster.go:421   added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "1f63d01b31bb9a9e", "added-peer-peer-urls": ["https://10.0.90.221:2380"], "added-peer-is-learner": false}
    2022-06-28T19:52:43Z    info    membership/cluster.go:421   added member    {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "fdc2725b3b70127c", "added-peer-peer-urls": ["https://10.0.94.214:2380"], "added-peer-is-learner": false}
    1. Exit from the container.

    2. Repeat these steps on the other control plane hosts, checking that the values printed in the added member logs are the same for all control plane hosts.

  8. Move the regenerated etcd database to the default location.

    This operation must be executed on each control plane host.

    1. Move the regenerated database (the member folder created by the previous etcdctl snapshot restore command) to the default etcd location /var/lib/etcd:

      $ mv /var/lib/etcd/restore-<UUID>/member /var/lib/etcd
    2. Restore the SELinux context for the member folder in the /var/lib/etcd directory:

      $ restorecon -vR /var/lib/etcd/
    3. Remove the leftover files and directories:

      $ rm -rf /var/lib/etcd/restore-<UUID>
      $ rm /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db

      When you are finished, the /var/lib/etcd directory must contain only the member folder.

    4. Repeat these steps on the other control plane hosts.

  9. Restart the etcd cluster.

    1. The following steps must be executed on all control plane hosts, one host at a time.

    2. Move the etcd static pod manifest back to the kubelet manifest directory to make kubelet start the related containers:

      $ mv /root/manifests-backup/etcd-pod.yaml /etc/kubernetes/manifests
    3. Verify that all the etcd containers have started:

      $ crictl ps | grep etcd | grep -v operator
      Example output
      38c814767ad983       f79db5a8799fd2c08960ad9ee22f784b9fbe23babe008e8a3bf68323f004c840                                                         28 seconds ago       Running             etcd-health-monitor                   2                   fe4b9c3d6483c
      e1646b15207c6       9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06                                                         About a minute ago   Running             etcd-metrics                          0                   fe4b9c3d6483c
      08ba29b1f58a7       9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06                                                         About a minute ago   Running             etcd                                  0                   fe4b9c3d6483c
      2ddc9eda16f53       9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06                                                         About a minute ago   Running             etcdctl

      If the output of this command is empty, wait a few minutes and check again.

  10. Check the status of the etcd cluster.

    1. On any of the control plane hosts, check the status of the etcd cluster with the following command:

      $ crictl exec -it $(crictl ps | grep etcdctl | awk '{print $1}') etcdctl endpoint status -w table
      Example output
      +--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      | https://10.0.89.133:2379 | 682e4a83a0cec6c0 |   3.5.0 |   67 MB |      true |      false |         2 |        218 |                218 |        |
      |  https://10.0.92.74:2379 | 450bcf6999538512 |   3.5.0 |   67 MB |     false |      false |         2 |        218 |                218 |        |
      | https://10.0.93.129:2379 | 358efa9c1d91c3d6 |   3.5.0 |   67 MB |     false |      false |         2 |        218 |                218 |        |
      +--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
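      Optionally, you can also check the health of each endpoint from the same etcdctl container. This is a supplementary check and is not required by this procedure:

      $ crictl exec -it $(crictl ps | grep etcdctl | awk '{print $1}') etcdctl endpoint health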
  11. Restart the other static pods.

    The following steps must be executed on all control plane hosts, one host at a time.

    1. Move the Kubernetes API Server static pod manifest back to the kubelet manifest directory to make kubelet start the related containers with the command:

      $ mv /root/manifests-backup/kube-apiserver-pod.yaml /etc/kubernetes/manifests
    2. Verify that all the Kubernetes API Server containers have started:

      $ crictl ps | grep kube-apiserver | grep -v operator

      If the output of this command is empty, wait a few minutes and check again.

    3. Repeat the same steps for kube-controller-manager-pod.yaml and kube-scheduler-pod.yaml files.

      1. Restart the kubelets on all nodes by using the following command:

        $ systemctl restart kubelet
      2. Start the remaining control plane pods using the following command:

        $ mv /root/manifests-backup/kube-* /etc/kubernetes/manifests/
      3. Check that the kube-apiserver, kube-scheduler, and kube-controller-manager pods start correctly:

        $ crictl ps | grep -E 'kube-(apiserver|scheduler|controller-manager)' | grep -v -E 'operator|guard'
      4. Wipe the OVN databases using the following commands:

        for NODE in  $(oc get node -o name | sed 's:node/::g')
        do
          oc debug node/${NODE} -- chroot /host /bin/bash -c  'rm -f /var/lib/ovn-ic/etc/ovn*.db && systemctl restart ovs-vswitchd ovsdb-server'
          oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName=${NODE} --wait
          oc -n openshift-ovn-kubernetes wait pod -l app=ovnkube-node --field-selector=spec.nodeName=${NODE} --for condition=ContainersReady --timeout=600s
        done
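    Finally, as in the other restore procedures in this document, you can wait for the control plane to stabilize by running the following command; it can take up to 15 minutes for the control plane to recover:

    $ oc adm wait-for-stable-cluster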

Issues and workarounds for restoring a persistent storage state

If your OKD cluster uses persistent storage of any form, some state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.

The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa.

The following are some example scenarios that produce an out-of-date status:

  • A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.

  • Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.

  • Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.

  • A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

    To fix this problem, an administrator must:

    1. Manually remove the PVs with invalid devices.

    2. Remove symlinks from respective nodes.

    3. Delete the LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
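    A minimal sketch of the first two cleanup steps, with the PV name, node name, storage class, and symlink name as placeholders, and assuming the Local Storage Operator symlinks live under /mnt/local-storage/<storage_class> (verify the path against your LocalVolume configuration):

    $ oc delete pv <pv_name>
    $ oc debug node/<node_name> -- chroot /host rm /mnt/local-storage/<storage_class>/<symlink_name>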

Recovering from expired control plane certificates

The cluster can automatically recover from expired control plane certificates.

However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs.

Use the following steps to approve the pending CSRs:

Procedure
  1. Get the list of current CSRs:

    $ oc get csr
    Example output
    NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
    csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
    csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
    csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
    csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...
    1 A pending kubelet serving CSR (for user-provisioned installations).
    2 A pending node-bootstrapper CSR.
  2. Review the details of a CSR to verify that it is valid:

    $ oc describe csr <csr_name> (1)
    1 <csr_name> is the name of a CSR from the list of current CSRs.
  3. Approve each valid node-bootstrapper CSR:

    $ oc adm certificate approve <csr_name>
  4. For user-provisioned installations, approve each valid kubelet serving CSR:

    $ oc adm certificate approve <csr_name>
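    If there are many pending CSRs, you can approve all CSRs that do not yet have a status by using a pattern such as the following (a sketch; review the pending requests before approving them in bulk):

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve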

Testing restore procedures

Testing the restore procedure is important to ensure that your automation and workload handle the new cluster state gracefully. Because of the complex nature of etcd quorum, and because the etcd Operator attempts to mend it automatically, it is often difficult to bring your cluster into a sufficiently broken state that it can be restored.

You must have SSH access to the cluster. Without SSH access, your cluster might be entirely lost.

Prerequisites
  • You have SSH access to control plane hosts.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Use SSH to connect to each of your nonrecovery nodes and run the following commands to disable etcd and the kubelet service:

    1. Disable etcd by running the following command:

      $ sudo /usr/local/bin/disable-etcd.sh
    2. Delete variable data for etcd by running the following command:

      $ sudo rm -rf /var/lib/etcd
    3. Disable the kubelet service by running the following command:

      $ sudo systemctl disable kubelet.service
  2. Exit every SSH session.

  3. Run the following command to ensure that your nonrecovery nodes are in the NotReady state:

    $ oc get nodes
  4. Follow the steps in "Restoring to a previous cluster state" to restore your cluster.

  5. After you restore the cluster and the API responds, use SSH to connect to each nonrecovery node and enable the kubelet service:

    $ sudo systemctl enable kubelet.service
  6. Exit every SSH session.

  7. Run the following command to observe your nodes coming back into the Ready state:

    $ oc get nodes
  8. Run the following command to verify that etcd is available:

    $ oc get pods -n openshift-etcd
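    To narrow the output to running etcd member pods, you can reuse the label and field selectors from the quorum-restoration procedure earlier in this document:

    $ oc get pods -n openshift-etcd -l app=etcd --field-selector="status.phase==Running"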