Restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.

About restoring cluster state

You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:

  • The cluster has lost the majority of control plane hosts (quorum loss).

  • An administrator has deleted something critical and must restore to recover the cluster.

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use it only as a last resort.

If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
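
For example, a simple way to verify that the API server can still retrieve data is to list the cluster nodes; if this succeeds, etcd is serving requests:

$ oc get nodes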

Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OKD Operators, including the network Operator.

A restore can also cause Operator churn when the content in etcd no longer matches the actual content on disk, leaving the Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd stuck until the conflict is resolved. Resolving these issues can require manual intervention.

In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.

Restoring to a previous cluster state for a single node

You can use a saved etcd backup to restore a previous cluster state on a single node.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.y.z cluster must use an etcd backup that was taken from 4.y.z.
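
If the API is still reachable, you can confirm the release that your cluster is currently running, for example:

$ oc get clusterversion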

Prerequisites
  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • You have SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
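
For example, listing the backup directory should show both files, named using the formats described above:

$ ls <etcd_backup_directory>
snapshot_<datetimestamp>.db
static_kuberesources_<datetimestamp>.tar.gz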

Procedure
  1. Use SSH to connect to the single node and copy the etcd backup to the /home/core directory by running the following command:

    $ cp -r <etcd_backup_directory> /home/core
  2. Run the following command on the single node to restore the cluster from a previous backup:

    $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd_backup_directory>
  3. Exit the SSH session.

  4. Monitor the recovery progress of the control plane by running the following command:

    $ oc adm wait-for-stable-cluster

    It can take up to 15 minutes for the control plane to recover.

Restoring to a previous cluster state

You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

For high availability (HA) clusters, a three-node HA cluster requires you to shut down etcd on two hosts to avoid a cluster split. Quorum requires a simple majority of nodes; on a three-node HA cluster, that minimum is two nodes. If you start a new cluster from the backup on your recovery host while etcd is still running on the other hosts, the other etcd members might still be able to form quorum and continue service.

If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. For OKD on a single node, see "Restoring to a previous cluster state for a single node".

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.y.z cluster must use an etcd backup that was taken from 4.y.z.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • A healthy control plane host to use as the recovery host.

  • You have SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate the other, non-recovery control plane machines one by one.
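
For example, on clusters whose control plane hosts are managed through the Machine API, the following sketch lists the control plane machines and deletes one non-recovery machine. The machine name is a placeholder; if the cluster does not use a control plane machine set, you must recreate the deleted machine yourself:

$ oc get machines -n openshift-machine-api -o wide
$ oc delete machine -n openshift-machine-api <non_recovery_control_plane_machine>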

Procedure
  1. Select a control plane host to use as the recovery host. This is the host that you run the restore operation on.

  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host.

    The kube-apiserver becomes inaccessible after the restore process starts, so you cannot reach the control plane nodes through the API. For this reason, it is recommended that you establish SSH connectivity to each control plane host in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

  3. Using SSH, connect to each control plane node and run the following command to disable etcd:

    $ sudo -E /usr/local/bin/disable-etcd.sh
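
    One way to confirm that etcd is no longer running on a node, assuming the crictl CLI is available on the host, is to list the running containers and verify that no etcd member containers remain (unrelated matches, such as Operator or guard containers, can be ignored):

    $ sudo crictl ps | grep etcd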
  4. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
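
    For example, if the backup directory is on your workstation, one way to copy it over SSH, assuming you connect as the core user, is:

    $ scp -r <etcd_backup_directory> core@<recovery_host>:/home/core/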

  5. Use SSH to connect to the recovery host and restore the cluster from a previous backup by running the following command:

    $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory>
  6. Exit the SSH session.

  7. Once the API responds, turn off the etcd Operator quorum guard by running the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
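
    As an optional check, you can confirm that the override was applied by inspecting the etcd cluster resource:

    $ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'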
  8. Monitor the recovery progress of the control plane by running the following command:

    $ oc adm wait-for-stable-cluster

    It can take up to 15 minutes for the control plane to recover.
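
    For more detail while you wait, you can also watch the cluster Operators and nodes converge, for example:

    $ oc get clusteroperators
    $ oc get nodes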

  9. Once recovered, enable the quorum guard by running the following command:

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
Troubleshooting

If you see no progress rolling out the etcd static pods, you can force redeployment from the cluster-etcd-operator by running the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$(date --rfc-3339=ns )"'"}}' --type=merge
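
You can then watch the etcd static pods roll out on the control plane nodes, for example:

$ oc get pods -n openshift-etcd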

Issues and workarounds for restoring a persistent storage state

If your OKD cluster uses persistent storage of any form, part of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.

The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.

The following are some example scenarios that produce an out-of-date status:

  • A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.

  • Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod is using the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly because the volume is still attached to node Y. OKD is not aware of the attachment and does not automatically detach the volume. When this occurs, the volume must be manually detached from node Y so that it can attach to node X, and then pod P1 can start.

  • Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.

  • A device is removed from or renamed on OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

    To fix this problem, an administrator must:

    1. Manually remove the PVs with invalid devices.

    2. Remove symlinks from respective nodes.

    3. Delete the LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
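
    A minimal sketch of these steps, assuming the default openshift-local-storage namespace, the Operator's default /mnt/local-storage symlink directory, and placeholder resource, node, and path names, might look like the following:

    $ oc delete pv <pv_name>
    $ oc debug node/<node_name> -- chroot /host rm -f /mnt/local-storage/<storage_class_name>/<device_name>
    $ oc delete localvolume <local_volume_name> -n openshift-local-storage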