To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.
You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:
The cluster has lost the majority of control plane hosts (quorum loss).
An administrator has deleted something critical and must restore to recover the cluster.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers.
It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues.
In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
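You can confirm the z-stream version of the running cluster before you choose a backup. A minimal check, assuming cluster-admin access:
$ oc get clusterversion
Compare the reported version with the release from which your etcd backup was taken.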
Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.
A healthy control plane host to use as the recovery host.
SSH access to control plane hosts.
A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery control plane machines, one by one.
Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
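For example, you might open one terminal per control plane host from a workstation that can reach them; a hedged sketch, assuming key-based SSH access as the core user:
$ ssh -i <ssh_key_path> core@<control_plane_host>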
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
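One way to copy the directory is with scp from the host that holds the backup; a hedged example, assuming the backup is in a local ./backup directory and SSH access as the core user:
$ scp -r ./backup core@<recovery_host>:/home/core/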
Stop the static pods on any other control plane nodes.
You do not need to stop the static pods on the recovery host.
Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:
$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped.
$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the Kubernetes API server pods are stopped.
$ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the etcd data directory to a different location:
$ sudo mv -v /var/lib/etcd/ /tmp
If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps:
Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory:
$ sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp
Verify that any containers managed by the keepalived daemon are stopped:
$ sudo crictl ps --name keepalived
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Check if the control plane has any Virtual IPs (VIPs) assigned to it:
$ ip -o address | egrep '<api_vip>|<ingress_vip>'
For each reported VIP, run the following command to remove it:
$ sudo ip address del <reported_vip> dev <reported_vip_device>
Repeat this step on each of the other control plane hosts that is not the recovery host.
Access the recovery control plane host.
If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP:
$ ip -o address | grep <api_vip>
The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
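If a proxy is configured, you can export the variables in the same shell session before you run the restore script; a hedged sketch with placeholder values that you replace with your proxy configuration:
$ export HTTP_PROXY=http://<proxy_host>:<proxy_port>
$ export HTTPS_PROXY=http://<proxy_host>:<proxy_port>
$ export NO_PROXY=<no_proxy_list>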
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup
...stopping kube-scheduler-pod.yaml
...stopping kube-controller-manager-pod.yaml
...stopping etcd-pod.yaml
...stopping kube-apiserver-pod.yaml
Waiting for container etcd to stop
.complete
Waiting for container etcdctl to stop
.............................complete
Waiting for container etcd-metrics to stop
complete
Waiting for container kube-controller-manager to stop
complete
Waiting for container kube-apiserver to stop
..........................................................................................complete
Waiting for container kube-scheduler to stop
complete
Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
starting restore-etcd static pod
starting kube-apiserver-pod.yaml
static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
starting kube-controller-manager-pod.yaml
static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
starting kube-scheduler-pod.yaml
static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.
Check the nodes to ensure they are in the Ready state.
Run the following command:
$ oc get nodes -w
NAME STATUS ROLES AGE VERSION
host-172-25-75-28 Ready master 3d20h v1.25.0
host-172-25-75-38 Ready infra,worker 3d20h v1.25.0
host-172-25-75-40 Ready master 3d20h v1.25.0
host-172-25-75-65 Ready master 3d20h v1.25.0
host-172-25-75-74 Ready infra,worker 3d20h v1.25.0
host-172-25-75-79 Ready worker 3d20h v1.25.0
host-172-25-75-86 Ready worker 3d20h v1.25.0
host-172-25-75-98 Ready infra,worker 3d20h v1.25.0
It can take several minutes for all nodes to report their state.
If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.
$ ssh -i <ssh-key-path> core@<master-hostname>
Example pki directory:
sh-4.4# pwd
/var/lib/kubelet/pki
sh-4.4# ls
kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem
kubelet-client-current.pem kubelet-server-current.pem
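To remove the PEM files on a NotReady node, a hedged sketch to run from an SSH session on that node; the kubelet requests new certificates after it is restarted in a later step:
$ sudo rm -f /var/lib/kubelet/pki/*.pem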
Restart the kubelet service on all control plane hosts.
From the recovery host, run the following command:
$ sudo systemctl restart kubelet.service
Repeat this step on all other control plane hosts.
Approve the pending CSRs:
Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.
Get the list of current CSRs:
$ oc get csr
NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending (1)
csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
...
1 | A pending kubelet service CSR (for user-provisioned installations). |
2 | A pending node-bootstrapper CSR. |
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet service CSR:
$ oc adm certificate approve <csr_name>
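If many CSRs are pending and you have already confirmed that they are valid, you can optionally approve them in bulk; a hedged convenience sketch that approves every CSR that does not yet have a status:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve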
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
From the recovery host, verify that the etcd pod is running.
$ oc -n openshift-etcd get pods -l k8s-app=etcd
NAME READY STATUS RESTARTS AGE
etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s
If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
If you are using the OVNKubernetes network plugin, delete the node objects that are associated with control plane hosts that are not the recovery control plane host.
$ oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2>
Verify that the Cluster Network Operator (CNO) redeploys the OVN-Kubernetes control plane and that it no longer references the non-recovery controller IP addresses. To verify this result, regularly check the output of the following command. Wait until it returns an empty result before you proceed to restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts in the next step.
$ oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'
It can take at least 5-10 minutes for the OVN-Kubernetes control plane to be redeployed and the previous command to return empty output.
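Instead of re-running the check by hand, you can poll it; a hedged sketch that loops until the grep output is empty (substitute the real controller IP addresses):
$ while oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'; do sleep 30; done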
If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts.
Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.
Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail.
Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run the following command:
$ sudo rm -f /var/lib/ovn/etc/*.db
Delete all OVN-Kubernetes control plane pods by running the following command:
$ oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes
Ensure that any OVN-Kubernetes control plane pods are deployed again and are in a Running state by running the following command:
$ oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes
NAME READY STATUS RESTARTS AGE
ovnkube-master-nb24h 4/4 Running 0 48s
Delete all ovnkube-node pods by running the following command:
$ oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete $p -n openshift-ovn-kubernetes ; done
Ensure that all the ovnkube-node pods are deployed again and are in a Running state by running the following command:
$ oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node
Delete and re-create the other non-recovery control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.
If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".
Do not delete and re-create the machine for the recovery host.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:
Do not delete and re-create the machine for the recovery host. For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 | This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal. |
Delete the machine of the lost control plane host by running:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
1 | Specify the name of the control plane machine for the lost control plane host. |
A new machine is automatically provisioned after deleting the machine of the lost control plane host.
Verify that a new machine has been created by running:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
1 | The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running. |
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
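You can watch the new machine move from the Provisioning phase to the Running phase; for example:
$ oc get machines -n openshift-machine-api -w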
Repeat these steps for each lost control plane host that is not the recovery host.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
In a separate terminal window within the recovery host, export the recovery kubeconfig file by running the following command:
$ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig
Force etcd redeployment.
In the same terminal window where you exported the recovery kubeconfig file, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
1 | The forceRedeploymentReason value must be unique, which is why a timestamp is appended. |
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:
$ oc get etcd/cluster -oyaml
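If you prefer a targeted check over scanning the full YAML, a hedged alternative that prints only that field; empty output means the override has been removed:
$ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'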
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 | In this example, the latest revision number is 7 . |
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
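To avoid re-running this check manually, you can poll until the condition reports AllNodesAtLatestRevision; a hedged sketch (the same pattern applies to the kubeapiserver, kubecontrollermanager, and kubescheduler checks later in this procedure):
$ until oc get etcd -o=jsonpath='{.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")].reason}' | grep -q AllNodesAtLatestRevision; do sleep 30; done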
After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a cluster-admin user, run the following commands.
Force a new rollout for the Kubernetes API server:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 | In this example, the latest revision number is 7 . |
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes controller manager:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 | In this example, the latest revision number is 7 . |
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes scheduler:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 (1)
1 | In this example, the latest revision number is 7 . |
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h
etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h
To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components.
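For example, you can restart the default ingress router pods by deleting them and letting their deployment re-create them; a hedged sketch that you can adapt to the namespaces and selectors of your other components:
$ oc -n openshift-ingress delete pods --all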
On completion of the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not work immediately until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication.
Issue the following command to display your authenticated user name:
$ oc whoami
If your OpenShift Container Platform cluster uses persistent storage of any form, some cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must:
Manually remove the PVs with invalid devices.
Remove symlinks from the respective nodes (a hedged sketch follows this list).
Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
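For the symlink removal step above, a hedged sketch that assumes the Local Storage Operator's default mount path of /mnt/local-storage/<storage_class_name> on each affected node; both placeholders are values that you substitute for your environment:
$ oc debug node/<node_name> -- chroot /host rm /mnt/local-storage/<storage_class_name>/<device_name>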