Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform such a downgrade for the OpenShift Container Platform 3.10 to 3.9 downgrade path.
These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.
The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file. Ensure this and the scheduler.json file exist on your masters:
/etc/origin/master/master-config.yaml.<timestamp>
/etc/origin/master/scheduler.json
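A quick way to confirm that both files are present is to list them directly (a minimal check, using the paths shown above):

# ls -l /etc/origin/master/master-config.yaml.* /etc/origin/master/scheduler.json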
The procedure in Preparing for an Automated Upgrade instructed you to back up the following files before proceeding with an upgrade from OpenShift Container Platform 3.9 to 3.10; ensure that you still have these files available.
On master hosts:
/usr/lib/systemd/system/atomic-openshift-master-api.service
/usr/lib/systemd/system/atomic-openshift-master-controllers.service
/etc/sysconfig/atomic-openshift-master-api
/etc/sysconfig/atomic-openshift-master-controllers
On node and master hosts:
/usr/lib/systemd/system/atomic-openshift-*.service
/etc/origin/node/node-config.yaml
On etcd hosts, including masters that have etcd co-located on them:
/etc/etcd/etcd.conf
/backup/etcd-xxxxxx/backup.db
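If you want to verify all of the backed-up files in one pass, a simple existence check along these lines can help (a sketch; adjust the file list and the etcd-xxxxxx backup directory name to match your hosts):

# for f in /etc/etcd/etcd.conf /backup/etcd-*/backup.db; do ls -l "$f"; done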
On all master and node hosts, stop the master and node services by removing the pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
# reboot
The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed.
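You can inspect that directive directly to see which packages are currently pinned (the exact list in the output depends on which excluders are installed):

# grep ^exclude /etc/yum.conf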
On all masters, nodes, and etcd members (if using a dedicated etcd cluster), remove the following packages:
# yum remove atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    atomic-openshift-sdn-ovs \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder \
    atomic-openshift-hyperkube
Verify the packages were removed successfully:
# rpm -qa | grep atomic-openshift
On control plane hosts (master and etcd hosts), move the static pod definitions:
# mkdir /etc/origin/node/pods-backup
# mv /etc/origin/node/pods/* /etc/origin/node/pods-backup/
Reboot each host:
# reboot
Both OpenShift Container Platform 3.9 and 3.10 require Docker 1.13, so Docker does not need to be downgraded.
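If you want to confirm the installed Docker version before continuing, a quick check such as the following works:

# rpm -q docker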
Disable the OpenShift Container Platform 3.10 repositories, and re-enable the 3.9 repositories:
# subscription-manager repos \
    --disable=rhel-7-server-ose-3.10-rpms \
    --enable=rhel-7-server-ose-3.9-rpms
On each master, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-clients \
    atomic-openshift-node \
    atomic-openshift-master \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
On each node, install the following packages:
# yum install atomic-openshift \
    atomic-openshift-node \
    openvswitch \
    atomic-openshift-sdn-ovs \
    tuned-profiles-atomic-openshift-node \
    atomic-openshift-excluder \
    atomic-openshift-docker-excluder
On each host, verify the packages were installed successfully:
# rpm -qa | grep atomic-openshift
# rpm -q openvswitch
The restore procedure for etcd configuration files replaces the appropriate files, then restarts the service or static pod.
If an etcd host has become corrupted and the /etc/etcd/etcd.conf file is lost, restore it using:

$ ssh master-0
# cp /backup/yesterday/master-0-files/etcd.conf /etc/etcd/etcd.conf
# restorecon -Rv /etc/etcd/etcd.conf
In this example, the backup file is stored at the /backup/yesterday/master-0-files/etcd.conf path, which could be an external NFS share, S3 bucket, or other storage solution.
If you run etcd as a static pod, follow only the steps in that section. If you run etcd as a separate service on either master or standalone nodes, follow the steps to restore v2 or v3 data as required.
Snapshot integrity may be optionally verified at restore time. If the snapshot is taken with etcdctl snapshot save, it will have an integrity hash that is checked by etcdctl snapshot restore. If the snapshot is copied from the data directory, there is no integrity hash, and it will only restore by using --skip-hash-check.
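To inspect a snapshot before restoring it, you can run etcdctl snapshot status, which reports the snapshot's hash, revision, total key count, and size (shown here against the backup path used in this procedure):

$ ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-xxxxxx/backup.db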
The procedure to restore the data must be performed on a single etcd host. You can then add the rest of the nodes to the cluster.
Unmask the etcd service:
# systemctl unmask etcd
Stop all etcd services by removing the etcd pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
# reboot
Clear all old data, because etcdctl recreates it on the node where the restore procedure is performed:
# rm -Rf /var/lib/etcd
Run the snapshot restore command, substituting the values from the /etc/etcd/etcd.conf file:
# etcdctl3 snapshot restore /backup/etcd-xxxxxx/backup.db \
    --data-dir /var/lib/etcd \
    --name master-0.example.com \
    --initial-cluster "master-0.example.com=https://192.168.55.8:2380" \
    --initial-cluster-token "etcd-cluster-1" \
    --initial-advertise-peer-urls https://192.168.55.8:2380 \
    --skip-hash-check=true

2017-10-03 08:55:32.440779 I | mvcc: restore compact to 1041269
2017-10-03 08:55:32.468244 I | etcdserver/membership: added member 40bef1f6c79b3163 [https://192.168.55.8:2380] to cluster 26841ebcf610583c
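As an optional sanity check, you can confirm that the restore created a fresh data directory containing the member snapshot and WAL directories:

# ls /var/lib/etcd/member
snap  wal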
Restore permissions and SELinux context to the restored files:
# restorecon -Rv /var/lib/etcd
Start the etcd service:
# systemctl start etcd
Check for any error messages:
# journalctl -fu etcd.service
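Once the service is up, you can also confirm that the member answers client requests; a minimal health check against this host, using the example hostname from this procedure:

# etcdctl3 endpoint health --endpoints=https://master-0.example.com:2379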
Before restoring etcd on a static pod, the etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.
You can obtain etcd by running the following commands:
$ git clone https://github.com/coreos/etcd.git
$ cd etcd
$ ./build
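The build script places the resulting binaries under the repository's bin/ directory; assuming the build succeeded, you can verify the etcdctl binary with:

$ ETCDCTL_API=3 ./bin/etcdctl version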
To restore etcd on a static pod:
If the pod is running, stop the etcd pod by moving the pod manifest YAML file to another directory:
$ mv /etc/origin/node/pods/etcd.yaml .
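The kubelet stops the pod shortly after the manifest is moved. Assuming a Docker-based container runtime, you can confirm that no etcd container remains running before continuing:

$ docker ps | grep etcd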
Clear all old data:
$ rm -rf /var/lib/etcd
You use etcdctl to recreate the data on the node where you restore the pod.
Restore the etcd snapshot to the mount path for the etcd pod:
$ export ETCDCTL_API=3
$ etcdctl snapshot restore /etc/etcd/backup/etcd/snapshot.db \
    --data-dir /var/lib/etcd/ \
    --name ip-172-18-3-48.ec2.internal \
    --initial-cluster "ip-172-18-3-48.ec2.internal=https://172.18.3.48:2380" \
    --initial-cluster-token "etcd-cluster-1" \
    --initial-advertise-peer-urls https://172.18.3.48:2380 \
    --skip-hash-check=true
Obtain the values for your cluster from the /backup_files/etcd.conf file.
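For example, a grep along these lines pulls the relevant settings out of the backed-up configuration file (a sketch, using the backup path from this example):

$ grep -E '^ETCD_(NAME|INITIAL_CLUSTER|INITIAL_ADVERTISE_PEER_URLS)' /backup_files/etcd.conf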
Set required permissions and SELinux context on the data directory:
$ restorecon -Rv /var/lib/etcd/
Restart the etcd pod by moving the pod manifest YAML file to the required directory:
$ mv etcd.yaml /etc/origin/node/pods/.
After the first instance is running, you can add multiple etcd servers to your cluster.
Get the etcd name for the instance in the ETCD_NAME variable:

# grep ETCD_NAME /etc/etcd/etcd.conf
Get the IP address where etcd listens for peer communication:
# grep ETCD_INITIAL_ADVERTISE_PEER_URLS /etc/etcd/etcd.conf
If the node was previously part of an etcd cluster, delete the previous etcd data:
# rm -Rf /var/lib/etcd/*
On the etcd host where etcd is properly running, add the new member:
# etcdctl3 member add <name> \
    --peer-urls="<advertise_peer_urls>"
The command outputs some variables. For example:
etcd_NAME="master2" etcd_INITIAL_CLUSTER="master-0.example.com=https://192.168.55.8:2380" etcd_INITIAL_CLUSTER_STATE="existing"
Add the values from the previous command to the /etc/etcd/etcd.conf file of the new host:
# vi /etc/etcd/etcd.conf
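After editing, the entries in /etc/etcd/etcd.conf should match the values reported by member add; for the example above, the file would contain lines such as the following (an illustration, updating the existing entries for these keys rather than appending duplicates):

ETCD_NAME="master2"
ETCD_INITIAL_CLUSTER="master-0.example.com=https://192.168.55.8:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"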
Start the etcd service on the node joining the cluster:
# systemctl start etcd.service
Check for error messages:
# journalctl -fu etcd.service
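You can also confirm from any running member that the new member has joined the cluster:

# etcdctl3 member list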
Repeat the previous steps for every etcd node to be added.
Once you add all the nodes, verify the cluster status and cluster health:
# etcdctl3 endpoint health \
    --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 1.423459ms
https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.767481ms
https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.599694ms

# etcdctl3 endpoint status \
    --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
https://master-0.example.com:2379, 40bef1f6c79b3163, 3.2.5, 28 MB, true, 9, 2878
https://master-1.example.com:2379, 1ea57201a3ff620a, 3.2.5, 28 MB, false, 9, 2878
https://master-2.example.com:2379, 59229711e4bc65c8, 3.2.5, 28 MB, false, 9, 2878
After you finish your changes, bring OpenShift Container Platform back online.
On each OpenShift Container Platform master, restore your master and node configuration from backup and enable and restart all relevant services:
# cp ${MYBACKUPDIR}/etc/origin/node/pods/* /etc/origin/node/pods/
# cp ${MYBACKUPDIR}/etc/origin/master/master.env /etc/origin/master/master.env
# cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml.<timestamp> /etc/origin/master/master-config.yaml
# cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
# cp ${MYBACKUPDIR}/etc/origin/master/scheduler.json.<timestamp> /etc/origin/master/scheduler.json
# cp ${MYBACKUPDIR}/usr/lib/systemd/system/atomic-openshift-master-api.service /usr/lib/systemd/system/atomic-openshift-master-api.service
# cp ${MYBACKUPDIR}/usr/lib/systemd/system/atomic-openshift-master-controllers.service /usr/lib/systemd/system/atomic-openshift-master-controllers.service
# rm /etc/systemd/system/atomic-openshift-node.service
# systemctl daemon-reload
# master-restart api
# master-restart controllers
On each OpenShift Container Platform node, update the node configuration maps as needed, and enable and restart the atomic-openshift-node service:
# cp /etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
# rm /etc/systemd/system/atomic-openshift-node.service
# systemctl daemon-reload
# systemctl enable atomic-openshift-node
# systemctl start atomic-openshift-node
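To confirm that the node service came back cleanly on each host (a quick check, not part of the original procedure):

# systemctl is-active atomic-openshift-node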
To verify the downgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
Verify the successful downgrade of the registry and router, if deployed:
Verify you are running the v3.9 versions of the docker-registry and router images:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.9",
# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.9",
Verify that docker-registry and router pods are running and in ready state:
# oc get pods -n default
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-2-b7xbn   1/1       Running   0          18m
router-2-mvq6p            1/1       Running   0          6m
Use the diagnostics tool on the master to look for common issues and provide suggestions:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.