Downgrading

Overview

Following an OpenShift Container Platform upgrade, it may be desirable in extreme cases to downgrade your cluster to a previous version. The following sections outline the required steps for each system in a cluster to perform such a downgrade for the OpenShift Container Platform 3.10 to 3.9 downgrade path.

These steps are currently only supported for RPM-based installations of OpenShift Container Platform and assume downtime of the entire cluster.

Verifying Backups

  1. The Ansible playbook used during the upgrade process should have created a backup of the master-config.yaml file. Ensure this and the scheduler.json file exist on your masters:

    /etc/origin/master/master-config.yaml.<timestamp>
    /etc/origin/master/scheduler.json
  2. The procedure in Preparing for an Automated Upgrade instructed you to back up the following files before proceeding with an upgrade from OpenShift Container Platform 3.9 to 3.10; ensure that you still have these files available. A quick existence check is sketched after this list.

    On master hosts:

    /usr/lib/systemd/system/atomic-openshift-master-api.service
    /usr/lib/systemd/system/atomic-openshift-master-controllers.service
    /etc/sysconfig/atomic-openshift-master-api
    /etc/sysconfig/atomic-openshift-master-controllers

    On node and master hosts:

    /usr/lib/systemd/system/atomic-openshift-*.service
    /etc/origin/node/node-config.yaml

    On etcd hosts, including masters that have etcd co-located on them:

    /etc/etcd/etcd.conf
    /backup/etcd-xxxxxx/backup.db
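
As a quick sanity check on a master host, you can confirm that the backed-up files listed above are still present before continuing. The following loop is only a sketch; extend the file list as needed for your node and etcd hosts:

    # ls -l /etc/origin/master/master-config.yaml.* /etc/origin/master/scheduler.json
    # for f in /etc/sysconfig/atomic-openshift-master-api \
               /etc/sysconfig/atomic-openshift-master-controllers \
               /usr/lib/systemd/system/atomic-openshift-master-api.service \
               /usr/lib/systemd/system/atomic-openshift-master-controllers.service; do
          [ -e "$f" ] || echo "MISSING: $f"
      done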

Shutting Down the Cluster

  1. On all master and node hosts, stop the master and node services by removing the pod definition and rebooting the host:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    # reboot
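
After a master or etcd host comes back up, you can confirm that the control plane containers were not restarted. This check is only a sketch; it assumes the Docker container runtime used by OpenShift Container Platform 3.10, and the container name patterns are illustrative:

    # docker ps --format '{{.Names}}' | grep -E 'api|controllers|etcd' \
        || echo "no control plane containers running"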

Removing RPMs and Static Pods

When installed, the *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file.
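
You can inspect that directive directly; the package list in the sample output below is illustrative and varies by release:

    # grep ^exclude /etc/yum.conf
    exclude=atomic-openshift* docker*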

  1. On all masters, nodes, and etcd members (if using a dedicated etcd cluster), remove the following packages:

    # yum remove atomic-openshift \
        atomic-openshift-clients \
        atomic-openshift-node \
        atomic-openshift-master \
        atomic-openshift-sdn-ovs \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder \
        atomic-openshift-hyperkube
  2. Verify the packages were removed successfully:

    # rpm -qa | grep atomic-openshift
  3. On control plane hosts (master and etcd hosts), move the static pod definitions:

    # mkdir /etc/origin/node/pods-backup
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-backup/
  4. Reboot each host:

    # reboot

Downgrading Docker

Both OpenShift Container Platform 3.9 and 3.10 require Docker 1.13, so Docker does not need to be downgraded.

Reinstalling RPMs

  1. Disable the OpenShift Container Platform 3.10 repositories, and re-enable the 3.9 repositories:

    # subscription-manager repos \
        --disable=rhel-7-server-ose-3.10-rpms \
        --enable=rhel-7-server-ose-3.9-rpms
  2. On each master, install the following packages:

    # yum install atomic-openshift \
        atomic-openshift-clients \
        atomic-openshift-node \
        atomic-openshift-master \
        openvswitch \
        atomic-openshift-sdn-ovs \
        tuned-profiles-atomic-openshift-node \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder
  3. On each node, install the following packages:

    # yum install atomic-openshift \
        atomic-openshift-node \
        openvswitch \
        atomic-openshift-sdn-ovs \
        tuned-profiles-atomic-openshift-node \
        atomic-openshift-excluder \
        atomic-openshift-docker-excluder
  4. On each host, verify the packages were installed successfully:

    # rpm -qa | grep atomic-openshift
    # rpm -q openvswitch
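
Beyond confirming that the packages are present, you can check that they came from the 3.9 channel. This is a minimal sketch; the repository listing and version strings depend on your entitlements and the exact 3.9 release:

    # subscription-manager repos --list-enabled | grep ose
    Repo ID:   rhel-7-server-ose-3.9-rpms
    # rpm -q atomic-openshift
    atomic-openshift-3.9.<z>-1.git.<commit>.el7.x86_64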

Restoring etcd

The restore procedure for etcd configuration files replaces the appropriate files, then restarts the service or static pod.

If an etcd host has become corrupted and the /etc/etcd/etcd.conf file is lost, restore it using:

$ ssh master-0
# cp /backup/yesterday/master-0-files/etcd.conf /etc/etcd/etcd.conf
# restorecon -Rv /etc/etcd/etcd.conf

In this example, the backup file is stored at the /backup/yesterday/master-0-files/etcd.conf path; this location could be an external NFS share, an S3 bucket, or another storage solution.

If you run etcd as a static pod, follow only the steps in that section. If you run etcd as a separate service on either master or standalone nodes, follow the steps to restore v2 or v3 data as required.

Restoring etcd v3 snapshot

Snapshot integrity may be optionally verified at restore time. If the snapshot was taken with etcdctl snapshot save, it has an integrity hash that etcdctl snapshot restore checks. If the snapshot was copied from the data directory, it has no integrity hash and will only restore if you use --skip-hash-check.
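
If you took the snapshot with etcdctl snapshot save, you can inspect its hash, revision, key count, and size before restoring; a minimal sketch, with illustrative output:

    $ export ETCDCTL_API=3
    $ etcdctl snapshot status /backup/etcd-xxxxxx/backup.db
    cf1550fb, 1041269, 1342, 26 MB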

The procedure to restore the data must be performed on a single etcd host. You can then add the rest of the nodes to the cluster.

Procedure

  1. Unmask the etcd service:

    # systemctl unmask etcd
  2. Stop all etcd services by removing the etcd pod definition and rebooting the host:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    # reboot
  3. Clear all old data, because etcdctl recreates it on the node where you perform the restore:

    # rm -Rf /var/lib/etcd
  4. Run the snapshot restore command, substituting the values from the /etc/etcd/etcd.conf file (a sketch for reading these values follows this procedure):

    # etcdctl3 snapshot restore /backup/etcd-xxxxxx/backup.db \
      --data-dir /var/lib/etcd \
      --name master-0.example.com \
      --initial-cluster "master-0.example.com=https://192.168.55.8:2380" \
      --initial-cluster-token "etcd-cluster-1" \
      --initial-advertise-peer-urls https://192.168.55.8:2380 \
      --skip-hash-check=true
    
    2017-10-03 08:55:32.440779 I | mvcc: restore compact to 1041269
    2017-10-03 08:55:32.468244 I | etcdserver/membership: added member 40bef1f6c79b3163 [https://192.168.55.8:2380] to cluster 26841ebcf610583c
  5. Restore permissions and SELinux context to the restored files:

    # restorecon -Rv /var/lib/etcd
  6. Start the etcd service:

    # systemctl start etcd
  7. Check for any error messages:

    # journalctl -fu etcd.service
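
The values passed to the restore command in step 4 map directly onto keys in /etc/etcd/etcd.conf; a minimal sketch for reading them, with output mirroring the example above (actual values may be quoted in your file):

    # grep -E '^ETCD_(NAME|INITIAL_ADVERTISE_PEER_URLS|INITIAL_CLUSTER|INITIAL_CLUSTER_TOKEN)=' /etc/etcd/etcd.conf
    ETCD_NAME=master-0.example.com
    ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.55.8:2380
    ETCD_INITIAL_CLUSTER=master-0.example.com=https://192.168.55.8:2380
    ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1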

Restoring etcd on a static pod

Before restoring etcd on a static pod:

  • etcdctl binaries must be available or, in containerized installations, the rhel7/etcd container must be available.

    You can obtain etcd by running the following commands:

    $ git clone https://github.com/coreos/etcd.git
    $ cd etcd
    $ ./build

To restore etcd on a static pod:

  1. If the pod is running, stop the etcd pod by moving the pod manifest YAML file to another directory:

    $ mv /etc/origin/node/pods/etcd.yaml .
  2. Clear all old data:

    $ rm -rf /var/lib/etcd

    You use etcdctl to recreate the data on the node where you restore the pod.

  3. Restore the etcd snapshot to the mount path for the etcd pod:

    $ export ETCDCTL_API=3
    $ etcdctl snapshot restore /etc/etcd/backup/etcd/snapshot.db \
        --data-dir /var/lib/etcd/ \
        --name ip-172-18-3-48.ec2.internal \
        --initial-cluster "ip-172-18-3-48.ec2.internal=https://172.18.3.48:2380" \
        --initial-cluster-token "etcd-cluster-1" \
        --initial-advertise-peer-urls https://172.18.3.48:2380 \
        --skip-hash-check=true

    Obtain the values for your cluster from the /backup_files/etcd.conf file.

  4. Set required permissions and selinux context on the data directory:

    $ restorecon -Rv /var/lib/etcd/
  5. Restart the etcd pod by moving the pod manifest YAML file to the required directory (a health check is sketched after this procedure):

    $ mv etcd.yaml /etc/origin/node/pods/.
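
After the kubelet restarts the pod from the restored manifest, you can check etcd health from the host. This is only a sketch; it assumes the standard OpenShift Container Platform etcd certificate paths and the client URL of the example host above, and the output is illustrative:

    $ export ETCDCTL_API=3
    $ etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key --cacert /etc/etcd/ca.crt \
        --endpoints https://172.18.3.48:2379 endpoint health
    https://172.18.3.48:2379 is healthy: successfully committed proposal: took = 1.5ms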

Adding etcd nodes after restoring

After the first instance is running, you can add multiple etcd servers to your cluster.

Procedure
  1. Get the etcd name for the instance in the ETCD_NAME variable:

    # grep ETCD_NAME /etc/etcd/etcd.conf
  2. Get the IP address where etcd listens for peer communication:

    # grep ETCD_INITIAL_ADVERTISE_PEER_URLS /etc/etcd/etcd.conf
  3. If the node was previously part of an etcd cluster, delete the previous etcd data:

    # rm -Rf /var/lib/etcd/*
  4. On the etcd host where etcd is properly running, add the new member:

    # etcdctl3 member add <name> \
      --peer-urls="<advertise_peer_urls>"

    The command outputs some variables. For example:

    ETCD_NAME="master2"
    ETCD_INITIAL_CLUSTER="master-0.example.com=https://192.168.55.8:2380"
    ETCD_INITIAL_CLUSTER_STATE="existing"
  5. Add the values from the previous command to the /etc/etcd/etcd.conf file of the new host:

    # vi /etc/etcd/etcd.conf
  6. Start the etcd service in the node joining the cluster:

    # systemctl start etcd.service
  7. Check for error messages:

    # journalctl -fu etcd.service
  8. Repeat the previous steps for every etcd node to be added.

  9. Once you add all the nodes, verify the cluster status and cluster health:

    # etcdctl3 endpoint health --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
    https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 1.423459ms
    https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.767481ms
    https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.599694ms
    
    # etcdctl3 endpoint status --endpoints="https://<etcd_host1>:2379,https://<etcd_host2>:2379,https://<etcd_host3>:2379"
    https://master-0.example.com:2379, 40bef1f6c79b3163, 3.2.5, 28 MB, true, 9, 2878
    https://master-1.example.com:2379, 1ea57201a3ff620a, 3.2.5, 28 MB, false, 9, 2878
    https://master-2.example.com:2379, 59229711e4bc65c8, 3.2.5, 28 MB, false, 9, 2878
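
You can also list the members to confirm that every node you added is registered. This is a minimal sketch; the member IDs, names, and URLs in the output are illustrative:

    # etcdctl3 member list
    40bef1f6c79b3163, started, master-0.example.com, https://192.168.55.8:2380, https://192.168.55.8:2379
    1ea57201a3ff620a, started, master-1.example.com, https://192.168.55.12:2380, https://192.168.55.12:2379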

Bringing OpenShift Container Platform services back online

After you finish your changes, bring OpenShift Container Platform back online.

Procedure

  1. On each OpenShift Container Platform master, restore your master and node configuration from backup and enable and restart all relevant services:

    # cp ${MYBACKUPDIR}/etc/origin/node/pods/* /etc/origin/node/pods/
    # cp ${MYBACKUPDIR}/etc/origin/master/master.env /etc/origin/master/master.env
    # cp ${MYBACKUPDIR}/etc/origin/master/master-config.yaml.<timestamp> /etc/origin/master/master-config.yaml
    # cp ${MYBACKUPDIR}/etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
    # cp ${MYBACKUPDIR}/etc/origin/master/scheduler.json.<timestamp> /etc/origin/master/scheduler.json
    # cp ${MYBACKUPDIR}/usr/lib/systemd/system/atomic-openshift-master-api.service /usr/lib/systemd/system/atomic-openshift-master-api.service
    # cp ${MYBACKUPDIR}/usr/lib/systemd/system/atomic-openshift-master-controllers.service /usr/lib/systemd/system/atomic-openshift-master-controllers.service
    # rm /etc/systemd/system/atomic-openshift-node.service
    # systemctl daemon-reload
    # master-restart api
    # master-restart controllers
  2. On each OpenShift Container Platform node, update the node configuration maps as needed, and enable and restart the atomic-openshift-node service:

    # cp /etc/origin/node/node-config.yaml.<timestamp> /etc/origin/node/node-config.yaml
    # rm /etc/systemd/system/atomic-openshift-node.service
    # systemctl daemon-reload
    # systemctl enable atomic-openshift-node
    # systemctl start atomic-openshift-node
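
To confirm that the node service came back up cleanly on each node, you can check its status and recent log output; a minimal sketch:

    # systemctl is-active atomic-openshift-node
    active
    # journalctl -u atomic-openshift-node -n 20 --no-pager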

Verifying the Downgrade

  1. To verify the downgrade, first check that all nodes are marked as Ready:

    # oc get nodes
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. Verify the successful downgrade of the registry and router, if deployed:

    1. Verify you are running the v3.9 versions of the docker-registry and router images:

      # oc get -n default dc/docker-registry -o json | grep \"image\"
          "image": "openshift3/ose-docker-registry:v3.9",
      # oc get -n default dc/router -o json | grep \"image\"
          "image": "openshift3/ose-haproxy-router:v3.9",
    2. Verify that docker-registry and router pods are running and in ready state:

      # oc get pods -n default
      
      NAME                       READY     STATUS    RESTARTS   AGE
      docker-registry-2-b7xbn    1/1       Running   0          18m
      router-2-mvq6p             1/1       Running   0          6m
  3. Use the diagnostics tool on the master to look for common issues and provide suggestions:

    # oc adm diagnostics
    ...
    [Note] Summary of diagnostics execution:
    [Note] Completed with no errors or warnings seen.