Automated In-place Upgrades | Upgrading Clusters | OKD 3.9

Overview

If you installed using the advanced installation method and the inventory file that was used is available, you can use upgrade playbooks to automate the cluster upgrade process.

The OKD 3.9 release includes a merge of features and fixes from Kubernetes 1.8 and 1.9. As a result, the upgrade process from OKD 3.7 completes with the cluster fully upgraded to OKD 3.9, seemingly "skipping" the 3.8 release. Technically, the OKD 3.7 cluster is first upgraded to 3.8-versioned packages, and then the process immediately continues upgrading to OKD 3.9 automatically. Your cluster should only remain at 3.8-versioned packages for as long as it takes to successfully complete the upgrade to OKD 3.9.

As of OKD 3.9, the quick installation method is deprecated. In a future release, it will be removed completely. In addition, using the quick installer to upgrade from version 3.7 to 3.9 is not supported.

The automated 3.7 to 3.9 control plane upgrade performs the following steps for you:

  • A backup of all etcd data is taken for recovery purposes.

  • The API and controllers are updated from 3.7 to 3.8.

  • Internal data structures are updated to 3.8.

  • A second backup of all etcd data is taken for recovery purposes.

  • The API and controllers are updated from 3.8 to 3.9.

  • Internal data structures are updated to 3.9.

  • The default router, if one exists, is updated from 3.7 to 3.9.

  • The default registry, if one exists, is updated from 3.7 to 3.9.

  • The default image streams and InstantApp templates are updated.

The automated 3.7 to 3.9 node upgrade performs a rolling update of nodes, which:

  • Marks a subset of nodes unschedulable and drains them of pods.

  • Updates node components from 3.7 to 3.9 (including openvswitch and container runtime).

  • Returns those nodes to service.

  • Ensure that you have met all prerequisites before proceeding with an upgrade. Failure to do so can result in a failed upgrade.

  • If you are using GlusterFS, see Special Considerations When Using Containerized GlusterFS before proceeding.

  • If you are using GCE Persistent Disk (gcePD), see Special Considerations When Using gcePD before proceeding.

  • The day before the upgrade, validate OKD storage migration to ensure potential issues are resolved prior to the outage window:

    $ oc adm migrate storage --include=* --loglevel=2 --confirm --config /etc/origin/master/admin.kubeconfig
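
    Omitting the --confirm option performs a dry run: the command reports what would be migrated without writing any changes, so you can preview the migration first:

    $ oc adm migrate storage --include=* --loglevel=2 --config /etc/origin/master/admin.kubeconfig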

Automated upgrade playbooks are run via Ansible directly using the ansible-playbook command with an inventory file, similar to the advanced installation method. The same v3_9 upgrade playbooks can be used for either of the following scenarios:

  • Upgrading existing OKD 3.7 clusters to 3.9.

  • Upgrading existing OKD 3.9 clusters to the latest 3.9 release.

Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.

Running Upgrade Playbooks

Ensure that you have the latest openshift-ansible code checked out:

# cd ~/openshift-ansible
# git pull https://github.com/openshift/openshift-ansible master

Then run one of the following upgrade playbooks utilizing the inventory file you used during the advanced installation. If your inventory file is located somewhere other than the default /etc/ansible/hosts, add the -i flag to specify the location.

Upgrading to OpenShift Origin 1.1

To upgrade from OpenShift Origin 1.0 to 1.1, run the following playbook:

# ansible-playbook \
    -i </path/to/inventory/file> \
    playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml

The v3_0_to_v3_1 in the above path refers to the related OpenShift Enterprise versions; however, it is also the correct playbook to use when upgrading from OpenShift Origin 1.0 to 1.1.

Upgrading to OpenShift Origin 1.1.z Releases

To upgrade an existing OpenShift Origin 1.1 cluster to the latest 1.1.z release, run the following playbook:

# ansible-playbook \
    -i </path/to/inventory/file> \
    playbooks/byo/openshift-cluster/upgrades/v3_1_minor/upgrade.yml

The v3_1_minor in the above path refers to the related OpenShift Enterprise versions; however, it is also the correct playbook to use when upgrading from OpenShift Origin 1.1 to the latest 1.1.z release.

After rebooting, continue to Verifying the Upgrade.

  1. You must disable swap memory in your cluster before upgrading to OKD 3.9; otherwise, the upgrade will fail. Whether swap memory was enabled using openshift_disable_swap=false in your Ansible inventory file or enabled manually per host, see Disabling Swap Memory in the Cluster Administration guide to disable it on each host.

    1. For any upgrade path, always ensure that you have the latest version of the atomic-openshift-utils package on each RHEL 7 system, which also updates the openshift-ansible-* packages:

      # yum update atomic-openshift-utils
    2. If you have applied manual configuration changes to your master or node configuration files since your last Ansible playbook run (whether during initial installation or your most recent cluster upgrade), and you have not yet made the equivalent changes to your inventory file, review Configuring Ansible Inventory Files. For any variables relevant to the manual changes you made, apply the equivalent changes to your inventory file before running the upgrade. Otherwise, your manual changes might be overwritten by default values during the upgrade, which could cause pods to not run properly or create other cluster stability issues.

      In particular, if you made any changes to admissionConfig settings in your master configuration files, review the openshift_master_admission_plugin_config variable in Configuring Ansible Inventory Files, as in the sketch below. Failure to do so could cause pods to become stuck in the Pending state if you previously had ClusterResourceOverride settings manually configured (as described in Configuring Masters for Overcommitment).
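
      For illustration only, an inventory entry carrying a manually configured ClusterResourceOverride admission plug-in into Ansible might look like the following (the percentage values are placeholders; copy the actual values from your master configuration file):

      openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":25,"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200}}}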

After satisfying these steps, you can review the following sections for more information on how the upgrade process works and make decisions on additional upgrade customization options if you so choose. When you are prepared to run the upgrade, you can continue to Upgrading to the Latest OKD 3.9 Release.

Upgrading the Control Plane and Nodes in Separate Phases

An OKD cluster can be upgraded in one or more phases. You can choose whether to upgrade all hosts in one phase by running a single Ansible playbook, or upgrade the control plane (master components) and nodes in multiple phases using separate playbooks.

Instructions on the full upgrade process and when to call these playbooks are described in Upgrading to the Latest OKD 3.9 Release.

If your OKD cluster uses GlusterFS pods, you must perform the upgrade in multiple phases. See Special Considerations When Using Containerized GlusterFS for details on how to upgrade with GlusterFS.

When upgrading in separate phases, the control plane phase includes upgrading:

  • master components

  • node services running on masters

  • Docker running on masters

  • Docker running on any stand-alone etcd hosts

When upgrading only the nodes, the control plane must already be upgraded. The node phase includes upgrading:

  • node services running on stand-alone nodes

  • Docker running on stand-alone nodes

Nodes running master components are not included during the node upgrade phase, even though they have node services and Docker running on them. Instead, they are upgraded as part of the control plane upgrade phase. This ensures node services and Docker on masters are not upgraded twice (once during the control plane phase and again during the node phase).

Customizing Node Upgrades

Whether upgrading in a single or multiple phases, you can customize how the node portion of the upgrade progresses by passing certain Ansible variables to an upgrade playbook using the -e option.

Instructions on the full upgrade process and when to call these playbooks are described in Upgrading to the Latest OKD 3.9 Release.

The openshift_upgrade_nodes_serial variable can be set to an integer or percentage to control how many node hosts are upgraded at the same time. The default is 1, upgrading nodes one at a time.

For example, to upgrade 20 percent of the total number of detected nodes at a time:

$ ansible-playbook -i <path/to/inventory/file> \
    </path/to/upgrade/playbook> \
    -e openshift_upgrade_nodes_serial="20%"

The openshift_upgrade_nodes_label variable allows you to specify that only nodes with a certain label are upgraded. This can also be combined with the openshift_upgrade_nodes_serial variable.

For example, to only upgrade nodes in the group1 region, two at a time:

$ ansible-playbook -i <path/to/inventory/file> \
    </path/to/upgrade/playbook> \
    -e openshift_upgrade_nodes_serial="2" \
    -e openshift_upgrade_nodes_label="region=group1"

See Managing Nodes for more on node labels.

The openshift_upgrade_nodes_max_fail_percentage variable allows you to specify how many nodes may fail in each batch. If the percentage of failed nodes exceeds your value, the playbook aborts the upgrade.

The openshift_upgrade_nodes_drain_timeout variable allows you to specify the length of time, in seconds, to wait for a node to finish draining before giving up.

In this example, 10 nodes are upgraded at a time, the upgrade will abort if more than 20 percent of the nodes fail, and there is a 600-second wait to drain the node:

$ ansible-playbook -i <path/to/inventory/file> \
    </path/to/upgrade/playbook> \
    -e openshift_upgrade_nodes_serial=10 \
    -e openshift_upgrade_nodes_max_fail_percentage=20 \
    -e openshift_upgrade_nodes_drain_timeout=600

Customizing Upgrades With Ansible Hooks

When upgrading OKD, you can execute custom tasks during specific operations through a system called hooks. Hooks allow cluster administrators to provide files defining tasks to execute before and/or after specific areas during upgrades. This can be very helpful to validate or modify custom infrastructure when upgrading OKD.

It is important to remember that when a hook fails, the operation fails; after you fix the problem and re-run the upgrade, the hook runs again. A good hook can therefore run multiple times and provide the same results. A great hook is idempotent.

Limitations

  • Hooks have no defined or versioned interface. They can use internal openshift-ansible variables, but there is no guarantee these will remain in future releases. In the future, hooks may be versioned, giving you advance warning that your hook needs to be updated to work with the latest openshift-ansible.

  • Hooks have no error handling, so an error in a hook will halt the upgrade process. The problem will need to be addressed and the upgrade re-run.

Using Hooks

Hooks are defined in the hosts inventory file under the OSEv3:vars section.

Each hook must point to a YAML file that defines Ansible tasks. This file is used as an include, meaning that the file cannot be a playbook but must be a set of tasks. Best practice suggests using absolute paths to the hook file to avoid any ambiguity.

Example Hook Definitions in an Inventory File
[OSEv3:vars]
openshift_master_upgrade_pre_hook=/usr/share/custom/pre_master.yml
openshift_master_upgrade_hook=/usr/share/custom/master.yml
openshift_master_upgrade_post_hook=/usr/share/custom/post_master.yml
Example pre_master.yml Task
---
# Trivial example forcing an operator to ack the start of an upgrade
# file=/usr/share/custom/pre_master.yml

- name: note the start of a master upgrade
  debug:
      msg: "Master upgrade of {{ inventory_hostname }} is about to start"

- name: require an operator agree to start an upgrade
  pause:
      prompt: "Hit enter to start the master upgrade"

Available Upgrade Hooks

openshift_master_upgrade_pre_hook
  • Runs before each master is upgraded.

  • This hook runs against each master in serial.

  • If a task must run against a different host, said task must use delegate_to or local_action.

openshift_master_upgrade_hook
  • Runs after each master is upgraded, but before its service or system restart.

  • This hook runs against each master in serial.

  • If a task must run against a different host, said task must use delegate_to or local_action.

openshift_master_upgrade_post_hook
  • Runs after each master is upgraded and has had its service or system restart.

  • This hook runs against each master in serial.

  • If a task must run against a different host, said task must use delegate_to or local_action.
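
As a sketch of the delegate_to pattern described above, a post hook can verify from the Ansible control host that an upgraded master is answering again. The port and timeout values here are assumptions; adjust them for your environment.

Example Hypothetical post_master.yml Task
---
# file=/usr/share/custom/post_master.yml
# Wait for the upgraded master's API port to respond, checking from the
# Ansible control host rather than from the master itself.

- name: wait for the master API port to respond
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 8443
    timeout: 300
  delegate_to: localhost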

Upgrading to the Latest OKD 3.9 Release

To upgrade an existing OKD 3.7 or 3.9 cluster to the latest 3.9 release:

  1. Satisfy the steps in Preparing for an Automated Upgrade to ensure you are using the latest upgrade playbooks.

  2. Ensure the openshift_deployment_type parameter in your inventory file is set to origin.

  3. Starting with OKD 3.9, the OKD web console is deployed as a pod on masters during upgrade, and the openshift_web_console_prefix is introduced to deploy the web console with a customized image prefix. The template_service_broker_prefix is updated to match other components. If you use a customized docker-registry for your installation instead of registry.access.redhat.com, you must explicitly specify openshift_web_console_prefix and template_service_broker_prefix to point to the correct image prefix during upgrade:

    openshift_web_console_prefix=<registry_ip>:<port>/openshift3/ose-
    template_service_broker_prefix=<registry_ip>:<port>/openshift3/ose-
  4. If you want to enable rolling, full system restarts of the hosts, you can set the openshift_rolling_restart_mode parameter in your inventory file to system. Otherwise, the default value services performs rolling service restarts on HA masters, but does not reboot the systems. See Configuring Cluster Variables for details.
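
    For example, to opt in to full system reboots during the upgrade, set the following in your inventory file:

    [OSEv3:vars]
    openshift_rolling_restart_mode=system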

  5. At this point, you can choose to run the upgrade in a single phase or in multiple phases. See Upgrading the Control Plane and Nodes in Separate Phases for more details on which components are upgraded in each phase.

    If your inventory file is located somewhere other than the default /etc/ansible/hosts, add the -i flag to specify its location. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that was used, if needed.

    • Option A) Upgrade control plane and nodes in a single phase.

      Run the upgrade.yml playbook to upgrade the cluster in a single phase using one playbook; the control plane is still upgraded first, then nodes in-place:

      # ansible-playbook -i </path/to/inventory/file> \
          /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade.yml
    • Option B) Upgrade the control plane and nodes in separate phases.

      1. To upgrade only the control plane, run the upgrade_control_plane.yml playbook:

        # ansible-playbook -i </path/to/inventory/file> \
            /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_control_plane.yml
      2. To upgrade only the nodes, run the upgrade_nodes.yml playbook:

        # ansible-playbook -i </path/to/inventory/file> \
            /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml \
            [-e <customized_node_upgrade_variables>] (1)
        1 See Customizing Node Upgrades for any desired <customized_node_upgrade_variables>.

        If you are upgrading the nodes in groups as described in Customizing Node Upgrades, continue invoking the upgrade_nodes.yml playbook until all nodes have been successfully upgraded.

  6. After all master and node upgrades have completed, reboot all hosts. After rebooting, if there are no additional features enabled, you can verify the upgrade. Otherwise, the next step depends on what additional features you have previously enabled.

    Feature              Next Step

    Aggregated Logging   Upgrade the EFK logging stack.

    Cluster Metrics      Upgrade cluster metrics.

Updating Master and Node Certificates

The following steps may be required for any OpenShift cluster that was originally installed prior to the OpenShift Origin 1.0.8 release, even if the cluster has since been upgraded from that version.

Node Certificates

With the 1.0.8 release, certificates for each of the kubelet nodes were updated to include the IP address of the node. Any node certificates generated before the 1.0.8 release may not contain the IP address of the node.

If a node is missing the IP address as part of its certificate, clients may refuse to connect to the kubelet endpoint. Usually this will result in errors regarding the certificate not containing an IP SAN.

In order to remedy this situation, you may need to manually update the certificates for your node.

Checking the Node’s Certificate

The following command can be used to determine which Subject Alternative Names (SANs) are present in the node’s serving certificate. In this example, the Subject Alternative Names are mynode, mynode.mydomain.com, and 1.2.3.4:

# openssl x509 -in /etc/origin/node/server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
DNS:mynode, DNS:mynode.mydomain.com, IP Address:1.2.3.4

Ensure that the nodeIP value set in the /etc/origin/node/node-config.yaml file is present in the IP values from the Subject Alternative Names listed in the node’s serving certificate. If the nodeIP is not present, then it will need to be added to the node’s certificate.

If the nodeIP value is already contained within the Subject Alternative Names, then no further steps are required.

You will need to know the Subject Alternative Names and nodeIP value for the following steps.
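
For example, a quick way to read the configured value (assuming nodeIP is set explicitly in the file):

# grep nodeIP /etc/origin/node/node-config.yaml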

Generating a New Node Certificate

If your current node certificate does not contain the proper IP address, you must generate a new certificate for your node.

Node certificates will be regenerated on the master (or first master) and are then copied into place on node systems.

  1. Create a temporary directory in which to perform the following steps:

    # mkdir /tmp/node_certificate_update
    # cd /tmp/node_certificate_update
  2. Export the signing options:

    # export signing_opts="--signer-cert=/etc/origin/master/ca.crt \
        --signer-key=/etc/origin/master/ca.key \
        --signer-serial=/etc/origin/master/ca.serial.txt"
  3. Generate the new certificate:

    # oc adm ca create-server-cert --cert=server.crt \
      --key=server.key $signing_opts \
      --hostnames=<existing_SANs>,<nodeIP>

    For example, if the Subject Alternative Names from before were mynode, mynode.mydomain.com, and 1.2.3.4, and the nodeIP was 10.10.10.1, then you would need to run the following command:

    # oc adm ca create-server-cert --cert=server.crt \
      --key=server.key $signing_opts \
      --hostnames=mynode,mynode.mydomain.com,1.2.3.4,10.10.10.1
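
    Before installing the new certificate, you can re-run the openssl check from above against the freshly generated file to confirm that the node IP is now listed:

    # openssl x509 -in /tmp/node_certificate_update/server.crt -text -noout | grep -A 1 "Subject Alternative Name"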

Replace Node Serving Certificates

Back up the existing /etc/origin/node/server.crt and /etc/origin/node/server.key files for your node:

# mv /etc/origin/node/server.crt /etc/origin/node/server.crt.bak
# mv /etc/origin/node/server.key /etc/origin/node/server.key.bak

Now move the new server.crt and server.key files that you created in the temporary directory during the previous step into place:

# mv /tmp/node_certificate_update/server.crt /etc/origin/node/server.crt
# mv /tmp/node_certificate_update/server.key /etc/origin/node/server.key

After you have replaced the node’s certificate, restart the node service:

# systemctl restart origin-node

Master Certificates

With the 1.0.8 release, certificates for each of the masters were updated to include all names that pods may use to communicate with masters. Any master certificates generated before the 1.0.8 release may not contain these additional service names.

Checking the Master’s Certificate

The following command can be used to determine which Subject Alternative Names (SANs) are present in the master’s serving certificate. In this example, the Subject Alternative Names are mymaster, mymaster.mydomain.com, and 1.2.3.4:

# openssl x509 -in /etc/origin/master/master.server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
DNS:mymaster, DNS:mymaster.mydomain.com, IP Address:1.2.3.4

Ensure that the following entries are present in the Subject Alternative Names for the master’s serving certificate:

Entry                                               Example

Kubernetes service IP address                       172.30.0.1

All master host names                               master1.example.com

All master IP addresses                             192.168.122.1

Public master host name in clustered environments   public-master.example.com

kubernetes

kubernetes.default

kubernetes.default.svc

kubernetes.default.svc.cluster.local

openshift

openshift.default

openshift.default.svc

openshift.default.svc.cluster.local

If these names are already contained within the Subject Alternative Names, then no further steps are required.

Generating a New Master Certificate

If your current master certificate does not contain all names from the list above, then you must generate a new certificate for your master:

  1. Back up the existing /etc/origin/master/master.server.crt and /etc/origin/master/master.server.key files for your master:

    # mv /etc/origin/master/master.server.crt /etc/origin/master/master.server.crt.bak
    # mv /etc/origin/master/master.server.key /etc/origin/master/master.server.key.bak
  2. Export the service names. These names will be used when generating the new certificate:

    # export service_names="kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local"
  3. You will need the first IP in the services subnet (the kubernetes service IP) as well as the values of masterIP, masterURL, and masterPublicURL contained in the /etc/origin/master/master-config.yaml file for the following steps.

    The kubernetes service IP can be obtained with:

    # oc get svc/kubernetes --template='{{.spec.clusterIP}}'
  4. Generate the new certificate:

    # oc adm ca create-master-certs \
          --hostnames=<master_hostnames>,<master_IP_addresses>,<kubernetes_service_IP>,$service_names \ (1) (2) (3)
          --master=<internal_master_address> \ (4)
          --public-master=<public_master_address> \ (5)
          --cert-dir=/etc/origin/master/ \
          --overwrite=false
    1 Adjust <master_hostnames> to match your master host name. In a clustered environment, add all master host names.
    2 Adjust <master_IP_addresses> to match the value of masterIP. In a clustered environment, add all master IP addresses.
    3 Adjust <kubernetes_service_IP> to the first IP in the kubernetes services subnet.
    4 Adjust <internal_master_address> to match the value of masterURL.
    5 Adjust <public_master_address> to match the value of masterPublicURL.
  5. Restart master services. For single master deployments:

    # systemctl restart origin-master-api origin-master-controllers

    After the service restarts, the certificate update is complete.
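
    As with the node certificate, you can re-run the earlier openssl check to confirm that all required names are present in the new certificate:

    # openssl x509 -in /etc/origin/master/master.server.crt -text -noout | grep -A 1 "Subject Alternative Name"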

Upgrading the EFK Logging Stack

To upgrade an existing EFK logging stack deployment, you must use the provided /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml Ansible playbook. This is the playbook to use if you were deploying logging for the first time on an existing cluster, but is also used to upgrade existing logging deployments.

  1. If you have any Elasticsearch SearchGuard indices in the following naming format, you need to delete and reseed the indices because the naming format has changed. Elasticsearch might not work as expected unless you update the indices:

    .searchguard.logging-es-*
    1. Run the following command to delete the SearchGuard indices:

      # oc exec -c elasticsearch <pod> -- es_util --query=.searchguard* -XDELETE
    2. Run the following command to reseed the SearchGuard indices:

      # for pod in $(oc get pods -l component=es -o jsonpath={.items[*].metadata.name}); do oc exec -c elasticsearch $pod -- es_seed_acl; done
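
      To confirm the old indices were removed before continuing, you can list the remaining indices with the same es_util wrapper used above (assuming it accepts arbitrary query paths):

      # oc exec -c elasticsearch <pod> -- es_util --query=_cat/indices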
  2. If you have not already done so, see Specifying Logging Ansible Variables in the Aggregating Container Logs topic and update your Ansible inventory file to at least set the following required variable within the [OSEv3:vars] section:

    [OSEv3:vars]
    
    openshift_logging_install_logging=true (1)
    openshift_logging_image_version=<tag> (2)
    1 Enables the ability to upgrade the logging stack.
    2 Replace <tag> with v3.9.102 for the latest version.
  3. Add any other openshift_logging_* variables that you want to specify to override the defaults, as described in Specifying Logging Ansible Variables.

  4. When you have finished updating your inventory file, follow the instructions in Deploying the EFK Stack to run the openshift-logging/config.yml playbook and complete the logging deployment upgrade.

If your Fluentd DeploymentConfig and DaemonSet for the EFK components are already set with:

        image: <image_name>:<vX.Y>
        imagePullPolicy: IfNotPresent

The latest version of <image_name> might not be pulled if an image with the same <image_name>:<vX.Y> tag is already stored locally on the node where the pod is being redeployed. If so, manually change the DeploymentConfig and DaemonSet to imagePullPolicy: Always to make sure the image is re-pulled, as in the sketch below.
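
A minimal sketch of that change for the Fluentd DaemonSet, assuming the default object name logging-fluentd in the logging project (verify the object name and container index in your deployment, and apply the same change to the relevant DeploymentConfigs):

# oc -n logging patch daemonset/logging-fluentd --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'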

Upgrading Cluster Metrics

To upgrade an existing cluster metrics deployment, you must use the provided /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml Ansible playbook. This is the playbook to use if you were deploying metrics for the first time on an existing cluster, but is also used to upgrade existing metrics deployments.

  1. If you have not already done so, see Specifying Metrics Ansible Variables in the Enabling Cluster Metrics topic and update your Ansible inventory file to at least set the following required variables within the [OSEv3:vars] section:

    [OSEv3:vars]
    
    openshift_metrics_install_metrics=true (1)
    openshift_metrics_image_version=<tag> (2)
    openshift_metrics_hawkular_hostname=<fqdn> (3)
    openshift_metrics_cassandra_storage_type=(emptydir|pv|dynamic) (4)
    1 Enables the ability to upgrade the metrics deployment.
    2 Replace <tag> with v3.9.102 for the latest version.
    3 Used for the Hawkular Metrics route. Should correspond to a fully qualified domain name.
    4 Choose a type that is consistent with the previous deployment.
  2. Add any other openshift_metrics_* variables that you want to specify to override the defaults, as described in Specifying Metrics Ansible Variables.

  3. When you have finished updating your inventory file, follow the instructions in Deploying the Metrics Deployment to run the openshift-metrics/config.yml playbook and complete the metrics deployment upgrade.

Special Considerations for Large-scale Upgrades

For large-scale cluster upgrades, which involve at least 10 worker nodes and thousands of projects and pods, the API object storage migration should be performed prior to running the upgrade playbooks, and then again after the upgrade has successfully completed. Otherwise, the upgrade process will fail.

Refer to the Running the pre- and post- API server model object migration outside of the upgrade window section of the Recommendations for large-scale OpenShift upgrades for further guidance.

Special Considerations for Mixed Environments

Before you upgrade a mixed environment, such as one with Red Hat Enterprise Linux (RHEL) and RHEL Atomic Host, set values in the inventory file for both the openshift_pkg_version and openshift_image_tag parameters. Setting these values ensures that all nodes in your cluster run the same version of OKD.

For example, to upgrade from OKD 3.7 to OKD 3.9, set the following parameters and values:

openshift_pkg_version=-3.9.74
openshift_image_tag=v3.9.74

These parameters can also be present in other, non-mixed, environments.

Special Considerations When Using Containerized GlusterFS

When upgrading OKD, you must upgrade the set of nodes where GlusterFS pods are running.

Use care when upgrading these nodes, as drain and unschedule will not terminate and evacuate the GlusterFS pods, because they run as part of a DaemonSet.

There is also the potential for an upgrade to run on multiple nodes at the same time. If more than one of those nodes hosts GlusterFS pods, this can lead to data availability issues.

Even if a serial upgrade is running, there is no guarantee sufficient time will be given for GlusterFS to complete all of its healing operations before GlusterFS on the next node is terminated. This could leave the cluster in a bad or unknown state. Therefore, the following procedure is recommended.

  1. Upgrade the control plane (the master nodes and etcd nodes).

  2. Upgrade standard infra nodes (router, registry, logging, and metrics).

    If any of the nodes in those groups are running GlusterFS, perform step 4 of this procedure at the same time. GlusterFS nodes must be upgraded along with other nodes in their class (app versus infra), one at a time.

  3. Upgrade standard nodes running application containers.

    If any of the nodes in those groups are running GlusterFS, perform step 4 of this procedure at the same time. GlusterFS nodes must be upgraded along with other nodes in their class (app versus infra), one at a time.

  4. Upgrade the OKD nodes running GlusterFS one at a time.

    1. Run oc get daemonset to verify the label found under NODE-SELECTOR. The default value is storagenode=glusterfs.

    2. Add a label (for example, type=upgrade) to the node you want to upgrade.
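
      For example:

      $ oc label node <node_name> type=upgrade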

    3. Run the upgrade_nodes.yml playbook with -e openshift_upgrade_nodes_label="type=upgrade" so that only the labeled node is upgraded:

      # ansible-playbook -i </path/to/inventory/file> \
          /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml \
          [-e <customized_node_upgrade_variables>] (1)
      1 See Customizing Node Upgrades for any desired <customized_node_upgrade_variables>.
    4. Wait for the GlusterFS pod to respawn on the upgraded node.

    5. oc rsh into the pod and verify all volumes are healed:

      $ oc rsh <GlusterFS_pod_name>
      $ for vol in `gluster volume list`; do gluster volume heal $vol info; done

      Ensure all of the volumes are healed and there are no outstanding tasks. The heal info command lists all pending entries for a given volume’s heal process. A volume is considered healed when Number of entries for that volume is 0.

    6. Remove the upgrade label and go to the next GlusterFS node:

      $ oc label node <node_name> type-

Special Considerations When Using gcePD

Because the default gcePD storage provider uses an RWO (ReadWriteOnce) access mode, you cannot perform a rolling upgrade on the registry or scale the registry to multiple pods. Therefore, when upgrading OKD, you must specify the following environment variables in your Ansible inventory file:

[OSEv3:vars]

openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry

Verifying the Upgrade

Ensure that:

  • the cluster is healthy,

  • the master, node, and etcd services are running well,

  • the OKD, docker-registry, and router versions are correct,

  • the original applications are still available and new applications can be created, and

  • running oc adm diagnostics produces no errors.

To verify the upgrade:

  1. Check that all nodes are marked as Ready:

    # oc get nodes
    NAME                   STATUS    ROLES     AGE       VERSION
    master.example.com     Ready     master    7h        v1.9.1+a0ce1bc657
    node1.example.com      Ready     compute   7h        v1.9.1+a0ce1bc657
    node2.example.com      Ready     compute   7h        v1.9.1+a0ce1bc657
  2. Verify that you are running the expected versions of the docker-registry and router images, if deployed.

    # oc get -n default dc/docker-registry -o json | grep \"image\"
        "image": "openshift/origin-docker-registry:v1.0.6",
    # oc get -n default dc/router -o json | grep \"image\"
        "image": "openshift/origin-haproxy-router:v1.0.6",
  3. If you upgraded from Origin 1.0 to Origin 1.1, verify that any custom configuration in your old /etc/sysconfig/openshift-master and /etc/sysconfig/openshift-node files has been added to your new /etc/sysconfig/origin-master and /etc/sysconfig/origin-node files.

  4. Use the diagnostics tool on the master to look for common issues:

    # oc adm diagnostics
    ...
    [Note] Summary of diagnostics execution:
    [Note] Completed with no errors or warnings seen.