If you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the OpenShift cluster upgrade process. If you installed using the quick installation method and a ~/.config/openshift/installer.cfg.yml file is available, you can use the quick installer to perform the automated upgrade.
The automated upgrade performs the following steps for you:
Applies the latest configuration.
Upgrades master and etcd components and restarts services.
Upgrades node components and restarts services.
Applies the latest cluster policies.
Updates the default router if one exists.
Updates the default registry if one exists.
Updates default image streams and InstantApp templates.
Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.
Before upgrading your cluster to OpenShift Container Platform 3.7, the cluster must already be upgraded to the latest asynchronous release of version 3.6. Cluster upgrades cannot span more than one minor version at a time, so if your cluster is at a version earlier than 3.6, you must first upgrade incrementally (e.g., 3.4 to 3.5, then 3.5 to 3.6).
Before attempting the upgrade, follow the steps in Verifying the Upgrade to confirm the cluster's health. This will verify that nodes are in the Ready state and running the expected starting version, and will ensure that there are no diagnostic errors or warnings.
If you are completing a large-scale upgrade, which involves at least 10 worker nodes and thousands of projects and pods, review Special considerations for large-scale upgrades to prevent upgrade failures.
To prepare for an automated upgrade:
Pull the latest subscription data from RHSM:
# subscription-manager refresh
If you are upgrading from OpenShift Container Platform 3.6 to 3.7, manually disable the 3.6 channel and enable the 3.7 channel on each master and node host:
# subscription-manager repos --disable="rhel-7-server-ose-3.6-rpms" \
    --enable="rhel-7-server-ose-3.7-rpms" \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms"
# yum clean all
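To confirm the repository swap took effect, you can list the enabled repositories (a quick sanity check; output varies by system):
# yum repolist enabled | grep -E 'ose|rhel-7-server'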
For any upgrade path, always ensure that you have the latest version of the atomic-openshift-utils package on each RHEL 7 system, which also updates the openshift-ansible-* packages:
# yum update atomic-openshift-utils
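You can verify the resulting package versions before proceeding; for example:
# rpm -q atomic-openshift-utils
# rpm -qa 'openshift-ansible*'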
Before upgrading to OpenShift Container Platform 3.7, your cluster must use external etcd, not embedded etcd, and its data must use the etcd v3 data model:
Starting in OpenShift Container Platform 3.7, embedded etcd is no longer supported. If you have an OpenShift Container Platform 3.6 cluster that is using an embedded etcd, where etcd runs on your OpenShift Container Platform cluster, you must run a playbook to migrate it to external etcd. See Migrating embedded etcd to external etcd for steps.
If your cluster was initially installed using openshift-ansible version 3.6.173.0.21 or later, your etcd data is already using the v3 model. If it was upgraded from OpenShift Container Platform 3.5 to 3.6 before then, you must run a playbook to migrate the data from the v2 model to v3. See Migrating etcd Data (v2 to v3) for steps.
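If you are unsure which data model your etcd data is using, one rough spot-check (a sketch only; it assumes an etcd 3.x etcdctl binary and the default OpenShift Container Platform certificate paths on the etcd host) is to list keys through the v3 API, where migrated data appears under prefixes such as /openshift.io:
# ETCDCTL_API=3 etcdctl --cacert=/etc/etcd/ca.crt \
    --cert=/etc/etcd/peer.crt --key=/etc/etcd/peer.key \
    --endpoints=https://<etcd_host>:2379 \
    get /openshift.io --prefix --keys-only | head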
If you have applied manual configuration changes to your master or node configuration files since your last Ansible playbook run (whether that was initial installation or your most recent cluster upgrade), and you have not yet made the equivalent changes to your inventory file, review Configuring Ansible Inventory Files. For any variables that are relevant to the manual changes you made, apply the equivalent appropriate changes to your inventory files before running the upgrade. Otherwise, your manual changes may be overwritten by default values during the upgrade, which could cause pods not to run properly or could introduce other cluster stability issues.
In particular, if you made any changes to admissionConfig settings in your master configuration files, review the openshift_master_admission_plugin_config variable in Configuring Ansible Inventory Files. Failure to do so could cause pods to get stuck in Pending state if you had ClusterResourceOverride settings manually configured previously (as described in Configuring Masters for Overcommitment).
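For illustration, an inventory entry carrying ClusterResourceOverride settings might look like the following sketch (the percentages are example values only; match them to your existing master configuration):
openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}}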
After satisfying these steps, there are two methods for running the automated upgrade:
Using the quick installer
Running upgrade playbooks directly
Choose and follow one of these methods.
If you installed OpenShift Container Platform using the quick installation method, you should have an installation configuration file located at ~/.config/openshift/installer.cfg.yml. The quick installer requires this file to start an upgrade.
The quick installer supports upgrading between minor versions of OpenShift Container Platform (one minor version at a time, e.g., 3.5 to 3.6) as well as between asynchronous errata updates within a minor version (e.g., 3.6.z).
If you have an older format installation configuration file in ~/.config/openshift/installer.cfg.yml from an installation of a previous cluster version, the quick installer will attempt to upgrade the file to the new supported format. If you do not have an installation configuration file of any format, you can create one manually.
To start an upgrade with the quick installer:
Satisfy the steps in Preparing for an Automated Upgrade to ensure you are using the latest upgrade playbooks.
Run the quick installer with the upgrade subcommand:
# atomic-openshift-installer upgrade
Then, follow the on-screen instructions to upgrade to the latest release.
After all master and node upgrades have completed, a recommendation will be printed to reboot all hosts. After rebooting, if there are no additional features enabled, you can verify the upgrade. Otherwise, the next step depends on what additional features you have previously enabled.
Feature | Next Step
---|---
Service Catalog | Enable and configure the service catalog.
Aggregated Logging | Upgrade the EFK logging stack.
Cluster Metrics | Upgrade cluster metrics.
You can run automated upgrade playbooks using Ansible directly, similar to the advanced installation method, if you have an inventory file. Playbooks can be run using the ansible-playbook command.
The same v3_7 upgrade playbooks can be used for either of the following scenarios:
Upgrading existing OpenShift Container Platform 3.6 clusters to 3.7
Upgrading existing OpenShift Container Platform 3.7 clusters to the latest asynchronous errata updates
An OpenShift Container Platform cluster can be upgraded in one or more phases. You can choose whether to upgrade all hosts in one phase by running a single Ansible playbook, or upgrade the control plane (master components) and nodes in multiple phases using separate playbooks.
Instructions on the full upgrade process and when to call these playbooks are described in Upgrading to the Latest OpenShift Container Platform 3.7 Release.
If your OpenShift Container Platform cluster uses GlusterFS pods, you must perform the upgrade in multiple phases. See Special Considerations When Using Containerized GlusterFS for details on how to upgrade with GlusterFS.
When upgrading in separate phases, the control plane phase includes upgrading:
master components
node services running on masters
Docker running on masters
Docker running on any stand-alone etcd hosts
When upgrading only the nodes, the control plane must already be upgraded. The node phase includes upgrading:
node services running on stand-alone nodes
Docker running on stand-alone nodes
Nodes running master components are not included during the node upgrade phase, even though they have node services and Docker running on them. Instead, they are upgraded as part of the control plane upgrade phase. This ensures node services and Docker on masters are not upgraded twice (once during the control plane phase and again during the node phase).
Whether upgrading in a single or multiple phases, you can customize how the node portion of the upgrade progresses by passing certain Ansible variables to an upgrade playbook using the -e option.
Instructions on the full upgrade process and when to call these playbooks are described in Upgrading to the Latest OpenShift Container Platform 3.7 Release.
The openshift_upgrade_nodes_serial variable can be set to an integer or percentage to control how many node hosts are upgraded at the same time. The default is 1, upgrading nodes one at a time.
For example, to upgrade 20 percent of the total number of detected nodes at a time:
$ ansible-playbook -i <path/to/inventory/file> \
    </path/to/upgrade/playbook> \
    -e openshift_upgrade_nodes_serial="20%"
The openshift_upgrade_nodes_label variable allows you to specify that only nodes with a certain label are upgraded. This can also be combined with the openshift_upgrade_nodes_serial variable.
For example, to only upgrade nodes in the group1 region, two at a time:
$ ansible-playbook -i <path/to/inventory/file> \
    </path/to/upgrade/playbook> \
    -e openshift_upgrade_nodes_serial="2" \
    -e openshift_upgrade_nodes_label="region=group1"
See Managing Nodes for more on node labels.
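If your nodes do not yet carry a suitable label, you can apply one with oc label before starting the upgrade; for example, using the region=group1 label from above:
$ oc label node <node_name> region=group1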
When upgrading OpenShift Container Platform, you can execute custom tasks during specific operations through a system called hooks. Hooks allow cluster administrators to provide files defining tasks to execute before and/or after specific areas during installations and upgrades. This can be very helpful to validate or modify custom infrastructure when installing or upgrading OpenShift Container Platform.
It is important to remember that when a hook fails, the operation fails and the upgrade must be re-run. Because of this, a good hook can run multiple times and provide the same results; a great hook is idempotent.
Hooks have no defined or versioned interface. They can use internal openshift-ansible variables, but there is no guarantee these will remain in future releases. In the future, hooks may be versioned, giving you advance warning that your hook needs to be updated to work with the latest openshift-ansible.
Hooks have no error handling, so an error in a hook will halt the upgrade process. The problem will need to be addressed and the upgrade re-run.
Hooks are defined in the hosts inventory file under the OSev3:vars
section.
Each hook must point to a YAML file that defines Ansible tasks. This file is used as an include, meaning that the file cannot be a playbook, only a set of tasks. Best practice suggests using absolute paths to the hook file to avoid any ambiguity.
[OSev3:vars]
openshift_master_upgrade_pre_hook=/usr/share/custom/pre_master.yml
openshift_master_upgrade_hook=/usr/share/custom/master.yml
openshift_master_upgrade_post_hook=/usr/share/custom/post_master.yml
---
# Trivial example forcing an operator to ack the start of an upgrade
# file=/usr/share/custom/pre_master.yml
- name: note the start of a master upgrade
  debug:
    msg: "Master upgrade of {{ inventory_hostname }} is about to start"

- name: require an operator agree to start an upgrade
  pause:
    prompt: "Hit enter to start the master upgrade"
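As another illustration, a hypothetical post hook could wait for the master API to accept connections again after the service restart (this sketch assumes the default master API port of 8443; adjust to your configuration):
---
# file=/usr/share/custom/post_master.yml (hypothetical example)
- name: wait for the master API to accept connections again
  wait_for:
    port: 8443
    delay: 10
    timeout: 300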
openshift_master_upgrade_pre_hook
Runs before each master is upgraded. This hook runs against each master in serial. If a task must run against a different host, the task must use delegate_to or local_action.

openshift_master_upgrade_hook
Runs after each master is upgraded, but before its service or system restart. This hook runs against each master in serial. If a task must run against a different host, the task must use delegate_to or local_action.

openshift_master_upgrade_post_hook
Runs after each master is upgraded and has had its service or system restarted. This hook runs against each master in serial. If a task must run against a different host, the task must use delegate_to or local_action.
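For example, a hypothetical hook that must act on a different host could delegate its task as follows (the host name and log path here are placeholders, not part of any default configuration):
---
# file=/usr/share/custom/pre_master.yml (hypothetical example)
- name: record the start of this master's upgrade on a central host
  lineinfile:
    path: /var/log/upgrade-progress.log
    line: "Upgrade starting on {{ inventory_hostname }}"
    create: yes
  delegate_to: loghost.example.com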
To upgrade an existing OpenShift Container Platform 3.6 or 3.7 cluster to the latest 3.7 release:
Satisfy the steps in Preparing for an Automated Upgrade to ensure you are using the latest upgrade playbooks.
Ensure that the steps on etcd v2 to v3 migration are satisfied, which is a special requirement for the OpenShift Container Platform 3.6 to 3.7 upgrade.
Ensure the openshift_deployment_type parameter (formerly called deployment_type) in your inventory file is set to openshift-enterprise.
If you want to enable rolling, full system restarts of the hosts, you can set the openshift_rolling_restart_mode parameter in your inventory file to system. Otherwise, the default value services performs rolling service restarts on HA masters, but does not reboot the systems. See Configuring Cluster Variables for details.
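Taken together, a minimal sketch of the relevant inventory settings for this step might look like:
[OSev3:vars]
openshift_deployment_type=openshift-enterprise
openshift_rolling_restart_mode=system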
At this point, you can choose to run the upgrade in a single or multiple phases. See Upgrading the Control Plane and Nodes in Separate Phases for more details on which components are upgraded in each phase.
If your inventory file is located somewhere other than the default /etc/ansible/hosts, add the -i flag to specify its location. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that was used, if needed.
Option A) Upgrade control plane and nodes in a single phase.
Run the upgrade.yml playbook to upgrade the cluster in a single phase; the control plane is still upgraded first, then nodes are upgraded in place:
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade.yml
Option B) Upgrade the control plane and nodes in separate phases.
To upgrade only the control plane, run the upgrade_control_plane.yml playbook:
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade_control_plane.yml
To upgrade only the nodes, run the upgrade_nodes.yml playbook:
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade_nodes.yml \
    [-e <customized_node_upgrade_variables>] (1)
1 | See Customizing Node Upgrades for any desired <customized_node_upgrade_variables>. |
If you are upgrading the nodes in groups as described in Customizing Node Upgrades, continue invoking the upgrade_nodes.yml playbook until all nodes have been successfully upgraded.
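For example, a group-by-group node upgrade might run the playbook once per label (the region labels here are examples only; use whatever labels your nodes carry):
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade_nodes.yml \
    -e openshift_upgrade_nodes_label="region=infra"
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_7/upgrade_nodes.yml \
    -e openshift_upgrade_nodes_label="region=primary"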
After all master and node upgrades have completed, a recommendation will be printed to reboot all hosts. After rebooting, if there are no additional features enabled, you can verify the upgrade. Otherwise, the next step depends on what additional features you have previously enabled.
Feature | Next Step
---|---
Service Catalog | Enable and configure the service catalog.
Aggregated Logging | Upgrade the EFK logging stack.
Cluster Metrics | Upgrade cluster metrics.
Starting with OpenShift Container Platform 3.7, the service catalog, OpenShift Ansible broker, and template service broker are enabled and deployed by default for new cluster installations. However, they are not deployed by default during the upgrade from OpenShift Container Platform 3.6 to 3.7, so you must run an individual component playbook separately after the upgrade.
Upgrading from the OpenShift Container Platform 3.6 Technology Preview version of the service catalog and service brokers is not supported.
To upgrade to these features:
See the sections in the Advanced Installation topic on configuring the service catalog, the OpenShift Ansible broker, and the template service broker, and update your inventory file accordingly.
Run the following playbook:
# ansible-playbook -i </path/to/inventory/file> \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml
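After the playbook completes, you can spot-check that the new components are running (assuming the default kube-service-catalog project; adjust if you customized the namespace):
# oc get pods -n kube-service-catalog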
To upgrade an existing EFK logging stack deployment, you must use the provided /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml Ansible playbook. This is the playbook to use if you were deploying logging for the first time on an existing cluster, and it is also used to upgrade existing logging deployments.
If you have not already done so, see Specifying Logging Ansible Variables in the Aggregating Container Logs topic and update your Ansible inventory file to at least set the following required variable within the [OSev3:vars] section:
[OSev3:vars]
openshift_logging_install_logging=true (1)
openshift_logging_image_version=<tag> (2)
1 | Enables the ability to upgrade the logging stack. |
2 | Replace <tag> with v3.7.119 for the latest version. |
Add any other openshift_logging_* variables that you want to specify to override the defaults, as described in Specifying Logging Ansible Variables.
When you have finished updating your inventory file, follow the instructions in Deploying the EFK Stack to run the openshift-logging.yml playbook and complete the logging deployment upgrade.
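Once the playbook finishes, a quick way to confirm the logging pods redeployed (assuming the default logging project):
# oc get pods -n logging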
If your Fluentd DeploymentConfig and DaemonSet for the EFK components are already set with:

image: <image_name>:<vX.Y>
imagePullPolicy: IfNotPresent

the latest version <image_name> might not be pulled if there is already one with the same <image_name:vX.Y> stored locally on the node where the pod is being re-deployed. If so, manually change the DeploymentConfig and DaemonSet to imagePullPolicy: Always to make sure the image is re-pulled.
To upgrade an existing cluster metrics deployment, you must use the provided /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml Ansible playbook. This is the playbook to use if you were deploying metrics for the first time on an existing cluster, but is also used to upgrade existing metrics deployments.
If you have not already done so, see Specifying Metrics Ansible Variables in the Enabling Cluster Metrics topic and update your Ansible inventory file to at least set the following required variables within the [OSev3:vars] section:
[OSev3:vars]
openshift_metrics_install_metrics=true (1)
openshift_metrics_image_version=<tag> (2)
openshift_metrics_hawkular_hostname=<fqdn> (3)
openshift_metrics_cassandra_storage_type=(emptydir|pv|dynamic) (4)
1 | Enables the ability to upgrade the metrics deployment. |
2 | Replace <tag> with v3.7.119 for the latest version. |
3 | Used for the Hawkular Metrics route. Should correspond to a fully qualified domain name. |
4 | Choose a type that is consistent with the previous deployment. |
Add any other openshift_metrics_* variables that you want to specify to override the defaults, as described in Specifying Metrics Ansible Variables.
When you have finished updating your inventory file, follow the instructions in Deploying the Metrics Deployment to run the openshift-metrics.yml playbook and complete the metrics deployment upgrade.
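Once the playbook finishes, a quick way to confirm the metrics pods redeployed (assuming the default openshift-infra project):
# oc get pods -n openshift-infra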
For large-scale cluster upgrades, which involve at least 10 worker nodes and thousands of projects and pods, the API object storage migration should be performed prior to running the upgrade playbooks, and then again after the upgrade has successfully completed. Otherwise, the upgrade process will fail:

$ oc adm migrate storage --include=* --loglevel=2 --confirm --config /etc/origin/master/admin.kubeconfig

Refer to the Running the pre- and post- API server model object migration outside of the upgrade window section of Recommendations for large-scale OpenShift upgrades for further guidance.
Mixed environment upgrades (for example, those with Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host) require setting both openshift_pkg_version and openshift_image_tag. In mixed environments, if you only specify openshift_pkg_version, then that number is used for the packages for Red Hat Enterprise Linux and the image for Red Hat Enterprise Linux Atomic Host.
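A minimal sketch of the two settings together, using the v3.7.119 version referenced elsewhere in this topic as an example value:
[OSev3:vars]
openshift_pkg_version=-3.7.119
openshift_image_tag=v3.7.119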
When upgrading OpenShift Container Platform, you must upgrade the set of nodes where GlusterFS pods are running.
Special consideration must be taken when upgrading these nodes, as drain and unschedule will not terminate and evacuate the GlusterFS pods, because they are running as part of a daemonset.
There is also the potential for someone to run an upgrade on multiple nodes at the same time, which would lead to data availability issues if more than one was hosting GlusterFS pods.
Even if a serial upgrade is running, there is no guarantee sufficient time will be given for GlusterFS to complete all of its healing operations before GlusterFS on the next node is terminated. This could leave the cluster in a bad or unknown state. Therefore, the following procedure is recommended.
Upgrade the control plane (the master nodes and etcd nodes).
Upgrade standard infra nodes (router, registry, logging, and metrics).
If any of the nodes in those groups are running GlusterFS, perform step 4 of this procedure at the same time. GlusterFS nodes must be upgraded along with other nodes in their class (infra or app).
Upgrade standard nodes running application containers.
If any of the nodes in those groups are running GlusterFS, perform step 4 of this procedure at the same time. GlusterFS nodes must be upgraded along with other nodes in their class (infra or app).
Upgrade the OpenShift Container Platform nodes running GlusterFS one at a time.
Run oc get daemonset to verify the label found under NODE-SELECTOR. The default value is storagenode=glusterfs.
Remove the daemonset label from the node:
$ oc label node <node_name> <daemonset_label>-
This will cause the GlusterFS pod to terminate on that node.
Add an additional label (for example, type=upgrade) to the node you want to upgrade. To run the upgrade playbook on the single node where you terminated GlusterFS, use -e openshift_upgrade_nodes_label="type=upgrade".
When the upgrade completes, relabel the node with the daemonset selector:
$ oc label node <node_name> <daemonset_label>
Wait for the GlusterFS pod to respawn and appear.
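You can watch for the pod to return with a command like the following (this assumes the default glusterfs project name; adjust to your deployment):
$ oc get pods -n glusterfs -o wide -w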
oc rsh into the pod and verify all volumes are healed:
$ oc rsh <GlusterFS_pod_name>
$ for vol in `gluster volume list`; do gluster volume heal $vol info; done
Ensure all of the volumes are healed and there are no outstanding tasks. The heal info command lists all pending entries for a given volume's heal process. A volume is considered healed when Number of entries for that volume is 0.
Remove the upgrade label (for example, type=upgrade) and go to the next GlusterFS node.
Because the default gcePD storage provider uses an RWO (ReadWriteOnce) access mode, you cannot perform a rolling upgrade on the registry or scale the registry to multiple pods. Therefore, when upgrading OpenShift Container Platform, you must specify the following environment variables in your Ansible inventory file:
[OSev3:vars]
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
To verify the upgrade:
Check that all nodes are marked as Ready:
# oc get nodes
NAME                 STATUS                     AGE
master.example.com   Ready,SchedulingDisabled   165d
node1.example.com    Ready                      165d
node2.example.com    Ready                      165d
Verify that you are running the expected versions of the docker-registry and router images, if deployed. Replace <tag> with v3.7.119 for the latest version.
# oc get -n default dc/docker-registry -o json | grep \"image\"
"image": "openshift3/ose-docker-registry:<tag>",
# oc get -n default dc/router -o json | grep \"image\"
"image": "openshift3/ose-haproxy-router:<tag>",
Use the diagnostics tool on the master to look for common issues:
# oc adm diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.