Starting with OpenShift 3.0.2, if you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the OpenShift cluster upgrade process. If you installed using the quick installation method and a ~/.config/openshift/installer.cfg.yml file is available, you can use the installer to perform the automated upgrade.
The automated upgrade performs the following steps for you:
Applies the latest configuration.
Upgrades and restarts master services.
Upgrades and restarts node services.
Applies the latest cluster policies.
Updates the default router if one exists.
Updates the default registry if one exists.
Updates default image streams and InstantApp templates.
Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.
If you are upgrading from OpenShift Enterprise 3.0 to 3.1, on each master and node host you must manually disable the 3.0 channel and enable the 3.1 channel:
# subscription-manager repos --disable="rhel-7-server-ose-3.0-rpms" \
    --enable="rhel-7-server-ose-3.1-rpms" \
    --enable="rhel-7-server-rpms"
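To confirm the change on each host, you can list the currently enabled repositories; the rhel-7-server-ose-3.1-rpms repository should now appear and the 3.0 repository should not:
# yum repolist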
For any upgrade path, always ensure that you have the latest version of the atomic-openshift-utils package, which should also update the openshift-ansible-* packages:
# yum update atomic-openshift-utils
On each RHEL 7 system, install or update to the latest available *-excluder packages. These packages help ensure your systems stay on the correct versions of the atomic-openshift and docker packages for your OpenShift Enterprise version when you are not actively trying to upgrade:
# yum install atomic-openshift-excluder atomic-openshift-docker-excluder
These packages add entries to the exclude directive in the host’s /etc/yum.conf file.
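You can inspect these entries at any time; the exact package list is managed by the excluder scripts and varies by release:
# grep ^exclude /etc/yum.conf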
You must be logged in as a cluster administrative user on the master host for the upgrade to succeed:
$ oc login
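To check which user you are currently logged in as, you can run the following command; the system:admin user shown in the output is one example of a user with cluster administrative privileges:
$ oc whoami
system:admin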
There are two methods for running the automated upgrade: using the installer or running the upgrade playbook directly. Choose and follow one method.
If you installed OpenShift using the quick installation method, you should have an installation configuration file located at ~/.config/openshift/installer.cfg.yml. The installer requires this file to start an upgrade.
The installer currently only supports upgrading from OpenShift Enterprise 3.0 to 3.1. See Upgrading to OpenShift Enterprise 3.1 Asynchronous Releases for instructions on using Ansible directly.
If you have an older format installation configuration file in ~/.config/openshift/installer.cfg.yml from an existing OpenShift Enterprise 3.0 installation, the installer will attempt to upgrade the file to the new supported format. If you do not have an installation configuration file of any format, you can create one manually.
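The following is an illustrative sketch of what such a file can look like; the host names, IP addresses, and exact field names are examples only, so consult the quick installation documentation for the precise format used by your version:
version: v1
variant: openshift-enterprise
variant_version: "3.1"
ansible_ssh_user: root
hosts:
- ip: 192.168.122.10
  hostname: master.example.com
  public_ip: 10.0.0.10
  public_hostname: master.example.com
  master: true
  node: true
- ip: 192.168.122.11
  hostname: node1.example.com
  public_ip: 10.0.0.11
  public_hostname: node1.example.com
  node: true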
To start the upgrade, run the installer with the upgrade subcommand:
Satisfy the steps in Preparing for an Automated Upgrade to ensure you are using the latest upgrade playbooks.
Run the following command on each host to remove the atomic-openshift packages from the list of yum excludes on the host:
# atomic-openshift-excluder unexclude
Run the installer with the upgrade subcommand:
# atomic-openshift-installer upgrade
Follow the on-screen instructions to upgrade to the latest release.
After all master and node upgrades have completed, a recommendation will be printed to reboot all hosts. Before rebooting, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:
# atomic-openshift-excluder exclude
Then reboot all hosts.
After rebooting, continue to Updating Master and Node Certificates.
Alternatively, you can run the upgrade playbook with Ansible directly, similar to the advanced installation method, if you have an inventory file.
Before running the upgrade, first ensure the deployment_type parameter in your inventory file is set to openshift-enterprise. If you are upgrading from OpenShift Enterprise 3.0 to 3.1 and the parameter is currently set to enterprise, you must change it to openshift-enterprise before running the upgrade.
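For example, the relevant line in the [OSEv3:vars] section of the inventory file would read:
[OSEv3:vars]
deployment_type=openshift-enterprise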
If you have multiple masters configured and want to enable rolling, full system restarts of the hosts, you can set the openshift_rolling_restart_mode parameter in your inventory file to system. Otherwise, the default value services performs rolling service restarts on HA masters, but does not reboot the systems. See Configuring Cluster Variables for details.
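For example, to opt into full system restarts during the upgrade, add the following to the [OSEv3:vars] section of your inventory file:
openshift_rolling_restart_mode=system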
Then, run the v3_0_to_v3_1 upgrade playbook. If your inventory file is located somewhere other than the default /etc/ansible/hosts, add the -i flag to specify the location. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that was used, if needed.
# ansible-playbook [-i </path/to/inventory/file>] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_0_to_v3_1/upgrade.yml
When the upgrade finishes, a recommendation will be printed to reboot all hosts. After rebooting, continue to Updating Master and Node Certificates.
To apply asynchronous errata updates to an existing OpenShift Enterprise 3.1 cluster, first upgrade the atomic-openshift-utils package on the Red Hat Enterprise Linux 7 system where you will be running Ansible:
# yum update atomic-openshift-utils
Then, run the v3_1_minor upgrade playbook. If your inventory file is located somewhere other than the default /etc/ansible/hosts, add the -i flag to specify the location. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that was used, if needed.
# ansible-playbook [-i </path/to/inventory/file>] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_1_minor/upgrade.yml
When the upgrade finishes, a recommendation will be printed to reboot all hosts. After rebooting, continue to Verifying the Upgrade.
The following steps may be required for any OpenShift cluster that was originally installed prior to the OpenShift Enterprise 3.1 release, even if that cluster has since applied all available updates.
With the 3.1 release, certificates for each of the kubelet nodes were updated to include the IP address of the node. Any node certificates generated before the 3.1 release may not contain the IP address of the node.
If a node is missing the IP address as part of its certificate, clients may refuse to connect to the kubelet endpoint. Usually this results in errors regarding the certificate not containing an IP SAN.
To remedy this situation, you may need to manually update the certificates for your node.
The following command can be used to determine which Subject Alternative Names (SANs) are present in the node’s serving certificate. In this example, the Subject Alternative Names are mynode, mynode.mydomain.com, and 1.2.3.4:
# openssl x509 -in /etc/origin/node/server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
    DNS:mynode, DNS:mynode.mydomain.com, IP: 1.2.3.4
Ensure that the nodeIP value set in the /etc/origin/node/node-config.yaml file is present in the IP values from the Subject Alternative Names listed in the node’s serving certificate. If the nodeIP is not present, then it will need to be added to the node’s certificate.
If the nodeIP value is already contained within the Subject Alternative Names, then no further steps are required.
You will need to know the Subject Alternative Names and nodeIP value for the following steps.
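For example, you can read the configured nodeIP value directly from the node configuration file; the address shown is illustrative:
# grep nodeIP /etc/origin/node/node-config.yaml
nodeIP: 10.10.10.1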
If your current node certificate does not contain the proper IP address, then you must regenerate a new certificate for your node.
Node certificates will be regenerated on the master (or first master) and are then copied into place on node systems.
Create a temporary directory in which to perform the following steps:
# mkdir /tmp/node_certificate_update
# cd /tmp/node_certificate_update
Export the signing options:
# export signing_opts="--signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt"
Generate the new certificate:
# oadm ca create-server-cert --cert=server.crt \
    --key=server.key $signing_opts \
    --hostnames=<existing_SANs>,<nodeIP>
For example, if the Subject Alternative Names from before were mynode, mynode.mydomain.com, and 1.2.3.4, and the nodeIP was 10.10.10.1, then you would need to run the following command:
# oadm ca create-server-cert --cert=server.crt \
    --key=server.key $signing_opts \
    --hostnames=mynode,mynode.mydomain.com,1.2.3.4,10.10.10.1
Back up the existing /etc/origin/node/server.crt and /etc/origin/node/server.key files for your node:
# mv /etc/origin/node/server.crt /etc/origin/node/server.crt.bak
# mv /etc/origin/node/server.key /etc/origin/node/server.key.bak
Move the new server.crt and server.key created in the temporary directory during the previous step into place:
# mv /tmp/node_certificate_update/server.crt /etc/origin/node/server.crt
# mv /tmp/node_certificate_update/server.key /etc/origin/node/server.key
After you have replaced the node’s certificate, restart the node service:
# systemctl restart atomic-openshift-node
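To confirm the node service restarted cleanly before moving on, you can check its status:
# systemctl status atomic-openshift-node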
With the 3.1 release, certificates for each of the masters were updated to include all names that pods may use to communicate with masters. Any master certificates generated before the 3.1 release may not contain these additional service names.
The following command can be used to determine which Subject Alternative Names (SANs) are present in the master’s serving certificate. In this example, the Subject Alternative Names are mymaster, mymaster.mydomain.com, and 1.2.3.4:
# openssl x509 -in /etc/origin/master/master.server.crt -text -noout | grep -A 1 "Subject Alternative Name"
X509v3 Subject Alternative Name:
    DNS:mymaster, DNS:mymaster.mydomain.com, IP: 1.2.3.4
Ensure that the following entries are present in the Subject Alternative Names for the master’s serving certificate:
Entry | Example
---|---
Kubernetes service IP address | 172.30.0.1
All master host names | master1.example.com
All master IP addresses | 192.168.122.1
Public master host name in clustered environments | public-master.example.com
kubernetes |
kubernetes.default |
kubernetes.default.svc |
kubernetes.default.svc.cluster.local |
openshift |
openshift.default |
openshift.default.svc |
openshift.default.svc.cluster.local |
If these names are already contained within the Subject Alternative Names, then no further steps are required.
If your current master certificate does not contain all names from the list above, then you must generate a new certificate for your master:
Back up the existing /etc/origin/master/master.server.crt and /etc/origin/master/master.server.key files for your master:
# mv /etc/origin/master/master.server.crt /etc/origin/master/master.server.crt.bak
# mv /etc/origin/master/master.server.key /etc/origin/master/master.server.key.bak
Export the service names. These names will be used when generating the new certificate:
# export service_names="kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local"
You will need the first IP in the services subnet (the kubernetes service IP) as well as the values of masterIP, masterURL, and masterPublicURL contained in the /etc/origin/master/master-config.yaml file for the following steps.
The kubernetes service IP can be obtained with:
# oc get svc/kubernetes --template='{{.spec.clusterIP}}'
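Optionally, you can capture the value in a shell variable for reuse when building the certificate command in the next step; the variable name is arbitrary and the address shown is the example value from the table above:
# kubernetes_service_ip=$(oc get svc/kubernetes --template='{{.spec.clusterIP}}')
# echo $kubernetes_service_ip
172.30.0.1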
Generate the new certificate:
# oadm ca create-master-certs \
    --hostnames=<master_hostnames>,<master_IP_addresses>,<kubernetes_service_IP>,$service_names \ (1) (2) (3)
    --master=<internal_master_address> \ (4)
    --public-master=<public_master_address> \ (5)
    --cert-dir=/etc/origin/master/ \
    --overwrite=false
1. Adjust <master_hostnames> to match your master host name. In a clustered environment, add all master host names.
2. Adjust <master_IP_addresses> to match the value of masterIP. In a clustered environment, add all master IP addresses.
3. Adjust <kubernetes_service_IP> to the first IP in the kubernetes services subnet.
4. Adjust <internal_master_address> to match the value of masterURL.
5. Adjust <public_master_address> to match the value of masterPublicURL.
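For example, using the sample values from the table above and assuming a hypothetical internal master URL of https://master1.example.com:8443 and public master URL of https://public-master.example.com:8443, the command would look similar to:
# oadm ca create-master-certs \
    --hostnames=master1.example.com,192.168.122.1,172.30.0.1,$service_names \
    --master=https://master1.example.com:8443 \
    --public-master=https://public-master.example.com:8443 \
    --cert-dir=/etc/origin/master/ \
    --overwrite=false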
Restart master services. For single master deployments:
# systemctl restart atomic-openshift-master
For native HA multiple master deployments:
# systemctl restart atomic-openshift-master-api
# systemctl restart atomic-openshift-master-controllers
For Pacemaker HA multiple master deployments:
# pcs resource restart master
After the service restarts, the certificate update is complete.
If you have previously deployed the EFK logging stack and want to upgrade to the latest logging component images, the steps must be performed manually as shown in Manual upgrades.
To verify the upgrade, first check that all nodes are marked as Ready:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
Then, verify that you are running the expected versions of the docker-registry and router images, if deployed:
# oc get -n default dc/docker-registry -o json | grep \"image\"
    "image": "openshift3/ose-docker-registry:v3.1.1.11",
# oc get -n default dc/router -o json | grep \"image\"
    "image": "openshift3/ose-haproxy-router:v3.1.1.11",
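You can also confirm the installed package version on each host matches the expected release:
# rpm -q atomic-openshift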
If you upgraded from OSE 3.0 to OSE 3.1, verify that any custom configuration in your old /etc/sysconfig/openshift-master and /etc/sysconfig/openshift-node files has been added to your new /etc/sysconfig/atomic-openshift-master and /etc/sysconfig/atomic-openshift-node files.
After upgrading, you can use the experimental diagnostics tool to look for common issues:
# openshift ex diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.