When new versions of OpenShift are released, you can upgrade your cluster to apply the latest enhancements and bug fixes. See the OpenShift Enterprise 3.0 Release Notes to review the latest changes.
Unless noted otherwise, nodes and masters within a major version are forward and backward compatible, so upgrading your cluster should go smoothly. However, you should not run mismatched versions for longer than necessary to upgrade the entire cluster.
Starting with OpenShift 3.0.2, if you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the upgrade process. Alternatively, you can upgrade OpenShift manually.
This topic pertains to RPM-based installations only (i.e., the quick and advanced installation methods) and does not currently cover container-based installations.
Starting with OpenShift 3.0.2, if you installed using the advanced installation and the inventory file that was used is available, you can use the upgrade playbook to automate the upgrade process. This playbook performs the following steps for you:
Applies the latest configuration by re-running the installation playbook.
Upgrades and restarts master services.
Upgrades and restarts node services.
Applies the latest cluster policies.
Updates the default router if one exists.
Updates the default registry if one exists.
Updates default image streams and InstantApp templates.
The upgrade playbook re-runs cluster configuration steps, therefore any settings that are not stored in your inventory file will be overwritten. The playbook creates a backup of any files that are changed, and you should carefully review the differences after the playbook finishes to ensure that your environment is configured as expected.
Ensure that you have the latest openshift-ansible code checked out, then run the playbook using the default ansible hosts file located in /etc/ansible/hosts. If your hosts file is located somewhere else, add the -i flag to specify its location:
# cd ~/openshift-ansible
# git pull https://github.com/openshift/openshift-ansible master
# ansible-playbook [-i /path/to/hosts/file] playbooks/adhoc/upgrades/upgrade.yml
After the upgrade playbook finishes, verify that all nodes are marked as Ready and that you are running the expected versions of the docker-registry and router images:
# oc get nodes
NAME                 LABELS                                                                STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default  Ready
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east    Ready

# oc get -n default dc/router -o json | grep "image"
    "image": "openshift3/ose-haproxy-router:v3.0.2.0",

# oc get -n default dc/docker-registry -o json | grep "image"
    "image": "openshift3/ose-docker-registry:v3.0.2.0",
After upgrading, you can use the experimental diagnostics tool to look for common issues:
# openshift ex diagnostics
...
[Note] Summary of diagnostics execution:
[Note] Completed with no errors or warnings seen.
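Checking node status by eye gets tedious on larger clusters. The following is a minimal sketch of scripting the verification step: it parses captured `oc get nodes` output and reports any node whose STATUS column lacks Ready. The sample output embedded here is illustrative, not from a live cluster; in practice you would feed in the real command output (for example, via `subprocess`).

```python
# Illustrative capture of `oc get nodes` output (hypothetical hosts).
SAMPLE = """\
NAME                 LABELS                                                                STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default  Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east    Ready
"""

def unready_nodes(oc_get_nodes_output):
    """Return the names of nodes whose STATUS column does not include Ready."""
    bad = []
    for line in oc_get_nodes_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, status = fields[0], fields[-1]
        # STATUS may be a comma-separated list, e.g. Ready,SchedulingDisabled.
        if "Ready" not in status.split(","):
            bad.append(name)
    return bad

print(unready_nodes(SAMPLE))  # an empty list means every node is Ready
```

Note that a master showing Ready,SchedulingDisabled still counts as Ready here, matching the expected post-upgrade state.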
As an alternative to using the automated upgrade playbook, you can manually upgrade your OpenShift cluster. To manually upgrade without disruption, it is important to upgrade each component as documented in this topic. Before you begin your upgrade, familiarize yourself with the entire procedure. Specific releases may require additional steps to be performed at key points during the standard upgrade process.
Upgrade your masters first. On each master host, upgrade the openshift-master package:
# yum upgrade openshift-master
Then, restart the openshift-master service and review its logs to ensure services have been restarted successfully:
# systemctl restart openshift-master
# journalctl -r -u openshift-master
After a cluster upgrade, the recommended default cluster roles may have been updated. To check if an update is recommended for your environment, you can run:
# oadm policy reconcile-cluster-roles
This command outputs a list of roles that are out of date and their new proposed values. For example:
# oadm policy reconcile-cluster-roles
apiVersion: v1
items:
- apiVersion: v1
  kind: ClusterRole
  metadata:
    creationTimestamp: null
    name: admin
  rules:
  - attributeRestrictions: null
    resources:
    - builds/custom
...
Your output will vary based on the OpenShift version and any local customizations you have made. Review the proposed policy carefully.
You can either modify this output to re-apply any local policy changes you have made, or you can automatically apply the new policy by running:
# oadm policy reconcile-cluster-roles --confirm
After upgrading your masters, you can upgrade your nodes. When restarting the openshift-node service, there will be a brief disruption of outbound network connectivity from running pods to services while the service proxy is restarted. The length of this disruption should be very short and scales based on the number of services in the entire cluster.
On each node host, upgrade all openshift packages:
# yum upgrade openshift\*
Then, restart the openshift-node service:
# systemctl restart openshift-node
As a user with cluster-admin privileges, verify that all nodes are showing as Ready:
# oc get nodes
NAME                 LABELS                                        STATUS
master.example.com   kubernetes.io/hostname=master.example.com     Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com      Ready
node2.example.com    kubernetes.io/hostname=node2.example.com      Ready
If you have previously deployed a router, the router deployment configuration must be upgraded to apply updates contained in the router image. To upgrade your router without disrupting services, you must have previously deployed a highly-available routing service.
If you are upgrading to OpenShift Enterprise 3.0.1.0 or 3.0.2.0, first see the Additional Manual Instructions per Release section for important steps specific to your upgrade, then continue with the router upgrade as described in this section.
Edit your router’s deployment configuration. For example, if it has the default router name:
# oc edit dc/router
Apply the following changes:
...
spec:
  template:
    spec:
      containers:
      - env:
        ...
        image: registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0 (1)
        imagePullPolicy: IfNotPresent
        ...
(1) Adjust the image version to match the version you are upgrading to.
You should see one router pod updated and then the next.
The registry must also be upgraded for changes to take effect in the registry image. If you have used a PersistentVolumeClaim or a host mount point, you may restart the registry without losing the contents of your registry. The registry installation topic details how to configure persistent storage.
Edit your registry’s deployment configuration:
# oc edit dc/docker-registry
Apply the following changes:
...
spec:
  template:
    spec:
      containers:
      - env:
        ...
        image: registry.access.redhat.com/openshift3/ose-docker-registry:v3.0.2.0 (1)
        imagePullPolicy: IfNotPresent
        ...
(1) Adjust the image version to match the version you are upgrading to.
Images that are being pushed or pulled from the internal registry at the time of upgrade will fail and should be restarted automatically. This will not disrupt pods that are already running.
By default, the quick installation and advanced installation methods automatically create default image streams, QuickStart templates, and database service templates in the openshift project, which is a default project to which all users have view access. These objects were created during installation from the JSON files located under /usr/share/openshift/examples. Running the latest installer will copy newer files into place, but it does not currently update the openshift project.
You can update the openshift project by running the following commands. It is expected that you will receive warnings about items that already exist.
# oc create -n openshift -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json
# oc create -n openshift -f /usr/share/openshift/examples/db-templates
# oc create -n openshift -f /usr/share/openshift/examples/quickstart-templates
# oc create -n openshift -f /usr/share/openshift/examples/xpaas-streams
# oc create -n openshift -f /usr/share/openshift/examples/xpaas-templates
# oc replace -n openshift -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json
# oc replace -n openshift -f /usr/share/openshift/examples/db-templates
# oc replace -n openshift -f /usr/share/openshift/examples/quickstart-templates
# oc replace -n openshift -f /usr/share/openshift/examples/xpaas-streams
# oc replace -n openshift -f /usr/share/openshift/examples/xpaas-templates
After updating the default image streams, you may also want to ensure that the images within those streams are updated. For each image stream in the default openshift project, you can run:
# oc import-image -n openshift <imagestream>
For example, get the list of all image streams in the default openshift project:
# oc get is -n openshift
NAME      DOCKER REPO                                              TAGS                  UPDATED
mongodb   registry.access.redhat.com/openshift3/mongodb-24-rhel7   2.4,latest,v3.0.0.0   16 hours ago
mysql     registry.access.redhat.com/openshift3/mysql-55-rhel7     5.5,latest,v3.0.0.0   16 hours ago
nodejs    registry.access.redhat.com/openshift3/nodejs-010-rhel7   0.10,latest,v3.0.0.0  16 hours ago
...
Update each image stream one at a time:
# oc import-image -n openshift nodejs
Waiting for the import to complete, CTRL+C to stop waiting.
The import completed successfully.

Name:              nodejs
Created:           16 hours ago
Labels:            <none>
Annotations:       openshift.io/image.dockerRepositoryCheck=2015-07-21T13:17:00Z
Docker Pull Spec:  registry.access.redhat.com/openshift3/nodejs-010-rhel7

Tag       Spec      Created       PullSpec                                                         Image
0.10      latest    16 hours ago  registry.access.redhat.com/openshift3/nodejs-010-rhel7:latest    66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
latest              16 hours ago  registry.access.redhat.com/openshift3/nodejs-010-rhel7:latest    66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
v3.0.0.0  <pushed>  16 hours ago  registry.access.redhat.com/openshift3/nodejs-010-rhel7:v3.0.0.0  66d92cebc0e48e4e4be3a93d0f9bd54f21af7928ceaa384d20800f6e6fcf669f
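Rather than typing one `oc import-image` command per stream, the per-stream loop can be scripted. This is a minimal sketch that derives the command list from captured `oc get is -n openshift` output; the sample below mirrors the example output above and is illustrative only.

```python
# Illustrative capture of `oc get is -n openshift` output.
SAMPLE = """\
NAME      DOCKER REPO                                              TAGS                  UPDATED
mongodb   registry.access.redhat.com/openshift3/mongodb-24-rhel7   2.4,latest,v3.0.0.0   16 hours ago
mysql     registry.access.redhat.com/openshift3/mysql-55-rhel7     5.5,latest,v3.0.0.0   16 hours ago
nodejs    registry.access.redhat.com/openshift3/nodejs-010-rhel7   0.10,latest,v3.0.0.0  16 hours ago
"""

def import_commands(oc_get_is_output):
    """Build one `oc import-image` command per image stream name."""
    names = [line.split()[0] for line in oc_get_is_output.splitlines()[1:]]
    return ["oc import-image -n openshift %s" % name for name in names]

for cmd in import_commands(SAMPLE):
    print(cmd)
```

Running the generated commands one at a time, as recommended above, keeps the output of each import easy to review.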
To update your S2I-based applications, you must manually trigger a new build of those applications after importing the new images.
Some OpenShift releases may have additional instructions specific to that release that must be performed to fully apply the updates across the cluster. Read through the following sections carefully depending on your upgrade path, as you may be required to perform certain steps at key points during the standard upgrade process described earlier in this topic.
See the OpenShift Enterprise 3.0 Release Notes to review the latest release notes.
The following steps are required for the OpenShift Enterprise 3.0.1.0 release.
Creating a Service Account for the Router
The default HAProxy router was updated to utilize host ports and requires that a service account be created and made a member of the privileged security context constraint (SCC). Additionally, "down-then-up" rolling upgrades have been added and are now the preferred strategy for upgrading routers.
After upgrading your master and nodes but before updating to the newer router, you must create a service account for the router. As a cluster administrator, ensure you are operating on the default project:
# oc project default
Delete any existing router service account and create a new one:
# oc delete serviceaccount/router
serviceaccounts/router

# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
serviceaccounts/router
Edit the privileged SCC:
# oc edit scc privileged
Apply the following changes:
allowHostDirVolumePlugin: true
allowHostNetwork: true (1)
allowHostPorts: true (2)
allowPrivilegedContainer: true
...
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:router (3)
(1) Add or update allowHostNetwork: true.
(2) Add or update allowHostPorts: true.
(3) Add the service account you created to the users list at the end of the file.
Edit your router’s deployment configuration:
# oc edit dc/router
Apply the following changes:
...
spec:
  replicas: 2
  selector:
    router: router
  strategy:
    resources: {}
    rollingParams:
      intervalSeconds: 1
      timeoutSeconds: 120
      updatePeriodSeconds: 1
      updatePercent: -10 (1)
    type: Rolling
  ...
  template:
    ...
    spec:
      ...
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: router (2)
      serviceAccountName: router (3)
...
(1) Add updatePercent: -10 to allow down-then-up rolling upgrades.
(2) Add serviceAccount: router to the template spec.
(3) Add serviceAccountName: router to the template spec.
Now upgrade your router per the standard router upgrade steps.
The following steps are required for the OpenShift Enterprise 3.0.2.0 release.
Switching the Router to Use the Host Network Stack
The default HAProxy router was updated to use the host networking stack by default, instead of the former behavior of using the container networking stack, in which traffic was proxied to the router and then proxied again to the target service and container. The new default behavior benefits performance because network traffic from remote clients no longer needs to take multiple hops through user space in order to reach the target service and container.
Additionally, the new default behavior enables the router to get the actual source IP address of the remote connection. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
Existing router deployments will continue to use the container network stack unless modified to switch to using the host network stack.
To switch the router to use the host network stack, edit your router’s deployment configuration:
# oc edit dc/router
Apply the following changes:
...
spec:
  replicas: 2
  selector:
    router: router
  ...
  template:
    ...
    spec:
      ...
      ports:
      - containerPort: 80 (1)
        hostPort: 80
        protocol: TCP
      - containerPort: 443 (1)
        hostPort: 443
        protocol: TCP
      - containerPort: 1936 (1)
        hostPort: 1936
        name: stats
        protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      hostNetwork: true (2)
      restartPolicy: Always
...
(1) For host networking, ensure that the containerPort value matches the hostPort value for each of the ports.
(2) Add hostNetwork: true to the template spec.
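A mismatched containerPort/hostPort pair is easy to overlook when editing the deployment configuration by hand. As a minimal sketch, the following checks a port list (mirroring the example configuration above) for entries whose two values differ; the port data is inlined here for illustration rather than read from the live object.

```python
# Port entries mirroring the router example above.
ports = [
    {"containerPort": 80, "hostPort": 80, "protocol": "TCP"},
    {"containerPort": 443, "hostPort": 443, "protocol": "TCP"},
    {"containerPort": 1936, "hostPort": 1936, "name": "stats", "protocol": "TCP"},
]

def mismatched_ports(port_entries):
    """Return entries whose containerPort and hostPort values differ."""
    return [p for p in port_entries
            if p.get("hostPort") is not None
            and p["containerPort"] != p["hostPort"]]

print(mismatched_ports(ports))  # an empty list means the spec is consistent
```

With hostNetwork: true, the container shares the host's network namespace, which is why the two port values must agree for each entry.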
Now upgrade your router per the standard router upgrade steps.
Configuring serviceNetworkCIDR for the SDN
Add the serviceNetworkCIDR parameter to the networkConfig section in /etc/openshift/master/master-config.yaml. This value should match the servicesSubnet value in the kubernetesMasterConfig section:
kubernetesMasterConfig:
  servicesSubnet: 172.30.0.0/16
...
networkConfig:
  serviceNetworkCIDR: 172.30.0.0/16
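If you want to double-check that the two values agree after editing, a minimal sketch follows. It uses simple line matching rather than a YAML parser so it stays dependency-free, and the embedded fragment mirrors the example above rather than reading the real master-config.yaml.

```python
# Fragment mirroring /etc/openshift/master/master-config.yaml (illustrative).
CONFIG = """\
kubernetesMasterConfig:
  servicesSubnet: 172.30.0.0/16
networkConfig:
  serviceNetworkCIDR: 172.30.0.0/16
"""

def value_of(key, text):
    """Return the value of the first `key: value` line found in text."""
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    return None

subnet = value_of("servicesSubnet", CONFIG)
cidr = value_of("serviceNetworkCIDR", CONFIG)
print(subnet == cidr)  # True when the two values agree
```

For a real configuration file, a proper YAML parser would be more robust, but the invariant being checked is the same: serviceNetworkCIDR must equal servicesSubnet.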
Adding the Scheduler Configuration API Version
The scheduler configuration file incorrectly lacked the kind and apiVersion fields when deployed using the quick or advanced installation methods. This will affect future upgrades, so it is important to add those values if they do not exist.
Modify the /etc/openshift/master/scheduler.json file to add the kind and apiVersion fields:
{
  "kind": "Policy", (1)
  "apiVersion": "v1", (2)
  "predicates": [
  ...
}
(1) Add "kind": "Policy",
(2) Add "apiVersion": "v1",
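Because scheduler.json is plain JSON, the fix can also be applied programmatically. This is a minimal sketch that inserts the two fields only when they are absent; the policy document below is a hypothetical stand-in for the real file, which you would load with json.load and write back after checking.

```python
import json

# Hypothetical stand-in for /etc/openshift/master/scheduler.json.
policy = {
    "predicates": [{"name": "PodFitsResources"}],
}

def add_type_fields(doc):
    """Insert kind/apiVersion defaults when the document lacks them."""
    doc.setdefault("kind", "Policy")
    doc.setdefault("apiVersion", "v1")
    return doc

fixed = add_type_fields(dict(policy))
print(json.dumps(fixed, indent=2))
```

Using setdefault means a file that already carries the fields is left untouched, so the operation is safe to repeat.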