Pacemaker to Native HA - Upgrading

Overview

If you are using the Pacemaker method for high availability (HA) masters, you can upgrade to the native HA method either using Ansible playbooks or manually. Both methods are described in the following sections.

Using Ansible Playbooks

These steps assume that the cluster has been upgraded to OpenShift Enterprise 3.1 using either the manual or automated method.

Playbooks used for the Pacemaker to native HA upgrade re-run cluster configuration steps; therefore, any settings that are not stored in your inventory file will be overwritten. Back up any configuration files that have been modified since installation before beginning this upgrade.
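For example, one simple way to take such a backup on each master is to archive the main configuration locations (the paths below are typical defaults; adjust them to match your environment):

# tar -czf /root/openshift-config-backup.tar.gz \
    /etc/origin /etc/sysconfig/atomic-openshift-master*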

Modifying the Ansible Inventory

Your original Ansible inventory file’s Pacemaker configuration contains a VIP and a cluster host name which should resolve to this VIP. Native HA requires a cluster host name which resolves to the load balancer being used.

Consider the following example configuration:

# Pacemaker high availability cluster method.
# Pacemaker HA environment must be able to self provision the
# configured VIP. For installation openshift_master_cluster_hostname
# must resolve to the configured VIP.
#openshift_master_cluster_method=pacemaker
#openshift_master_cluster_password=openshift_cluster
#openshift_master_cluster_vip=192.168.133.35
#openshift_master_cluster_public_vip=192.168.133.35
#openshift_master_cluster_hostname=openshift-cluster.example.com
#openshift_master_cluster_public_hostname=openshift-cluster.example.com

Remove or comment out the above section in your inventory file. Then, add the following section, modifying the host names to match your current cluster host names:

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

Native HA requires a load balancer to balance the master API (port 8443). When modifying your inventory file, specify an [lb] group and add lb to the [OSEv3:children] section if you would like the playbooks to configure an HAProxy instance as the load balancer. This instance must be on a separate host from the masters and nodes:

[OSEv3:children]
masters
nodes
lb
...
[lb]
lb1.example.com (1)
1 Host name of the HAProxy load balancer.

Any external load balancer may be used in place of the default HAProxy host, but it must be pre-configured and allow API traffic to masters on port 8443.
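Once your balancer is in place, you can sanity-check that it accepts API traffic on port 8443 before continuing. For example, substituting your balancer's host name for lb1.example.com (a 503 response at this stage simply means no healthy master backend is reachable yet):

# curl -kv https://lb1.example.com:8443/healthz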

Destroying the Pacemaker Cluster

On any master, run the following to destroy the Pacemaker cluster.

After the Pacemaker cluster has been destroyed, the OpenShift cluster will experience an outage until the remaining steps are completed.

# pcs cluster destroy --all
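You can confirm that the cluster configuration has been removed; pcs status should now fail with an error similar to the following:

# pcs status
Error: cluster is not currently running on this node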

Updating DNS

Modify your cluster host name DNS entries such that the host name used resolves to the load balancer that will be used with native HA.

In the earlier example configuration, openshift-cluster.example.com resolves to 192.168.133.35. DNS must be modified so that openshift-cluster.example.com now resolves to the HAProxy load balancer host, or to the master API balancer in whatever alternative load balancing solution you are using.
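You can verify the new record from any cluster host; for example with dig (the address shown is a hypothetical load balancer IP):

# dig +short openshift-cluster.example.com
192.168.133.40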

Running the Ansible Playbook

You can now run the following Ansible playbook:

Back up any configuration files that have been modified since installation before beginning this upgrade.

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
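If your modified inventory file is not in the default /etc/ansible/hosts location, pass it explicitly with the -i option; for example:

# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml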

After the playbook finishes successfully, your upgrade to the native HA method is complete. Restore any configuration files that you backed up because they had been modified since installation, and restart any relevant OpenShift services, if necessary.
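For example, if you restored a modified master configuration, restart the new native HA master services on each master:

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers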

Manually Upgrading

These steps assume that the cluster has been upgraded to OpenShift Enterprise 3.1 using either the manual or automated method. They also assume that you are bringing your own load balancer, which has been configured to balance the master API on port 8443.

Creating Unit and System Configuration for New Services

The Systemd unit files for the atomic-openshift-master-api and atomic-openshift-master-controllers services are not yet provided by packaging. Ansible creates unit and system configuration when installing with the native HA method.

Therefore, create the following files:

Example 1. /usr/lib/systemd/system/atomic-openshift-master-api.service File
[Unit]
Description=Atomic OpenShift Master API
Documentation=https://github.com/openshift/origin
After=network.target
After=etcd.service
Before=atomic-openshift-node.service
Requires=network.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/atomic-openshift-master-api
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/openshift start master api --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=131072
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=atomic-openshift-master-api

[Install]
WantedBy=multi-user.target
WantedBy=atomic-openshift-node.service
Example 2. /usr/lib/systemd/system/atomic-openshift-master-controllers.service File
[Unit]
Description=Atomic OpenShift Master Controllers
Documentation=https://github.com/openshift/origin
After=network.target
After=atomic-openshift-master-api.service
Before=atomic-openshift-node.service
Requires=network.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/atomic-openshift-master-controllers
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/openshift start master controllers --config=${CONFIG_FILE} $OPTIONS
LimitNOFILE=131072
LimitCORE=infinity
WorkingDirectory=/var/lib/origin/
SyslogIdentifier=atomic-openshift-master-controllers
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=atomic-openshift-node.service
Example 3. /etc/sysconfig/atomic-openshift-master-api File
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/master/master-config.yaml

# Proxy configuration
# Origin uses standard HTTP_PROXY environment variables. Be sure to set
# NO_PROXY for your master
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
Example 4. /etc/sysconfig/atomic-openshift-master-controllers File
OPTIONS=--loglevel=2
CONFIG_FILE=/etc/origin/master/master-config.yaml

# Proxy configuration
# Origin uses standard HTTP_PROXY environment variables. Be sure to set
# NO_PROXY for your master
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT

Then, reload Systemd to pick up your changes:

# systemctl daemon-reload

Destroying the Pacemaker Cluster

On any master, run the following to destroy the Pacemaker cluster.

After the Pacemaker cluster has been destroyed, the OpenShift cluster will experience an outage until the remaining steps are completed.

# pcs cluster destroy --all

Updating DNS

Modify your cluster host name DNS entries such that the host name used resolves to the load balancer that will be used with native HA.

For example, if the cluster host name is openshift-cluster.example.com and it resolved to a VIP of 192.168.133.35, then DNS must be modified such that openshift-cluster.example.com now resolves to the master API balancer.

Modifying Master and Node Configuration

Edit the master configuration in the /etc/origin/master/master-config.yaml file and ensure that kubernetesMasterConfig.masterCount is updated to the total number of masters. Perform this step on all masters:

Example 5. /etc/origin/master/master-config.yaml File
...
kubernetesMasterConfig:
  apiServerArguments:
  controllerArguments:
  masterCount: 3 (1)
...
1 Update this value to the total number of masters.
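A quick way to confirm the change on each master is to grep for the setting; the count shown assumes a three-master cluster:

# grep masterCount /etc/origin/master/master-config.yaml
  masterCount: 3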

Edit the node configuration in the /etc/origin/node/node-config.yaml file and remove the dnsIP setting. OpenShift will use the Kubernetes service IP as the dnsIP by default. Perform this step on all nodes:

Example 6. /etc/origin/node/node-config.yaml File
...
allowDisabledDocker: false
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 10.6.102.3 (1)
dockerConfig:
  execHandlerName: ""
...
1 Remove this line.
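If you prefer to script this change across many nodes, a sed one-liner can remove the line; this sketch assumes dnsIP appears exactly once at the top level of the file and keeps a .bak backup:

# sed -i.bak '/^dnsIP:/d' /etc/origin/node/node-config.yaml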

Starting the API service

Start and enable the API service on all masters:

# systemctl start atomic-openshift-master-api
# systemctl enable atomic-openshift-master-api
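Before starting the controllers, you can confirm that the API is healthy on each master; the healthz endpoint should return ok:

# systemctl status atomic-openshift-master-api
# curl -k https://localhost:8443/healthz
ok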

Starting the Controller service

Start and enable the controllers service on all masters:

# systemctl start atomic-openshift-master-controllers
# systemctl enable atomic-openshift-master-controllers

After the services start, your upgrade to the native HA method is complete.
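As a final check, you can confirm that the masters respond through the load balancer and that the cluster is reporting its nodes; for example, from a master with a cluster-admin kubeconfig:

# curl -k https://openshift-cluster.example.com:8443/healthz
ok
# oc get nodes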

Modifying the Ansible Inventory

Optionally, modify your Ansible inventory file for future Ansible runs, following the instructions in the Ansible playbooks method above.