# Adding hosts to an existing cluster
You can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.
The scaleup.yml playbook configures only the new host. It does not update NO_PROXY in master services, and it does not restart master services.
You must have an existing inventory file, for example /etc/ansible/hosts, that is representative of your current cluster configuration in order to run the scaleup.yml playbook.
If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that the installer generated and use that file as your inventory file. You can modify this file as required. You must then specify the file location with -i when you run ansible-playbook.
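For example, a node scaleup run that points at the installer-generated inventory might look like the following, run from the playbook directory described later in this topic. The inventory path is only an illustration; substitute the path to your own file:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i ~/.config/openshift/hosts \
  playbooks/openshift-node/scaleup.yml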
See the cluster maximums section for the recommended maximum number of nodes.
Ensure you have the latest playbooks by updating the openshift-ansible package:
# yum update openshift-ansible
Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section. For example, to add a new node host, add new_nodes:
[OSEv3:children]
masters
nodes
new_nodes
To add new master hosts, add new_masters.
Create a [new_<host_type>] section to specify host information for the new hosts. Format this section like an existing section, as shown in the following example of adding a new node:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
node3.example.com openshift_node_group_name='node-config-infra'
See Configuring Host Variables for more options.
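As an illustration only, a [new_nodes] entry can carry additional host variables alongside openshift_node_group_name; the host name and values below are hypothetical:
[new_nodes]
node3.example.com openshift_node_group_name='node-config-compute' openshift_ip=192.168.0.43 openshift_public_hostname=node3.public.example.com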
When adding new masters, add hosts to both the [new_masters] section and the [new_nodes] section to ensure that the new master host is part of the OpenShift SDN:
[masters]
master[1:2].example.com

[new_masters]
master3.example.com

[nodes]
master[1:2].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
master3.example.com
If you label a master host with the node-role.kubernetes.io/infra=true label and have no other dedicated infrastructure nodes, you must also explicitly mark the host as schedulable by adding openshift_schedulable=true to the entry. Otherwise, the registry and router pods cannot be placed anywhere.
Change to the playbook directory and run the openshift_node_group.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-master/openshift_node_group.yml
This creates the configmap for the new node groups, and ultimately, the configuration file of the node on the host.
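To confirm that the configmaps exist, you can, for example, list them in the openshift-node project; the exact group names in the output depend on your inventory:
$ oc get configmaps -n openshift-node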
Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster.
Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
For additional nodes:
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-node/scaleup.yml
For additional masters:
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-master/scaleup.yml
Set the node label to logging-infra-fluentd=true if you deployed the EFK stack in your cluster:
# oc label node/new-node.example.com logging-infra-fluentd=true
After the playbook runs, verify the installation.
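One quick check, for example, is to confirm that the new hosts are listed and report a Ready status:
# oc get nodes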
Move any hosts that you defined in the [new_<host_type>] section to their appropriate section. By moving these hosts, subsequent playbook runs that use this inventory file treat the nodes correctly. You can keep the empty [new_<host_type>] section. For example, when adding new nodes:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
node3.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
You can add new etcd hosts to your cluster by running the etcd scaleup playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on the new hosts only. Before running the etcd scaleup.yml playbook, complete all prerequisite host preparation steps.
These steps synchronize the settings in the Ansible inventory with the cluster. Ensure that any local changes are reflected in the Ansible inventory.
To add an etcd host to an existing cluster:
Ensure you have the latest playbooks by updating the openshift-ansible package:
# yum update openshift-ansible
Edit your /etc/ansible/hosts file, add new_<host_type> to the [OSEv3:children] group and add hosts under the new_<host_type> group. For example, to add a new etcd, add new_etcd:
[OSEv3:children]
masters
nodes
etcd
new_etcd

[etcd]
etcd1.example.com
etcd2.example.com

[new_etcd]
etcd3.example.com
Change to the playbook directory and run the openshift_node_group.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/file] \
  playbooks/openshift-master/openshift_node_group.yml
This creates the configmap for the new node groups, and ultimately, the configuration file of the node on the host.
Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster.
Run the etcd scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option:
$ ansible-playbook [-i /path/to/file] \
  playbooks/openshift-etcd/scaleup.yml
After the playbook completes successfully, verify the installation.
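For example, you can list the cluster members from one of the etcd hosts with the same member list command shown later in this topic; the new host should appear in the output:
# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
  --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list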
Follow these steps when you are migrating your machines to a different data center and the network and IP addresses assigned to them will change.
Back up the primary etcd and master nodes.
Ensure that you back up the /etc/etcd/ directory, as noted in the etcd backup instructions.
Provision as many new machines as there are masters to replace.
Add or expand the cluster. For example, if you want to add 3 masters with etcd colocated, scale up 3 master nodes.
In the initial release of OpenShift Container Platform version 3.11, the scaleup.yml playbook does not scale up etcd. This is fixed in OpenShift Container Platform 3.11.59 and later.
Add a master. In step 3 of that process, add the host of the new data center in [new_masters] and [new_nodes], run the openshift_node_group.yml playbook, and run the master scaleup.yml playbook.
Put the same host in the etcd section, run the openshift_node_group.yml playbook, and run the etcd scaleup.yml playbook.
Verify that the host was added:
# oc get nodes
Verify that the master host IP was added:
# oc get ep kubernetes
Verify that etcd was added. The value of ETCDCTL_API depends on the version being used:
# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
  --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
Copy /etc/origin/master/ca.serial.txt from the /etc/origin/master directory to the new master host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.
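For example, assuming SSH access as root and a placeholder hostname for the new master, the copy can be done with scp:
# scp /etc/origin/master/ca.serial.txt new-master1.example.com:/etc/origin/master/ca.serial.txt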
Remove the etcd hosts.
Copy the /etc/etcd/ca directory to the new etcd host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.
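For example, again with a placeholder hostname for the new etcd host:
# scp -r /etc/etcd/ca new-etcd1.example.com:/etc/etcd/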
Remove the old etcd clients from the master-config.yaml file:
# grep etcdClientInfo -A 11 /etc/origin/master/master-config.yaml
Restart the masters:
# master-restart api
# master-restart controllers
Remove the old etcd members from the cluster. The value of ETCDCTL_API depends on the version being used:
# source /etc/etcd/etcd.conf
# ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
  --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
Take the IDs from the output of the command above and remove the old members using the IDs:
# etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
  --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member remove 1609b5a3a078c227
Stop the etcd services on the old etcd hosts by removing the etcd pod definition:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
Shut down the old master API and controller services by moving their definition files out of the static pod directory, /etc/origin/node/pods:
# mkdir -p /etc/origin/node/pods/disabled
# mv /etc/origin/node/pods/controller.yaml /etc/origin/node/pods/disabled/
Remove the master nodes from the HA proxy configuration, which was installed as a load balancer by default during the native installation process.
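As a sketch, assuming the default HAProxy configuration path and that the API backend is named atomic-openshift-api (the backend name varies by installation), remove the server entries for the old masters and restart the service:
# vi /etc/haproxy/haproxy.cfg    # delete the 'server' lines for the removed masters under the API backend
# systemctl restart haproxy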
Decommission the machine.
Stop the node service on the master to be removed by removing the pod definition and rebooting the host:
# mkdir -p /etc/origin/node/pods-stopped
# mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
# reboot
Delete the node resource:
# oc delete node
You can migrate nodes individually or in groups (of 2, 5, 10, and so on), depending on what you are comfortable with and how the services on the node are run and scaled.
For the migration node or nodes, provision new VMs for the node’s use in the new data center.
To add the new node, scale up the infrastructure. Ensure the labels for the new node are set properly and that your new API servers are added to your load balancer and successfully serving traffic.
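For example, you can review the labels on the new nodes before sending traffic to them; the label names in your output depend on your node groups:
# oc get nodes --show-labels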
Evaluate and scale down.
Mark the current node (in the old data center) unschedulable (see the command sketch after these steps).
Evacuate the node, so that pods on it are scheduled to other nodes.
Verify that the evacuated services are running on the new nodes.
Remove the node.
Verify that the node is empty and does not have running processes.
Stop the service or delete the node.
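As a sketch, the mark-unschedulable, evacuate, and delete steps above map to commands such as the following; the node name is a placeholder:
# oc adm manage-node node1.olddc.example.com --schedulable=false
# oc adm drain node1.olddc.example.com --ignore-daemonsets --delete-local-data
# oc delete node node1.olddc.example.com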