For production environments, a reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing OpenShift hosts. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
While RHEL Atomic Host is supported for running containerized OpenShift services, the advanced installation method uses Ansible, which is not available in RHEL Atomic Host, so the installation must be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift cluster, but it can be.
Alternatively, you can use the quick installation method if you prefer an interactive installation experience.
Before installing OpenShift, you must first see the Prerequisites topic to prepare your hosts, which includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 1.8.4 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
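If the RHEL 7 host that will run the installation is registered to the required repositories, installing Ansible and the openshift-ansible playbooks is typically a single yum transaction; a minimal sketch (the atomic-openshift-utils package, referenced later in this topic, pulls in the playbooks installed under /usr/share/ansible/openshift-ansible):

# yum install ansible atomic-openshift-utils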
If you are interested in installing OpenShift using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible.
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook to use during the installation. The inventory file describes the configuration for your OpenShift cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation. The examples describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
| Variable | Purpose |
| --- | --- |
| openshift_hostname | This variable overrides the internal cluster host name for the system. Use this when the system's default IP address does not resolve to the system host name. |
| openshift_public_hostname | This variable overrides the system's public host name. Use this for cloud installations, or for hosts on networks using network address translation (NAT). |
| openshift_ip | This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. |
| openshift_public_ip | This variable overrides the system's public IP address. Use this for cloud installations, or for hosts on networks using network address translation (NAT). |
| containerized | If set to true, containerized OpenShift services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and it is automatically selected for you based on the detection of the /run/ostree-booted file. See RPM vs Containerized for more details. Containerized installations are supported starting in OSE 3.1.1. |
| openshift_node_labels | This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
| openshift_node_kubelet_args | This variable is used to configure kubelet arguments on nodes. |
| openshift_docker_options | This variable configures additional Docker options within /etc/sysconfig/docker, such as the options used in Managing Docker Container Logs. Example usage: "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3". |
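For example, the openshift_docker_options host variable described above can be set on a specific node entry to adjust Docker log rotation for that host (the host name and option values below simply reuse the example usage from the table):

[nodes]
node1.example.com openshift_docker_options="--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"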
To assign environment variables during the Ansible install that apply more globally to your OpenShift cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_default_subdomain=apps.test.example.com
The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:
| Variable | Purpose |
| --- | --- |
| ansible_ssh_user | This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, the key should be managed by an SSH agent. |
| ansible_sudo | If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo. |
| containerized | If set to true, containerized OpenShift services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and it is automatically selected for you based on the detection of the /run/ostree-booted file. See RPM vs Containerized for more details. Containerized installations are supported starting in OSE 3.1.1. |
| openshift_master_cluster_hostname | This variable overrides the host name for the cluster, which defaults to the host name of the master. |
| openshift_master_cluster_public_hostname | This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com |
| openshift_master_cluster_method | Optional. This variable defines the HA method when deploying multiple masters. Can be either native or pacemaker. |
| openshift_master_cluster_password, openshift_master_cluster_vip, openshift_master_cluster_public_vip | These variables are only required when using the pacemaker HA method. For example usage, see the Multiple Masters Using Pacemaker inventory file. |
| openshift_rolling_restart_mode | This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services. |
| os_sdn_network_plugin_name | This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant plug-in. |
| openshift_master_identity_providers | This variable overrides the identity provider, which defaults to Deny All. |
| openshift_master_named_certificates, openshift_master_overwrite_named_certificates | These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
| openshift_master_session_name, openshift_master_session_max_seconds, openshift_master_session_auth_secrets, openshift_master_session_encryption_secrets | These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
| openshift_master_portal_net | This variable configures the subnet in which services will be created within the OpenShift Enterprise SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network. |
| openshift_router_selector | Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
| openshift_registry_selector | Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
| osm_default_subdomain | This variable overrides the default subdomain to use for exposed routes. |
| osm_default_node_selector | This variable overrides the node selector that projects will use by default when placing pods. |
| osm_cluster_network_cidr | This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to 10.1.0.0/16 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration. |
| osm_host_subnet_length | This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift SDN. Defaults to 8, which means that a subnet of size /24 is allocated to each host; for example, given the default 10.1.0.0/16 cluster network, this allocates 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, and so on. This cannot be re-configured after deployment. |
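To illustrate how several of these cluster-wide variables fit together, the following [OSEv3:vars] fragment is a minimal sketch (the host values and selector are examples only, not recommendations) that selects the multitenant SDN plug-in, sets the default route subdomain, and sets a default node selector for project pods:

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
osm_default_subdomain=apps.test.example.com
osm_default_node_selector='region=primary'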
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra (discussed below), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.
To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the desired node host entry in the [nodes] section. In the following example, labels are set for a region called primary and a zone called east:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
The openshift_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The default router and registry will be automatically deployed if nodes exist that match the selector settings above. For example:
[nodes]
node1.example.com openshift_node_labels="{'region':'infra','zone':'default'}"
Any hosts you designate as masters during the installation process should also be configured as nodes by adding them to the [nodes] section so that the masters are configured as part of the OpenShift SDN.
However, in order to ensure that your masters are not burdened with running pods, you can make them unschedulable by adding the openshift_schedulable=false option to any node that is also a master. For example:
[nodes]
master.example.com openshift_node_labels="{'region':'infra','zone':'default'}" openshift_schedulable=false
Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.
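For reference, the generated file follows the SessionSecrets format; a sketch of its structure (the secret values shown are placeholders only):

apiVersion: v1
kind: SessionSecrets
secrets:
- authentication: "<generated_auth_secret>"
  encryption: "<generated_encryption_secret>"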
You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length. For openshift_master_session_auth_secrets, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_session_encryption_secrets, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
Custom serving certificates for the public host names of the OpenShift API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Custom certificates should only be configured for the host name associated with the publicMasterURL, which is set by the openshift_master_cluster_public_hostname variable.
Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate's Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.
If you would like openshift_master_named_certificates to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true
You can configure an environment with a single master and multiple nodes, and either a single embedded etcd or multiple external etcd hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
Single Master and Multiple Nodes
The following table describes an example environment for a single master (with embedded etcd) and two nodes:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master.example.com | Master and node |
| node1.example.com | Node |
| node2.example.com | Node |
You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=true

deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
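Before running the installation playbook, you can optionally confirm that Ansible can reach every host in the inventory using Ansible's ping module (this assumes SSH access is already configured as described in the Prerequisites topic):

# ansible OSEv3 -m ping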
Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master.example.com | Master and node |
| etcd1.example.com | etcd |
| etcd2.example.com | etcd |
| etcd3.example.com | etcd |
| node1.example.com | Node |
| node2.example.com | Node |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the advanced installation supports two high availability (HA) methods:
| Method | Description |
| --- | --- |
| native | Leverages the native HA master capabilities built into OpenShift and can be combined with any load balancing solution. If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured a load balancing solution of your choice to balance the master API (port 8443) on all master hosts. |
| pacemaker | Configures Pacemaker as the load balancer for multiple masters. Requires a High Availability Add-on for Red Hat Enterprise Linux subscription, which is provided separately from the OpenShift Enterprise subscription. |
For more on these methods and the high availability master architecture, see Kubernetes Infrastructure.
To configure multiple masters, choose one of the above HA methods, and refer to the relevant example section that follows.
Multiple Masters Using Native HA
The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master1.example.com | Master (clustered using native HA) and node |
| master2.example.com | Master (clustered using native HA) and node |
| master3.example.com | Master (clustered using native HA) and node |
| lb.example.com | HAProxy to load balance API master endpoints |
| etcd1.example.com | etcd |
| etcd2.example.com | etcd |
| etcd3.example.com | etcd |
| node1.example.com | Node |
| node2.example.com | Node |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Note the following when using the native HA method:

The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments, or use the pacemaker method if you require this capability.
In an HAProxy setup, controller manager servers run as standalone processes. They elect their active leader with a lease stored in etcd. The lease expires after 30 seconds by default. If a failure happens on an active controller server, it will take up to this number of seconds to elect another leader. The interval can be configured with the osm_controller_lease_ttl variable.
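For example, to shorten the potential failover window, you could add a lower lease value to the [OSEv3:vars] section (the value shown is illustrative; the default is 30 seconds):

osm_controller_lease_ttl=15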
Multiple Masters Using Pacemaker
The following describes an example environment for three masters, three etcd hosts, and two nodes using the pacemaker HA method:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master1.example.com | Master (clustered using Pacemaker) and node |
| master2.example.com | Master (clustered using Pacemaker) and node |
| master3.example.com | Master (clustered using Pacemaker) and node |
| etcd1.example.com | etcd |
| etcd2.example.com | etcd |
| etcd3.example.com | etcd |
| node1.example.com | Node |
| node2.example.com | Node |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Pacemaker high availability cluster method.
# Pacemaker HA environment must be able to self provision the
# configured VIP. For installation openshift_master_cluster_hostname
# must resolve to the configured VIP.
openshift_master_cluster_method=pacemaker
openshift_master_cluster_password=openshift_cluster
openshift_master_cluster_vip=192.168.133.25
openshift_master_cluster_public_vip=192.168.133.25
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Note the following when using this configuration:
Installing multiple masters with Pacemaker requires that you configure a fencing device after running the installer.
When specifying multiple masters, the installer handles creating and starting the HA cluster. If, during that process, the pcs status command indicates that an HA cluster already exists, the installer skips HA cluster configuration.
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the advanced installation using the following playbook:
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration and then re-run the installer within that 10 minute period, the new values are not used, and the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection value in the Ansible configuration file (/etc/ansible/ansible.cfg).
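A minimal sketch of clearing the cache, assuming fact caching is configured in /etc/ansible/ansible.cfg (verify the path reported by the first command before deleting anything):

# grep fact_caching_connection /etc/ansible/ansible.cfg
# rm -rf <path_from_previous_command>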
Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether their components are being deployed to NFS volumes or not:
You can delete these directories after installation, as needed.
If you installed OpenShift using a configuration for multiple masters with Pacemaker as a load balancer, you must configure a fencing device. See Fencing: Configuring STONITH in the High Availability Add-on for Red Hat Enterprise Linux documentation for instructions, then continue to Verifying the Installation.
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
node2.example.com    kubernetes.io/hostname=node2.example.com,region=primary,zone=west     Ready
To verify that the web console is installed correctly, use the master host name and the console port number to access the console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at:
https://master.openshift.com:8443/console
Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:
# atomic-openshift-excluder exclude
Multiple etcd Hosts
If you installed multiple etcd hosts:
On an etcd host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Multiple Masters Using Pacemaker
If you installed multiple masters using Pacemaker as a load balancer:
On a master host, determine which host is currently running as the active master:
# pcs status
After determining the active master, put that host into standby mode:
# pcs cluster standby <host1_name>
Verify the master is now running on another host:
# pcs status
After verifying the master is running on another node, re-enable the host on standby for normal operation by running:
# pcs cluster unstandby <host1_name>
Red Hat recommends that you also verify your installation by consulting the High Availability Add-on for Red Hat Enterprise Linux documentation.
Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
After your cluster is installed, you can install additional nodes (including masters) and add them to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new nodes, then runs the configuration playbooks on the new nodes only.
This process is similar to re-running the installer in the quick installation method to add nodes, however you have more configuration options available when using the advanced method and running the playbooks directly.
You must have an existing inventory file (for example, /etc/ansible/hosts) that is representative of your current cluster configuration in order to run the scaleup.yml playbook.

If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that the installer generated and use or modify that as needed as your inventory file. You must then specify the file location with -i when calling ansible-playbook later.
To add nodes to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file and add new_nodes to the [OSEv3:children] section:
[OSEv3:children]
masters
nodes
new_nodes
Then, create a [new_nodes] section much like the existing [nodes] section, specifying host information for any new nodes you want to add. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
See Configuring Host Variables for more options.
Now run the scaleup.yml playbook. If your inventory file is located somewhere other than the default /etc/ansible/hosts, specify the location with the -i option:
For additional nodes:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
For additional masters:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml
After the playbook completes successfully, verify the installation.
Finally, move any hosts you had defined in the [new_nodes] section up into the [nodes] section (but leave the [new_nodes] section definition itself in place) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
You can uninstall OpenShift Enterprise hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Enterprise content installed by Ansible, including:
Configuration
Containers
Default templates and image streams
Images
RPM packages
The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall OpenShift Enterprise across all hosts in your cluster, run the playbook using the inventory file that you used when initially installing OpenShift Enterprise, or the one you ran most recently:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.
First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes (1)

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" (2)
(1) Only include the sections that pertain to the hosts you are interested in uninstalling.
(2) Only include hosts that you want to uninstall.
Specify that new inventory file using the -i option when running the uninstall.yml playbook:
# ansible-playbook -i /path/to/new/file \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Enterprise content should be removed from any specified hosts.
The following are known issues for specified installation configurations.
Multiple Masters
On failover, it is possible for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system corrects itself over time. See https://github.com/GoogleCloudPlatform/kubernetes/issues/10030 for details.
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines:
Run the following on a master host with Pacemaker:
# pcs cluster destroy --all
Then, run the following on all node hosts:
# yum -y remove openshift openshift-* etcd docker
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
Now that you have a working OpenShift instance, you can:
Configure authentication; by default, authentication is set to Deny All.
Deploy an integrated Docker registry.
Deploy a router.