For production environments, a reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing OpenShift hosts. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
Alternatively, you can use the quick installation method for trial installations.
Before installing OpenShift, you must first see the Prerequisites topic to prepare your hosts, which includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 1.8.4 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
After following the instructions in the Prerequisites topic, you can continue to Configuring Ansible.
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook to use during the installation. The inventory file describes the configuration for your OpenShift cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation. The examples describe various environment topologies. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the Ansible installer.
Before running the Ansible installer, any hosts you intend to designate as masters during the installation process should also be configured as unschedulable nodes, in order to configure the masters as part of the OpenShift SDN.
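For example, the inventory files later in this topic list the master host in both the [masters] and [nodes] sections; in the verification step it then reports as Ready,SchedulingDisabled:

[masters]
master.example.com

[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"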
Configuring Host Variables
To assign environment variables to hosts during the Ansible install, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose
---|---
openshift_hostname | This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name.
openshift_public_hostname | This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using network address translation (NAT).
openshift_ip | This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route.
openshift_public_ip | This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using network address translation (NAT).
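For example (a minimal sketch; the addresses are placeholders, and the variables are the internal and public IP variables described above), a node behind NAT could set both addresses on its host entry:

[nodes]
node1.example.com openshift_ip=10.0.0.11 openshift_public_ip=203.0.113.11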
Configuring Cluster Variables
To assign environment variables during the Ansible install that apply globally to your OpenShift cluster, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/openshift-passwd'}]
osm_default_subdomain=apps.test.example.com
The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose
---|---
ansible_ssh_user | This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key should be managed by an SSH agent.
ansible_sudo | If ansible_ssh_user is not root, this variable must be set to true.
openshift_master_cluster_hostname | This variable overrides the host name for the cluster, which defaults to the host name of the master.
openshift_master_cluster_public_hostname | This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com
openshift_master_identity_providers | This variable overrides the identity provider, which defaults to Deny All.
osm_default_subdomain | This variable overrides the default subdomain to use for exposed routes.
osm_default_node_selector | This variable overrides the node selector that projects will use by default when placing pods.
osm_cluster_network_cidr | This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and should not conflict with existing network blocks in your infrastructure that pods may require access to. Defaults to 10.1.0.0/16 and cannot be reconfigured after deployment.
osm_host_subnet_length | This variable specifies the size of the per-host subnet allocated for pod IPs by OpenShift SDN. Defaults to 8, which means that from the 10.1.0.0/16 cluster network a subnet of size /24 is allocated to each host (i.e., 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, and so on). This cannot be reconfigured after deployment.
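As a sketch of how several of these cluster-wide variables could be combined in the [OSEv3:vars] section (the subdomain, selector, and network values below are placeholders for illustration):

[OSEv3:vars]
ansible_ssh_user=root
osm_default_subdomain=apps.test.example.com
osm_default_node_selector='region=primary'
osm_cluster_network_cidr=10.1.0.0/16
osm_host_subnet_length=8

With a /16 cluster network and a host subnet length of 8, each host receives a /24 subnet for pod IPs, as described above.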
Configuring Node Host Labels
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler.
To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the node host entry in the [nodes] section. For example:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
The following table describes an example environment for a single master and two nodes:

Host Name | Infrastructure Component to Install
---|---
master.example.com | master and node
node1.example.com | Node
node2.example.com | Node
You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=true

product_type=openshift
deployment_type=enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/openshift-passwd'}]

# host group for masters
[masters]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
Host Name | Infrastructure Component to Install
---|---
master.example.com | master and node
etcd1.example.com | etcd
etcd2.example.com | etcd
etcd3.example.com | etcd
node1.example.com | Node
node2.example.com | Node
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported. Also, moving from a single master cluster to multiple masters after installation is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
product_type=openshift
deployment_type=enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/openshift-passwd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
The following table describes an example environment for three masters (clustered using Pacemaker), three etcd hosts, and two nodes:

Host Name | Infrastructure Component to Install
---|---
master1.example.com | master (clustered using Pacemaker) and node
master2.example.com | master (clustered using Pacemaker) and node
master3.example.com | master (clustered using Pacemaker) and node
etcd1.example.com | etcd
etcd2.example.com | etcd
etcd3.example.com | etcd
node1.example.com | Node
node2.example.com | Node
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
product_type=openshift
deployment_type=enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
# openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/openshift/openshift-passwd'}]

# master cluster ha variables using pacemaker or RHEL HA
openshift_master_cluster_method=pacemaker
openshift_master_cluster_password=openshift_cluster
openshift_master_cluster_vip=192.168.133.25
openshift_master_cluster_public_vip=192.168.133.25
openshift_master_cluster_hostname=openshift-master.example.com
openshift_master_cluster_public_hostname=openshift-master.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Note the following when using this configuration:
Installing multiple masters requires that you configure a fencing device after running the installer.
When specifying multiple masters, the installer handles creating and starting the high availability (HA) cluster. If during that process the pcs status command indicates that an HA cluster already exists, the installer skips HA cluster configuration.
Moving from a single master cluster to multiple masters after installation is not supported.
After you’ve configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the Ansible installer:
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
If you installed OpenShift using a configuration for multiple masters, you must configure a fencing device. See Fencing: Configuring STONITH in the High Availability Add-on for Red Hat Enterprise Linux documentation for instructions, then continue to Verifying the Installation.
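As a rough, hedged sketch only (the device name, fence agent, host, and credentials below are placeholders; follow the High Availability Add-on documentation for the authoritative procedure), a STONITH device for an IPMI-capable master might be created with pcs:

# pcs stonith create master1-ipmi fence_ipmilan \
    pcmk_host_list="master1.example.com" \
    ipaddr=192.0.2.100 login=admin passwd=secret lanplus=1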
After the installer completes, you can verify that the master is started and nodes are registered and reporting in Ready status by running the following as root:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
node2.example.com    kubernetes.io/hostname=node2.example.com,region=primary,zone=west     Ready
Multiple etcd Hosts
If you installed multiple etcd hosts:
On an etcd host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/openshift/master/master.etcd-ca.crt \
    --cert-file=/etc/openshift/master/master.etcd-client.crt \
    --key-file=/etc/openshift/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/openshift/master/master.etcd-ca.crt \
    --cert-file=/etc/openshift/master/master.etcd-client.crt \
    --key-file=/etc/openshift/master/master.etcd-client.key member list
Multiple Masters
If you installed multiple masters:
On a master host, determine which host is currently running as the active master:
# pcs status
After determining the active master, put that host into standby mode:
# pcs cluster standby <host1_name>
Verify the master is now running on another host:
# pcs status
After verifying the master is running on another node, re-enable the standby host for normal operation by running:
# pcs cluster unstandby <host1_name>
Red Hat recommends that you also verify your installation by consulting the High Availability Add-on for Red Hat Enterprise Linux documentation.
After your cluster is installed, you can install additional nodes (including masters) and add them to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new nodes, then runs the configuration playbooks on the new nodes only.
This process is similar to re-running the installer in the quick installation method to add nodes; however, you have more configuration options available when using the advanced method and running the playbooks directly.
You must have an existing inventory file (for example, /etc/ansible/hosts) that is representative of your current cluster configuration in order to run the scaleup.yml playbook.
If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that the installer generated, and use or modify it as needed for your inventory file. You must then specify the file location with -i when calling ansible-playbook later.
The recommended maximum number of nodes is 300.
To add nodes to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file and add new_nodes to the [OSEv3:children] section:
[OSEv3:children]
masters
nodes
new_nodes
Then, create a [new_nodes] section much like the existing [nodes] section, specifying host information for any new nodes you want to add. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
See Configuring Host Variables for more options.
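For example (hedged; the public host name below is a placeholder), a [new_nodes] entry can combine node labels with any of the host variables described earlier:

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_public_hostname=node3.public.example.com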
Now run the scaleup.yml playbook. If your inventory file is located somewhere other than the default /etc/ansible/hosts, specify the location with the -i option:
For additional nodes:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/scaleup.yml
For additional masters:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml
After the playbook completes successfully, verify the installation.
Finally, move any hosts you had defined in the [new_nodes] section up into the [nodes] section (but leave the [new_nodes] section definition itself in place) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
The following are known issues for specified installation configurations.
Multiple Masters
On failover, it is possible for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system does correct itself over time. See https://github.com/GoogleCloudPlatform/kubernetes/issues/10030 for details.
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines:
Run the following on a master host:
# pcs cluster destroy --all
Then, run the following on all node hosts:
# yum -y remove openshift openshift-* etcd docker
# rm -rf /etc/openshift /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
Now that you have a working OpenShift instance, you can:
Configure authentication; by default, authentication is set to Deny All.
Deploy an integrated Docker registry.
Deploy a router.
Populate your OpenShift installation with a useful set of Red Hat-provided image streams and templates.