Advanced Installation

Overview

A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing an OpenShift Container Platform cluster. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.

While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.

Alternatively, a containerized version of the installer is available as a system container, which is currently a Technology Preview feature.

Alternatively, you can use the quick installation method if you prefer an interactive installation experience.

To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.

Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.

Before You Begin

Before installing OpenShift Container Platform, you must first see the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.2.0 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.

If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.

For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.

After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible Inventory Files.

Configuring Ansible Inventory Files

The /etc/ansible/hosts file is Ansible’s inventory file for the playbook used to install OpenShift Container Platform. The inventory file describes the configuration for your OpenShift Container Platform cluster. You must replace the default contents of the file with your desired configuration.

The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation.

Many of the Ansible variables described are optional. Accepting the default values should suffice for development environments, but for production environments it is recommended you read through and become familiar with the various options available.

The example inventories describe various environment topographies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.

Image Version Policy

Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.

Configuring Cluster Variables

To assign environment variables during the Ansible install that apply globally to your OpenShift Container Platform cluster, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:

[OSEv3:vars]

openshift_master_identity_providers=[{'name': 'htpasswd_auth',
'login': 'true', 'challenge': 'true',
'kind': 'HTPasswdPasswordIdentityProvider',
'filename': '/etc/origin/master/htpasswd'}]

openshift_master_default_subdomain=apps.test.example.com

If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks). For example, to use mypasswordwith###hashsigns as a value for the variable openshift_cloudprovider_openstack_password, declare it as openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"' in the Ansible host inventory file.
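
For example, the following inventory line shows the double-escaped value described above (the variable and password value are taken from the example in this paragraph):

openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"'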

The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:

Table 1. Cluster Variables
Variable Purpose

ansible_ssh_user

This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key should be managed by an SSH agent.

ansible_become

If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo.
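
For example, a minimal sketch of these two variables in the [OSEv3:vars] section, assuming a non-root user named ec2-user that is configured for passwordless sudo on all hosts (the user name is illustrative):

[OSEv3:vars]
ansible_ssh_user=ec2-user
ansible_become=true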

debug_level

This variable sets which INFO messages are logged to the systemd-journald.service. Set one of the following:

  • 0 to log errors and warnings only

  • 2 to log normal information (This is the default level.)

  • 4 to log debugging-level information

  • 6 to log API-level debugging information (request / response)

  • 8 to log body-level API debugging information

For more information on debug log levels, see Configuring Logging Levels.

containerized

If set to true, containerized OpenShift Container Platform services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1.

openshift_master_admission_plugin_config

This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example:

openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}}

openshift_master_audit_config

This variable enables API service auditing. See Audit Configuration for more information.
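
For example, a minimal sketch that turns on basic audit logging (the value shown is illustrative; the full set of supported options is described in Audit Configuration):

openshift_master_audit_config={"enabled": true}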

openshift_master_cluster_hostname

This variable overrides the host name for the cluster, which defaults to the host name of the master.

openshift_master_cluster_public_hostname

This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer.

For example:

openshift_master_cluster_public_hostname=openshift-ansible.public.example.com

openshift_master_cluster_method

Optional. This variable defines the HA method when deploying multiple masters. Supports the native method. See Multiple masters for more information.

openshift_rolling_restart_mode

This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services, which allows rolling restarts of services on the masters. It can instead be set to system, which enables rolling, full system restarts and also works for single master clusters.

os_sdn_network_plugin_name

This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant plug-in.
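
For example, to select the multitenant plug-in:

os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant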

openshift_master_identity_providers

This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OpenShift Container Platform to use it.

openshift_master_named_certificates

These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information.

openshift_master_overwrite_named_certificates

openshift_hosted_router_certificate

Provide the location of the custom certificates for the hosted router.

openshift_hosted_registry_cert_expire_days

Validity of the auto-generated registry certificate in days. Defaults to 730 (2 years).

openshift_ca_cert_expire_days

Validity of the auto-generated CA certificate in days. Defaults to 1825 (5 years).

openshift_node_cert_expire_days

Validity of the auto-generated node certificate in days. Defaults to 730 (2 years).

openshift_master_cert_expire_days

Validity of the auto-generated master certificate in days. Defaults to 730 (2 years).

etcd_ca_default_days

Validity of the auto-generated separate etcd certificates in days. Controls validity for etcd CA, peer, server and client certificates. Defaults to 1825 (5 years).

os_firewall_use_firewalld

Set to true to use firewalld instead of the default iptables. Not available on RHEL Atomic Host. See the Configuring the Firewall section for more information.

openshift_master_session_name

These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information.

openshift_master_session_max_seconds

openshift_master_session_auth_secrets

openshift_master_session_encryption_secrets

openshift_portal_net

This variable configures the subnet in which services will be created within the OpenShift Container Platform SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network.
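
For example, to set the service network explicitly to the default value described above:

openshift_portal_net=172.30.0.0/16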

openshift_master_default_subdomain

This variable overrides the default subdomain to use for exposed routes.

openshift_master_image_policy_config

Sets imagePolicyConfig in the master configuration. See Image Configuration for details.

openshift_node_proxy_mode

This variable specifies the service proxy mode to use: either iptables for the default, pure-iptables implementation, or userspace for the user space proxy.

openshift_router_selector

Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details.

openshift_registry_selector

Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details.

openshift_template_service_broker_namespaces

This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker.
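
For example, a sketch that serves templates from two namespaces (the namespace names are illustrative):

openshift_template_service_broker_namespaces=['openshift','my-templates']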

template_service_broker_selector

Default node selector for automatically deploying template service broker pods, for example: {"region": "infra"}. See Configuring Node Host Labels for details.

osm_default_node_selector

This variable overrides the node selector that projects will use by default when placing pods.
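
For example, to place project pods on region=primary nodes by default (the label is illustrative and must match labels assigned under Configuring Node Host Labels):

osm_default_node_selector='region=primary'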

osm_cluster_network_cidr

This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to 10.128.0.0/14 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration.

osm_host_subnet_length

This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift Container Platform SDN. Defaults to 9 which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be re-configured after deployment.
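
For example, the following lines restate the defaults described above:

osm_cluster_network_cidr=10.128.0.0/14
osm_host_subnet_length=9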

openshift_use_flannel

This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the openshift_use_openshift_sdn variable. For more information, see Using Flannel.
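
For example, a sketch that enables flannel and disables the default SDN:

openshift_use_flannel=true
openshift_use_openshift_sdn=false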

openshift_docker_additional_registries

OpenShift Container Platform adds the specified additional registry or registries to the docker configuration. These are the registries to search.

openshift_docker_insecure_registries

OpenShift Container Platform adds the specified additional insecure registry or registries to the docker configuration. For any of these registries, secure sockets layer (SSL) is not verified. Also, add these registries to openshift_docker_additional_registries.

openshift_docker_blocked_registries

OpenShift Container Platform adds the specified blocked registry or registries to the docker configuration. Block the listed registries. Setting this to all blocks everything not in the other variables.
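
For example, a sketch using an illustrative registry host name; the insecure registry is also listed as an additional registry, as described above:

openshift_docker_additional_registries=registry.example.com
openshift_docker_insecure_registries=registry.example.com
openshift_docker_blocked_registries=all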

openshift_metrics_hawkular_hostname

This variable sets the host name for integration with the metrics console by overriding metricsPublicURL in the master configuration for cluster metrics. If you alter this variable, ensure the host name is accessible via your router. See Configuring Cluster Metrics for details.

openshift_image_tag

Use this variable to specify a container image tag to install or configure.

openshift_pkg_version

Use this variable to specify an RPM version to install or configure.
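
For example, a sketch pinning containerized and RPM-based hosts to a specific release (the version strings are illustrative; use the versions that apply to your environment):

openshift_image_tag=v3.6.173.0.21
openshift_pkg_version=-3.6.173.0.21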

If you modify the openshift_image_tag or the openshift_pkg_version variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.

  • If openshift_image_tag is set, its value is used for all hosts in containerized environments, even those that have another version installed.

  • If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.

Configuring Deployment Type

Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).

Ensure the deployment_type parameter in your inventory file’s [OSEv3:vars] section is set to openshift-enterprise to install the OpenShift Container Platform variant:

[OSEv3:vars]
deployment_type=openshift-enterprise

Configuring Host Variables

To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:

[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com

The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:

Table 2. Host Variables
Variable Purpose

openshift_hostname

This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name.

openshift_public_hostname

This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT).

openshift_ip

This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. This variable can also be used for etcd.

openshift_public_ip

This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT).

containerized

If set to true, containerized OpenShift Container Platform services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1.

openshift_node_labels

This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details.

openshift_node_kubelet_args

This variable is used to configure kubeletArguments on nodes, such as arguments used in container and image garbage collection, and to specify resources per node. kubeletArguments are key value pairs that are passed directly to the Kubelet that match the Kubelet’s command line arguments. kubeletArguments are not migrated or validated and may become invalid if used. These values override other settings in node configuration which may cause invalid configurations. Example usage: {'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}.
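
For example, a sketch of a node entry that sets the garbage collection thresholds shown above (the host name is illustrative):

[nodes]
node1.example.com openshift_node_kubelet_args="{'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}"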

openshift_hosted_router_selector

Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details.

openshift_registry_selector

Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details.

openshift_docker_options

This variable configures additional docker options within /etc/sysconfig/docker, such as options used in Managing Container Logs. Use json-file or journald. The default is journald. Example usage:

"--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"
"--log-driver journald"

openshift_schedulable

This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on masters.

Configuring Project Parameters

To configure the default project settings, configure the following variables in the /etc/ansible/hosts file:

Table 3. Project Parameters
Parameter Description Type Default Value

osm_project_request_message

The string presented to a user if they are unable to request a project via the projectrequest API endpoint.

String

null

osm_project_request_template

The template to use for creating projects in response to a projectrequest. If you do not specify a value, the default template is used.

String with the format <namespace>/<template>

null

osm_mcs_allocator_range

Defines the range of MCS categories to assign to namespaces. If this value is changed after startup, new projects might receive labels that are already allocated to other projects. The prefix can be any valid SELinux set of terms, including user, role, and type. However, leaving the prefix at its default allows the server to set them automatically. For example, s0:/2 allocates labels from s0:c0,c0 to s0:c511,c511 whereas s0:/2,512 allocates labels from s0:c0,c0,c0 to s0:c511,c511,511.

String with the format <prefix>/<numberOfLabels>[,<maxCategory>]

s0:/2

osm_mcs_labels_per_project

Defines the number of labels to reserve per project.

Integer

5

osm_uid_allocator_range

Defines the total set of Unix user IDs (UIDs) automatically allocated to projects and the size of the block that each namespace gets. For example, 1000-1999/10 allocates ten UIDs per namespace and can allocate up to 100 blocks before running out of space. The default value is the expected size of the ranges for container images when user namespaces are started.

String in the format <block_range>/<number_of_UIDs>

1000000000-1999999999/10000
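
For example, a sketch of these parameters in the [OSEv3:vars] section (the message text and template name are illustrative; the remaining values restate the defaults):

[OSEv3:vars]
osm_project_request_message='To request a project, contact your cluster administrator.'
osm_project_request_template='default/project-request'
osm_mcs_labels_per_project=5
osm_uid_allocator_range='1000000000-1999999999/10000'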

Configuring master API and Console Ports

To configure the default ports used by the master API and web console, configure the following variables in the /etc/ansible/hosts file:

Table 4. master API and Console Ports
Variable Purpose

openshift_master_api_port

This variable sets the port number to access the OpenShift Container Platform API.

openshift_master_console_port

This variable sets the console port number to access the OpenShift Container Platform console with a web browser.

For example:

openshift_master_api_port=3443
openshift_master_console_port=8756

Configuring Cluster Pre-install Checks

Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OpenShift Container Platform, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.

The following table describes available pre-install checks that will run before every Ansible installation of OpenShift Container Platform:

Table 5. Pre-install Checks
Check Name Purpose

memory_availability

This check ensures that a host has the recommended amount of memory for the specific deployment of OpenShift Container Platform. Default values have been derived from the latest installation documentation. A user-defined value for minimum memory requirements may be set by setting the openshift_check_min_host_memory_gb cluster variable in your inventory file.

disk_availability

This check only runs on etcd, master, and node hosts. It ensures that the mount path for an OpenShift Container Platform installation has sufficient disk space remaining. Recommended disk values are taken from the latest installation documentation. A user-defined value for minimum disk space requirements may be set by setting openshift_check_min_host_disk_gb cluster variable in your inventory file.

docker_storage

Only runs on hosts that depend on the docker daemon (nodes and containerized installations). Checks that docker's total usage does not exceed a user-defined limit. If no user-defined limit is set, docker's maximum usage threshold defaults to 90% of the total size available. The threshold for total percent usage can be overridden by setting the max_thinpool_data_usage_percent cluster variable in your inventory file, for example max_thinpool_data_usage_percent=90.

docker_storage_driver

Ensures that the docker daemon is using a storage driver supported by OpenShift Container Platform. If the devicemapper storage driver is being used, the check additionally ensures that a loopback device is not being used.

docker_image_availability

Attempts to ensure that images required by an OpenShift Container Platform installation are available either locally or in at least one of the configured container image registries on the host machine.

openshift_release

Specifies the generic release of OpenShift Container Platform for containerized installations. For RPM installations, set a package_availability value.

package_version

Runs on yum-based systems to determine whether multiple releases of a required OpenShift Container Platform package are available. Having multiple releases of a package available during an enterprise installation of OpenShift suggests that there are multiple yum repositories enabled for different releases, which may lead to installation problems. This check is skipped if the openshift_release variable is not defined in the inventory file.

package_availability

Runs prior to non-containerized installations of OpenShift Container Platform. Ensures that RPM packages required for the current installation are available.

package_update

Checks whether a yum update or package installation will succeed, without actually performing it or running yum on the host.

To disable specific pre-install checks, include the variable openshift_disable_check with a comma-delimited list of check names in your inventory file. For example:

openshift_disable_check=memory_availability,disk_availability
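
The check thresholds described in the table above can also be overridden in the inventory file. The following values are illustrative:

openshift_check_min_host_memory_gb=8
openshift_check_min_host_disk_gb=10
max_thinpool_data_usage_percent=90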

A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates.

Configuring System Containers

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

System containers provide a way to containerize services that need to run before the docker daemon is running. They are Docker-formatted containers that are stored and run outside of the traditional docker service. For more details on system container technology, see Running System Containers in the Red Hat Enterprise Linux Atomic Host: Managing Containers documentation.

You can configure your OpenShift Container Platform installation to run certain components as system containers instead of their RPM or standard containerized methods. Currently, the docker and etcd components can be run as system containers in OpenShift Container Platform.

System containers are currently OS-specific because they require specific versions of atomic and systemd. For example, different system containers are created for RHEL, Fedora, or CentOS. Ensure that the system containers you are using match the OS of the host they will run on. OpenShift Container Platform only supports RHEL and RHEL Atomic as the host OS, so by default system containers built for RHEL are used.

Running Docker as a System Container

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

The traditional method for using docker in an OpenShift Container Platform cluster is an RPM package installation. For Red Hat Enterprise Linux (RHEL) systems, it must be specifically installed; for RHEL Atomic Host systems, it is provided by default.

However, you can configure your OpenShift Container Platform installation to alternatively run docker on node hosts as a system container. When using the system container method, the container-engine container image and systemd service is used on the host instead of the docker package and service.

To run docker as a system container:

  1. Because the default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, for any RHEL systems you must still configure a thin pool logical volume for docker to use before running the OpenShift Container Platform installation. You can skip these steps for any RHEL Atomic Host systems.

    For any RHEL systems, perform the docker storage configuration steps described in the Host Preparation topic.

    After completing the storage configuration steps, you can leave the RPM installed.

  2. Set the following cluster variable to True in your inventory file in the [OSEv3:vars] section:

    openshift_docker_use_system_container=True

When using the system container method, the following inventory variables for docker are ignored:

  • docker_version

  • docker_upgrade

Further, the following inventory variable must not be used:

  • openshift_docker_options

You can also force docker in the system container to use a specific container registry and repository when pulling the container-engine image instead of from the default registry.access.redhat.com/openshift3/. To do so, set the following cluster variable in your inventory file in the [OSEv3:vars] section:

openshift_docker_systemcontainer_image_registry_override="registry.example.com/myrepo/"

Running etcd as a System Container

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

When using the RPM-based installation method for OpenShift Container Platform, etcd is installed using RPM packages on any RHEL systems. When using the containerized installation method, the rhel7/etcd image is used instead for RHEL or RHEL Atomic Hosts.

However, you can configure your OpenShift Container Platform installation to alternatively run etcd as a system container. Whereas the standard containerized method uses a systemd service named etcd_container, the system container method uses the service name etcd, same as the RPM-based method. The data directory for etcd using this method is /var/lib/etcd/etcd.etcd/member.

To run etcd as a system container, set the following cluster variable in your inventory file in the [OSEv3:vars] section:

openshift_use_etcd_system_container=True

Configuring a Registry Location

If you are using an image registry other than the default at registry.access.redhat.com, specify the desired registry within the /etc/ansible/hosts file.

oreg_url={registry}/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Table 6. Registry Variables
Variable Purpose

oreg_url

Set to the alternate image location. Necessary if you are not using the default registry at registry.access.redhat.com.

openshift_examples_modify_imagestreams

Set to true if pointing to a registry other than the default. Modifies the image stream location to the value of oreg_url.

openshift_docker_additional_registries

Specify the additional registry or registries.

For example:

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_docker_additional_registries=example.com

Configuring the Registry Console

If you are using a Cockpit registry console image other than the default or require a specific version of the console, specify the desired registry within the /etc/ansible/hosts file.

openshift_cockpit_deployer_prefix=<registry-name>/<namespace>/
openshift_cockpit_deployer_version=<cockpit-image-tag>
Table 7. Registry Variables
Variable Purpose

openshift_cockpit_deployer_prefix

Specify the URL and path to the directory where the image is located.

openshift_cockpit_deployer_version

Specify the Cockpit image version.

For example, if your image is at registry.example.com/openshift3/registry-console and you require version 1.4.1, enter:

openshift_cockpit_deployer_prefix='registry.example.com/openshift3/'
openshift_cockpit_deployer_version='1.4.1'

Configuring Registry Storage

There are several options for enabling registry storage when using the advanced install:

Option A: NFS Host Group

When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/registry:

[OSEv3:vars]

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options would be nfs.example.com:/exports/registry.

[OSEv3:vars]

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option C: OpenStack Platform

An OpenStack storage configuration must already exist.

openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or Another S3 Storage Solution

The simple storage solution (S3) bucket must already exist.

#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true

If you are using a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:

openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/

Configuring Router Sharding

Router sharding support is enabled by supplying the correct data to the inventory. The variable openshift_hosted_routers holds the data, which is in the form of a list. If no data is passed, then a default router is created. There are multiple combinations of router sharding. The following example supports routers on separate nodes:

openshift_hosted_routers=[{'name': 'router1', 'certificate': {'certfile': '/path/to/certificate/abc.crt',
'keyfile': '/path/to/certificate/abc.key', 'cafile':
'/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router',
'namespace': 'default', 'stats_port': 1936, 'edits': [], 'images':
'openshift3/ose-${component}:${version}', 'selector': 'type=router1', 'ports':
['80:80', '443:443']},
{'name': 'router2', 'certificate': {'certfile': '/path/to/certificate/xyz.crt',
'keyfile': '/path/to/certificate/xyz.key', 'cafile':
'/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router',
'namespace': 'default', 'stats_port': 1936, 'edits': [{'action': 'append',
'key': 'spec.template.spec.containers[0].env', 'value': {'name': 'ROUTE_LABELS',
'value': 'route=external'}}], 'images':
'openshift3/ose-${component}:${version}', 'selector': 'type=router2', 'ports':
['80:80', '443:443']}]

Configuring GlusterFS Persistent Storage

GlusterFS can be configured to provide persistent storage and dynamic provisioning for OpenShift Container Platform. It can be used both containerized within OpenShift Container Platform and non-containerized on its own nodes.

Configuring Containerized GlusterFS Persistent Storage

This option utilizes Red Hat Container Native Storage (CNS) for configuring containerized GlusterFS persistent storage in OpenShift Container Platform.

See Containerized GlusterFS Considerations for specific host preparations and prerequisites.

  1. In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

    [OSEv3:children]
    masters
    nodes
    glusterfs
  2. (Optional) Include any of the following role variables in the [OSEv3:vars] section you wish to change:

    [OSEv3:vars]
    openshift_storage_glusterfs_namespace=glusterfs (1)
    openshift_storage_glusterfs_name=storage (2)
    1 The project (namespace) to host the storage pods. Defaults to glusterfs.
    2 A name to identify the GlusterFS cluster, which will be used in resource names. Defaults to storage.
  3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage and include the glusterfs_ip and glusterfs_devices parameters in the form:

    <hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    [glusterfs]
    192.168.10.11 glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    192.168.10.12 glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    192.168.10.13 glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'

    Set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Set glusterfs_ip to the IP address that will be used by pods to communicate with the GlusterFS node.

  4. Add the hosts listed under [glusterfs] to the [nodes] group as well:

    [nodes]
    192.168.10.14
    192.168.10.15
    192.168.10.16
  5. After completing the cluster installation per Running the Advanced Installation, run the following from a master to verify the necessary objects were successfully created:

    1. Verify that the GlusterFS StorageClass was created:

      # oc get storageclass
      NAME                  TYPE
      glusterfs-storage     kubernetes.io/glusterfs
    2. Verify that the route was created:

      # oc get routes
      NAME                     HOST/PORT                                        PATH   SERVICES           PORT    TERMINATION   WILDCARD
      heketi-glusterfs-route   heketi-glusterfs-default.cloudapps.example.com          heketi-glusterfs   <all>                 None

      The name for the route will be heketi-glusterfs-route unless the default glusterfs value was overridden using the openshift_storage_glusterfs_name variable in the inventory file.

    3. Use curl to verify the route works correctly:

      # curl http://heketi-glusterfs-default.cloudapps.example.com/hello
      Hello from Heketi.

After successful installation, see Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment to check the status of the GlusterFS clusters.

Dynamic provisioning of GlusterFS volumes can occur by creating a PVC to request storage.

Configuring the OpenShift Container Registry

Additional configuration options are available at installation time for the OpenShift Container Registry.

If no registry storage options are used, the default OpenShift Container Platform registry is ephemeral and all data will be lost if the pod no longer exists. OpenShift Container Platform also supports a single-node NFS-backed registry, but this option lacks the redundancy and reliability of the GlusterFS-backed option.

Configuring a Containerized GlusterFS-Backed Registry

Similar to configuring containerized GlusterFS for persistent storage, GlusterFS storage can be configured and deployed for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and more reliable storage for the registry.

See Containerized GlusterFS Considerations for specific host preparations and prerequisites.

Configuration of storage for an OpenShift Container Registry is very similar to configuration for GlusterFS persistent storage in that it can be either containerized or non-containerized. For this containerized method, the following exceptions and additions apply:

  1. In your inventory file, add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

    [OSEv3:children]
    masters
    nodes
    glusterfs_registry
  2. Add the following role variable in the [OSEv3:vars] section to enable the GlusterFS-backed registry, provided that the glusterfs_registry group name and the [glusterfs_registry] group exist:

    [OSEv3:vars]
    openshift_hosted_registry_storage_kind=glusterfs
  3. It is recommended to have at least three registry pods, so set the following role variable in the [OSEv3:vars] section:

    openshift_hosted_registry_replicas=3
  4. If you want to specify the volume size for the GlusterFS-backed registry, set the following role variable in [OSEv3:vars] section:

    openshift_hosted_registry_storage_volume_size=10Gi

    If unspecified, the volume size defaults to 5Gi.

  5. The installer will deploy the OpenShift Container Registry pods and associated routers on nodes containing the region=infra label. Add this label on at least one node entry in the [nodes] section, otherwise the registry deployment will fail. For example:

    [nodes]
    192.168.10.14 openshift_schedulable=True openshift_node_labels="{'region': 'infra'}"
  6. Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS-backed registry and include the glusterfs_ip and glusterfs_devices parameters in the form:

    <hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    [glusterfs_registry]
    192.168.10.14 glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    192.168.10.15 glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    192.168.10.16 glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'

    Set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Set glusterfs_ip to the IP address that will be used by pods to communicate with the GlusterFS node.

  7. Add the hosts listed under [glusterfs_registry] to the [nodes] group as well:

    [nodes]
    192.168.10.14
    192.168.10.15
    192.168.10.16

After successful installation, see Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment to check the status of the GlusterFS clusters.

Configuring Global Proxy Options

If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which requires no external access, and therefore do not need to be configured to use a proxy.

In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.

See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.

Table 8. Cluster Proxy Variables
Variable Purpose

openshift_http_proxy

This variable specifies the HTTP_PROXY environment variable for masters and the Docker daemon.

openshift_https_proxy

This variable specifies the HTTPS_PROXY environment variable for masters and the Docker daemon.

openshift_no_proxy

This variable is used to set the NO_PROXY environment variable for masters and the Docker daemon. Provide a comma-separated list of host names, domain names, or wildcard host names that do not use the defined proxy. By default, this list is augmented with the list of all defined OpenShift Container Platform host names.

openshift_generate_no_proxy_hosts

This boolean variable specifies whether or not the names of all defined OpenShift hosts and *.cluster.local should be automatically appended to the NO_PROXY list. Defaults to true; set it to false to override this option.

openshift_builddefaults_http_proxy

This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_http_proxy value is used. Set the openshift_builddefaults_http_proxy value to False to disable default http proxy for builds regardless of the openshift_http_proxy value.

openshift_builddefaults_https_proxy

This variable defines the HTTPS_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_https_proxy value is used. Set the openshift_builddefaults_https_proxy value to False to disable default https proxy for builds regardless of the openshift_https_proxy value.

openshift_builddefaults_no_proxy

This variable defines the NO_PROXY environment variable inserted into builds using the BuildDefaults admission controller. Set the openshift_builddefaults_no_proxy value to False to disable default no proxy settings for builds regardless of the openshift_no_proxy value.

openshift_builddefaults_git_http_proxy

This variable defines the HTTP proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_http_proxy value to False to disable default http proxy for git clone operations during a build regardless of the openshift_http_proxy value.

openshift_builddefaults_git_https_proxy

This variable defines the HTTPS proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_https_proxy value to False to disable default https proxy for git clone operations during a build regardless of the openshift_https_proxy value.
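
For example, a sketch of cluster-wide proxy settings in the [OSEv3:vars] section (the proxy host, port, and domain values are illustrative):

[OSEv3:vars]
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com'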

If any of:

  • openshift_no_proxy

  • openshift_https_proxy

  • openshift_http_proxy

are set, then all cluster hosts will have an automatically generated NO_PROXY environment variable injected into several service configuration scripts. The default .svc domain and your cluster’s dns_domain (typically .cluster.local) will also be added.

Setting openshift_generate_no_proxy_hosts to false in your inventory will not disable the automatic addition of the .svc domain and the cluster domain. These are required and added automatically if any of the above listed proxy parameters are set.

Configuring the Firewall

  • If you are changing the default firewall, ensure that each host in your cluster is using the same firewall type to prevent inconsistencies.

  • Do not use firewalld with OpenShift Container Platform installed on RHEL Atomic Host. firewalld is not supported on RHEL Atomic Host.

While iptables is the default firewall, firewalld is recommended for new installations.

OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.

Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured automatically. However, iptables rules can break OpenShift Container Platform if not configured correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.

To use firewalld as the firewall for an OpenShift Container Platform installation, add the os_firewall_use_firewalld variable to the list of configuration variables in the Ansible host file at install:

[OSEv3:vars]
os_firewall_use_firewalld=True (1)
1 Setting this variable to true opens the required ports and adds rules to the default zone, ensuring that firewalld is configured correctly.

Using the firewalld default configuration comes with limited configuration options, and cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.

Configuring Schedulability on masters

Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the OpenShift SDN. You must do so by adding entries for these hosts to the [nodes] section:

[nodes]
master.example.com

In order to ensure that your masters are not burdened with running pods, the installer automatically marks them unschedulable by default, meaning that new pods cannot be placed on these hosts. This is the same as setting the openshift_schedulable=False host variable.

You can manually set a master host to schedulable during installation using the openshift_schedulable=true host variable, though this is not recommended in production environments:

[nodes]
master.example.com openshift_schedulable=true

If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.

Configuring Node Host Labels

You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra (discussed in Configuring Dedicated Infrastructure Nodes), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.

To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the desired node host entry in the [nodes] section. In the following example, labels are set for a region called primary and a zone called east:

[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

Configuring Dedicated Infrastructure Nodes

The openshift_router_selector and openshift_registry_selector Ansible settings determine the label selectors used when placing registry and router pods. They are set to region=infra by default:

# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'

The registry and router are only able to run on node hosts with the region=infra label. Ensure that at least one node host in your OpenShift Container Platform environment has the region=infra label. For example:

[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"

If there is not a node in the [nodes] section that matches the selector settings, the default router and registry deployments will fail, and their pods will remain in Pending status.

It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.

If you do not intend to use OpenShift Container Platform to manage the registry and router, configure the following Ansible settings:

openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false

If you are using an image registry other than the default registry.access.redhat.com, you need to specify the desired registry in the /etc/ansible/hosts file.

As described in Configuring Schedulability on masters, master hosts are marked unschedulable by default. If you label a master host with region=infra and have no other dedicated infrastructure nodes, you must also explicitly mark these master hosts as schedulable. Otherwise, the registry and router pods cannot be placed anywhere:

[nodes]
master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true

Configuring Session Options

Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.

You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:

openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600

If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.

For openshift_master_session_auth_secrets, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:

openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

For openshift_master_session_encryption_secrets, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:

openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

Configuring Custom Certificates

Custom serving certificates for the public host names of the OpenShift Container Platform API and web console can be deployed during an advanced installation and are configurable in the inventory file.

Custom certificates should only be configured for the host name associated with the publicMasterURL, which can be set using openshift_master_cluster_public_hostname. Using a custom serving certificate for the host name associated with the masterURL (openshift_master_cluster_hostname) will result in TLS errors, as infrastructure components will attempt to contact the master API using the internal masterURL host.

Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]

File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.

Ansible detects a certificate’s Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]

Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.

If you would like openshift_master_named_certificates to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:

openshift_master_overwrite_named_certificates=true

For a more complete example, consider the following cluster variables in an inventory file:

openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com

To overwrite the certificates on a subsequent Ansible run, you could set the following:

openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true

Configuring Certificate Validity

By default, the certificates that secure etcd, the master, and the kubelet expire after two to five years. The validity (length in days until they expire) of the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):

[OSEv3:vars]

openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825

These values are also used when redeploying certificates via Ansible post-installation.

Configuring Cluster Metrics

Cluster metrics are not set to automatically deploy by default. Set the following to enable cluster metrics when using the advanced install:

[OSEv3:vars]

openshift_metrics_install_metrics=true

The OpenShift Container Platform web console uses the data coming from the Hawkular Metrics service to display its graphs. The metrics public URL can be set during cluster installation using the openshift_metrics_hawkular_hostname Ansible variable, which defaults to:

https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics

If you alter this variable, ensure the host name is accessible via your router.
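
For example, given the openshift_master_default_subdomain value used earlier in this topic, the metrics host name could be set as follows (the value is illustrative and must resolve through your router):

openshift_metrics_hawkular_hostname=hawkular-metrics.apps.test.example.com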

In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface, eth0.

You must set an openshift_master_default_subdomain value to deploy metrics.

Configuring Metrics Storage

The openshift_metrics_cassandra_storage_type variable must be set in order to use persistent storage for metrics. If openshift_metrics_cassandra_storage_type is not set, then cluster metrics data is stored in an emptyDir volume, which will be deleted when the Cassandra pod terminates.

There are three options for enabling cluster metrics storage when using the advanced install:

Option A: Dynamic

Use the following variable if your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider:

[OSEv3:vars]

openshift_metrics_cassandra_storage_type=dynamic
Option B: NFS Host Group

The use of NFS for metrics storage is experimental and not supported in OpenShift Container Platform.

When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/metrics:

[OSEv3:vars]

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Option C: External NFS Host

The use of NFS for metrics storage is experimental and not supported in OpenShift Container Platform.

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi

The remote volume path using the following options would be nfs.example.com:/exports/metrics.

Configuring Cluster Logging

Cluster logging is not deployed automatically by default. Set the following to enable cluster logging when using the advanced installation method:

[OSEv3:vars]

openshift_logging_install_logging=true
openshift_hosted_logging_deployer_version=v3.6
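
For example, a minimal fragment that enables logging and also requests dynamically provisioned Elasticsearch storage (described in Configuring Logging Storage below) might look like the following:

[OSEv3:vars]

openshift_logging_install_logging=true
openshift_hosted_logging_deployer_version=v3.6
openshift_logging_es_pvc_dynamic=true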

Configuring Logging Storage

The openshift_logging_es_pvc_dynamic variable must be set in order to use persistent storage for logging. If openshift_logging_es_pvc_dynamic is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.

There are three options for enabling cluster logging storage when using the advanced install:

Option A: Dynamic

Use the following variable if your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider:

[OSEv3:vars]

openshift_logging_es_pvc_dynamic=true
Option B: NFS Host Group

The use of NFS for logging storage is experimental and not supported in OpenShift Container Platform.

When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/logging:

[OSEv3:vars]

openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
Option C: External NFS Host

The use of NFS for logging storage is experimental and not supported in OpenShift Container Platform.

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_host=nfs.example.com
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi

The remote volume path using the following options would be nfs.example.com:/exports/logging.

Enabling the Service Catalog

Enabling the service catalog is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Enabling the service catalog allows service brokers to be registered with the catalog. The web console is also configured to enable an updated landing page for browsing the catalog.

To enable the service catalog, add the following in your inventory file’s [OSEv3:vars] section:

openshift_enable_service_catalog=true

When the service catalog is enabled, the web console shows the updated landing page but still uses the normal image stream and template behavior. The Ansible service broker is also enabled; see Configuring the Ansible Service Broker for more details. The template service broker (TSB) is not deployed by default; see Configuring the Template Service Broker for more information.

Configuring the Ansible Service Broker

Enabling the Ansible service broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

If you have enabled the service catalog, the Ansible service broker (ASB) is also enabled.

The ASB deploys its own etcd instance separate from the etcd used by the rest of the OpenShift Container Platform cluster. The ASB’s etcd instance requires separate storage using persistent volumes (PVs) to function. If no PV is available, etcd will wait until the PV can be satisfied. The ASB application will enter a CrashLoop state until its etcd instance is available.

The following example shows usage of an NFS host to provide the required PVs, but other persistent storage providers can be used instead.

Some Ansible playbook bundles (APBs) may also require a PV for their own usage. Two APBs are currently provided with OpenShift Container Platform 3.6: MediaWiki and PostgreSQL. Both of these require their own PV to deploy.

To configure the ASB:

  1. In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:

    [OSEv3:children]
    masters
    nodes
    nfs
  2. Add an [nfs] group section and specify the host name of the system that will be the NFS host:

    [nfs]
    master1.example.com
  3. In addition to the settings from Enabling the Service Catalog, add the following in the [OSEv3:vars] section:

    openshift_hosted_etcd_storage_kind=nfs
    openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
    openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd (1)
    openshift_hosted_etcd_storage_volume_name=etcd-vol2 (1)
    openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
    openshift_hosted_etcd_storage_volume_size=1G
    openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
    1 An NFS volume will be created with path <nfs_directory>/<volume_name> on the host within the [nfs] group. For example, the volume path using these options would be /opt/osev3-etcd/etcd-vol2.

    These settings create a persistent volume that is attached to the ASB’s etcd instance during cluster installation.

Configuring the Template Service Broker

Enabling the template service broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

If you have enabled the service catalog, you can also enable the template service broker (TSB).

To configure the TSB:

  1. One or more projects must be defined as the broker’s source namespace(s) for loading templates and image streams into the service catalog. You can also set a node selector for the broker. Set the desired projects by modifying the following in your inventory file’s [OSEv3:vars] section:

    openshift_template_service_broker_namespaces=['openshift','myproject']
    template_service_broker_selector={"node": "true"}
  2. The installer currently does not automate installation of the TSB, so additional steps must be run manually after the cluster installation has completed. Continue with the rest of the preparation of your inventory file, then see Running the Advanced Installation for the additional steps to deploy the TSB.

Configuring Web Console Customization

The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.

Table 9. Web Console Customization Variables
Variable Purpose

openshift_master_logout_url

Sets logoutURL in the master configuration. See Changing the Logout URL for details. Example value: http://example.com

openshift_master_extension_scripts

Sets extensionScripts in the master configuration. See Loading Extension Scripts and Stylesheets for details. Example value: ['/path/to/script1.js','/path/to/script2.js']

openshift_master_extension_stylesheets

Sets extensionStylesheets in the master configuration. See Loading Extension Scripts and Stylesheets for details. Example value: ['/path/to/stylesheet1.css','/path/to/stylesheet2.css']

openshift_master_extensions

Sets extensions in the master configuration. See Serving Static Files and Customizing the About Page for details. Example value: [{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]

openshift_master_oauth_template

Sets the OAuth template in the master configuration. See Customizing the Login Page for details. Example value: ['/path/to/login-template.html']

openshift_master_metrics_public_url

Sets metricsPublicURL in the master configuration. See Setting the Metrics Public URL for details. Example value: https://hawkular-metrics.example.com/hawkular/metrics

openshift_master_logging_public_url

Sets loggingPublicURL in the master configuration. See Kibana for details. Example value: https://kibana.example.com
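
As an illustration, a fragment that combines several of these variables in the [OSEv3:vars] section might look like the following (the paths and URLs are placeholders):

openshift_master_logout_url=http://example.com
openshift_master_extension_scripts=['/path/to/script1.js','/path/to/script2.js']
openshift_master_extension_stylesheets=['/path/to/stylesheet1.css','/path/to/stylesheet2.css']
openshift_master_extensions=[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]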

Example Inventory Files

Single Master Examples

You can configure an environment with a single master and multiple nodes, with etcd running either on the master host or on multiple external etcd hosts.

Moving from a single master cluster to multiple masters after installation is not supported.

Single Master and Multiple Nodes

The following table describes an example environment for a single master (with etcd on the same host) and two nodes:

Host Name                Infrastructure Component to Install
master.example.com       master, node, and etcd
node1.example.com        Node
node2.example.com        Node

You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:

Single Master and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Single Master, Multiple etcd, and Multiple Nodes

The following table describes an example environment for a single master, three etcd hosts, and two nodes:

Host Name                Infrastructure Component to Install
master.example.com       master and node
etcd1.example.com        etcd
etcd2.example.com        etcd
etcd3.example.com        etcd
node1.example.com        Node
node2.example.com        Node

When specifying multiple etcd hosts, stand-alone etcd (non-embedded) is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported. Stand-alone etcd can also be collocated on master hosts, if desired.

You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:

Single Master, Multiple etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Multiple Masters Examples

You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.

Moving from a single master cluster to multiple masters after installation is not supported.

When configuring multiple masters, the advanced installation supports the following high availability (HA) method:

native

Leverages the native HA master capabilities built into OpenShift Container Platform and can be combined with any load balancing solution. If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.

This HAProxy load balancer is intended to demonstrate the API server’s HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.

For an external load balancing solution, you must have:

  • A pre-created load balancer VIP configured for SSL passthrough.

  • A VIP listening on the port specified by the openshift_master_api_port and openshift_master_console_port values (8443 by default) and proxying back to all master hosts on that port.

  • A domain name for VIP registered in DNS.

    • The domain name will become the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OpenShift Container Platform installer.

See External Load Balancer Integrations for more information.

For more on the high availability master architecture, see Kubernetes Infrastructure.

Note the following when using the native HA method:

  • The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.

  • In an HAProxy setup, controller manager servers run as standalone processes. They elect their active leader with a lease stored in etcd, which expires after 30 seconds by default. If the active controller manager server fails, it can take up to that many seconds to elect another leader. The interval can be configured with the osm_controller_lease_ttl variable, as shown in the example below.
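
For example, to shorten the failover window, you could set a smaller lease interval in the [OSEv3:vars] section (the value shown is illustrative only; 30 seconds is the default):

osm_controller_lease_ttl=15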

To configure multiple masters, refer to the following section.

Multiple Masters with Multiple etcd

The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:

Host Name                Infrastructure Component to Install
master1.example.com      master (clustered using native HA) and node
master2.example.com      master (clustered using native HA) and node
master3.example.com      master (clustered using native HA) and node
lb.example.com           HAProxy to load balance API master endpoints
etcd1.example.com        etcd
etcd2.example.com        etcd
etcd3.example.com        etcd
node1.example.com        Node
node2.example.com        Node

When specifying multiple etcd hosts, stand-alone etcd (non-embedded) is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported. Stand-alone etcd can also be collocated on master hosts, if desired.

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

Example 1. Multiple Masters Using HAProxy Inventory File
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Multiple Masters with Master and etcd on the Same Host

The following describes an example environment for three masters with etcd on each host, one HAProxy load balancer, and two nodes using the native HA method:

Host Name                Infrastructure Component to Install
master1.example.com      master (clustered using native HA) and node, with etcd on each host
master2.example.com      master (clustered using native HA) and node, with etcd on each host
master3.example.com      master (clustered using native HA) and node, with etcd on each host
lb.example.com           HAProxy to load balance API master endpoints
node1.example.com        Node
node2.example.com        Node

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Running the Advanced Installation

After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you run the advanced installation playbook via Ansible. OpenShift Container Platform installations using the RPM-based installer are supported, while the containerized installer is currently a Technology Preview feature.

Due to a known issue, if NFS volumes are provisioned for any component, the following directories might be created after the installation runs, whether or not their components are deployed to NFS volumes:

  • /exports/logging-es

  • /exports/logging-es-ops/

  • /exports/metrics/

  • /exports/prometheus

  • /exports/prometheus-alertbuffer/

  • /exports/prometheus-alertmanager/

You can delete these directories after installation, as needed.
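
For example, if none of these components were actually deployed to NFS, you could remove the unused export directories on the NFS host (verify that they are unused before deleting them):

# rm -rf /exports/logging-es /exports/logging-es-ops /exports/metrics \
    /exports/prometheus /exports/prometheus-alertbuffer /exports/prometheus-alertmanager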

Running the RPM-based Installer

The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host. To run the installer, use the following command, specifying -i if your inventory file is located somewhere other than /etc/ansible/hosts:

Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks causes file descriptors to be created and not closed. Therefore, the system can run out of files to open and the playbook will fail.

# ansible-playbook  [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.

The installer caches playbook configuration values for 10 minutes by default. If you change any system, network, or inventory configuration and then re-run the installer within that 10-minute period, the new values are not used; the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection value in the /etc/ansible/ansible.cfg file. An example of this file is shown in Recommended Installation Practices.
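
For example, assuming your /etc/ansible/ansible.cfg sets fact_caching_connection = /tmp/ansible/facts (check your own file for the actual path), you could clear the cache before re-running the installer:

# rm -rf /tmp/ansible/facts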

Running the Containerized Installer

The openshift3/ose-ansible image is a containerized version of the OpenShift Container Platform installer. This installer image provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.

Running the Installer as a System Container

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables running the installer image from one of the target hosts without concern that the installation might restart docker on that host.

  1. As the root user, use the Atomic CLI to run the installer as a run-once system container:

    # atomic install --system \
        --storage=ostree \
        --set INVENTORY_FILE=/path/to/inventory \ (1)
        registry.access.redhat.com/openshift3/ose-ansible:v3.6
    1 Specify the location on the local host for your inventory file.

    This command initiates the cluster installation by using the inventory file specified and the root user’s SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.

    If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.

Running Other Playbooks

You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by using the containerized installer. The default value of the PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container.

For example, to run the pre-install checks playbook before installation, use the following command:

# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ (1)
    --set OPTS="-v" \ (2)
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
1 Set PLAYBOOK_FILE to the full path of the playbook starting at the playbooks/ directory. Playbooks are located in the same locations as with the RPM-based installer.
2 Set OPTS to add command line options to ansible-playbook.

Running the Installer as a Docker Container

The installer image can also run as a docker container anywhere that docker can run.

This method must not be used to run the installer on one of the hosts being configured, as the install may restart docker on the host, disrupting the installer container execution.

Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same.

At a minimum, when running the installer as a docker container you must provide:

  • SSH key(s), so that Ansible can reach your hosts.

  • An Ansible inventory file.

  • The location of the Ansible playbook to run against that inventory.

Here is an example of how to run an install via docker. Note that this must be run by a non-root user with access to docker.

$ docker run -t -u `id -u` \ (1)
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ (2)
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ (3)
    -e INVENTORY_FILE=/tmp/inventory \ (3)
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \ (4)
    -e OPTS="-v" \ (5)
    registry.access.redhat.com/openshift3/ose-ansible:v3.6
1 -u `id -u` makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container (SSH private keys are expected to be readable only by their owner).
2 -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your SSH key ($HOME/.ssh/id_rsa) under the container user’s $HOME/.ssh (/opt/app-root/src is the $HOME of the user in the container). If you mount the SSH key into a non-standard location you can add an environment variable with -e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible at it.

Note that the SSH key is mounted with the :Z flag. This is required so that the container can read the SSH key under its restricted SELinux context. This also means that your original SSH key file will be re-labeled to something like system_u:object_r:container_file_t:s0:c113,c247. For more details about :Z, check the docker-run(1) man page. Keep this in mind when providing these volume mount specifications, because relabeling can have unexpected consequences: for example, if you mount (and therefore re-label) your whole $HOME/.ssh directory, it will block the host’s sshd from accessing your public keys when you log in. For this reason, you may want to use a separate copy of the SSH key (or directory) so that the original file labels remain untouched.

3 -v $HOME/ansible/hosts:/tmp/inventory:Z and -e INVENTORY_FILE=/tmp/inventory mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file SELinux labels may need to be relabeled by using the :Z flag to allow reading in the container, depending on the existing label (for files in a user $HOME directory this is likely to be needed). So again you may prefer to copy the inventory to a dedicated location before mounting it.

The inventory file can also be downloaded from a web server if you specify the INVENTORY_URL environment variable, or generated dynamically using DYNAMIC_SCRIPT_URL to specify an executable script that provides a dynamic inventory.

4 -e PLAYBOOK_FILE=playbooks/byo/config.yml specifies the playbook to run (in this example, the BYO installer) as a relative path from the top level directory of openshift-ansible content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container.
5 -e OPTS="-v" supplies arbitrary command line options (in this case, -v to increase verbosity) to the ansible-playbook command that runs inside the container.
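
If you prefer not to re-label the files under $HOME/.ssh, one approach is to stage copies of the SSH key and inventory in a dedicated directory and mount those copies instead; the paths below are illustrative:

$ mkdir -p $HOME/openshift-install
$ cp $HOME/.ssh/id_rsa $HOME/openshift-install/ssh-privatekey
$ cp $HOME/ansible/hosts $HOME/openshift-install/inventory
$ docker run -t -u `id -u` \
    -v $HOME/openshift-install/ssh-privatekey:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/openshift-install/inventory:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \
    registry.access.redhat.com/openshift3/ose-ansible:v3.6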

Deploying the Template Service Broker

If you have enabled the service catalog and want to deploy the template service broker (TSB), run the following manual steps after the cluster installation completes successfully:

The template service broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Enabling the TSB currently requires opening unauthenticated access to the cluster; this security issue will be resolved before exiting the Technology Preview phase.

  1. Ensure that one or more source projects for the TSB were defined via openshift_template_service_broker_namespaces as described in Configuring the Template Service Broker.

  2. Run the following command to enable unauthenticated access for the TSB:

    $ oc adm policy add-cluster-role-to-group \
        system:openshift:templateservicebroker-client \
        system:unauthenticated system:authenticated
  3. Create a template-broker.yml file with the following contents:

    apiVersion: servicecatalog.k8s.io/v1alpha1
    kind: Broker
    metadata:
      name: template-broker
    spec:
      url: https://kubernetes.default.svc:443/brokers/template.openshift.io
  4. Use the file to register the broker:

    $ oc create -f template-broker.yml
  5. Enable the Technology Preview feature in the web console to use the TSB instead of the standard openshift global library behavior.

    1. Save the following script to a file (for example, tech-preview.js):

      window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.template_service_broker = true;
    2. Add the file to the master configuration file in /etc/origin/master/master-config.yml:

      assetConfig:
        ...
        extensionScripts:
          - /path/to/tech-preview.js
    3. Restart the master service:

      # systemctl restart atomic-openshift-master

Verifying the Installation

After the installation completes:

  1. Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:

    # oc get nodes
    
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.

    For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.

The default port for the console is 8443. If this was changed during the installation, the port can be found at openshift_master_console_port in the /etc/ansible/hosts file.
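
If you want a quick check from the command line before opening a browser, you can confirm that the console endpoint responds (the host name and port are the examples from above; -k skips certificate verification for self-signed certificates):

$ curl -k https://master.openshift.com:8443/console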

Verifying Multiple etcd Hosts

If you installed multiple etcd hosts:

  1. First, verify that the etcd package, which provides the etcdctl command, is installed:

    # yum install etcd
  2. On a master host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following:

    # etcdctl -C \
        https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
        --ca-file=/etc/origin/master/master.etcd-ca.crt \
        --cert-file=/etc/origin/master/master.etcd-client.crt \
        --key-file=/etc/origin/master/master.etcd-client.key cluster-health
  3. Also verify the member list is correct:

    # etcdctl -C \
        https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
        --ca-file=/etc/origin/master/master.etcd-ca.crt \
        --cert-file=/etc/origin/master/master.etcd-client.crt \
        --key-file=/etc/origin/master/master.etcd-client.key member list

Verifying Multiple Masters Using HAProxy

If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:

http://<lb_hostname>:9000

You can verify your installation by consulting the HAProxy Configuration documentation.

Optionally Securing Builds

Running a docker build is a privileged process, so the container has more access to the node than might be acceptable in some multi-tenant environments. If you do not trust your users, you can use a more secure option at the time of installation: disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information on this optional process.
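
As a sketch of the approach described in Securing Builds by Strategy (verify the exact role and group names against that document), removing the docker build strategy from all authenticated users might look like the following:

$ oc adm policy remove-cluster-role-from-group \
    system:build-strategy-docker system:authenticated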

Uninstalling OpenShift Container Platform

You can uninstall OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Container Platform content installed by Ansible, including:

  • Configuration

  • Containers

  • Default templates and image streams

  • Images

  • RPM packages

The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook.

Before you uninstall your cluster, review the following list of scenarios and make sure that uninstalling is the best option:

  • If your installation process failed and you want to continue the process, you can retry the installation. The installation playbooks are designed so that if they fail to install your cluster, you can run them again without needing to uninstall the cluster.

  • If you want to restart a failed installation from the beginning, you can uninstall the OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook, as described in the following section. This playbook only uninstalls the OpenShift Container Platform assets for the most recent version that you installed.

  • If you must change the host names or certificate names, you must recreate your certificates before retrying installation by first running the uninstall.yml playbook. Running the installation playbooks again will not recreate the certificates.

  • If you want to repurpose hosts that you installed OpenShift Container Platform on earlier, such as with a proof-of-concept installation, or if you want to install a different minor or asynchronous version of OpenShift Container Platform, you must reimage the hosts before you use them in a production cluster. After you run the uninstall.yml playbooks, some host assets might remain in an altered state.

If you want to uninstall OpenShift Container Platform across all hosts in your cluster, run the playbook using the inventory file you used when installing OpenShift Container Platform initially or ran most recently:

# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

Uninstalling Nodes

You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:

This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.

  1. First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.

  2. Create a different inventory file that only references those hosts. For example, to only delete content from one node:

    [OSEv3:children]
    nodes (1)
    
    [OSEv3:vars]
    ansible_ssh_user=root
    openshift_deployment_type=openshift-enterprise
    
    [nodes]
    node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" (2)
    1 Only include the sections that pertain to the hosts you are interested in uninstalling.
    2 Only include hosts that you want to uninstall.
  3. Specify that new inventory file using the -i option when running the uninstall.yml playbook:

    # ansible-playbook -i /path/to/new/file \
        /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

When the playbook completes, all OpenShift Container Platform content should be removed from any specified hosts.

Known Issues

  • On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than what was intended. However, this is a transient event and the system does correct itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.

  • On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, see Uninstalling OpenShift Container Platform for instructions.

What’s Next?

Now that you have a working OpenShift Container Platform instance, you can: