Prerequisites

Overview

OpenShift Enterprise infrastructure components can be installed across multiple hosts. The following sections outline the system requirements and instructions for preparing your environment and hosts before installing OpenShift Enterprise.

Planning

For production environments, several factors that can influence installation must be considered prior to deployment:

  • How many hosts are required to run the cluster?

  • How many pods are required in your cluster?

  • Is high availability required? High availability is recommended for fault tolerance.

  • Which installation type do you want to use: RPM or containerized?

System Requirements

You must have an active OpenShift Enterprise subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.

OpenShift Enterprise 3.2 requires Docker 1.9.1, and supports Docker 1.10 as of OpenShift Enterprise 3.2.1.

The system requirements vary per host type:

Masters

  • Physical or virtual system, or an instance running on a public or private IaaS.

  • Base OS: RHEL 7.1 or later with "Minimal" installation option, or RHEL Atomic Host 7.2.4 or later.

  • 2 vCPU.

  • Minimum 8 GB RAM.

  • Minimum 30 GB hard disk space for the file system containing /var/.

Nodes

  • Physical or virtual system, or an instance running on a public or private IaaS.

  • Base OS: RHEL 7.1 or later with "Minimal" installation option, or RHEL Atomic Host 7.2.4 or later.

  • NetworkManager 1.0 or later

  • 1 vCPU.

  • Minimum 8 GB RAM.

  • Minimum 15 GB hard disk space for the file system containing /var/.

  • An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage below.

OpenShift Enterprise only supports servers with x86_64 architecture.

Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.

Host Recommendations

The following apply to production environments. Test or sample environments will function with the minimum requirements.

Master Hosts

In a highly available OpenShift Enterprise cluster with external etcd, a master host should have 1 CPU core and 1.5 GB of memory, on top of the defaults in the table above, for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Enterprise cluster of 2000 pods would be 2 CPU cores and 3 GB of RAM, in addition to the minimum requirements for a master host of 2 CPU cores and 8 GB of RAM.

When planning an environment with multiple masters, a minimum of three etcd hosts and a load balancer between the master hosts are required.

Node Hosts

The size of a node host depends on the expected size of its workload. As an OpenShift Enterprise cluster administrator, you will need to calculate the expected workload, then add about 10% for overhead. For production environments, allocate enough resources so that node host failure does not affect your maximum capacity.

Use the above with the following table to plan the maximum loads for nodes and pods:

Host Sizing Recommendation

Maximum nodes per cluster: 300

Maximum pods per node: 110

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Configuring Core Usage

By default, OpenShift Enterprise masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Enterprise to use by setting the GOMAXPROCS environment variable.

For example, run the following before starting the server to make OpenShift Enterprise only run on one core:

# export GOMAXPROCS=1
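
An exported variable applies only to the current shell and the processes started from it. As a sketch of one way to make the setting persistent on an RPM-based installation, you could use a systemd drop-in; the atomic-openshift-master service name is assumed here, so adjust it to the service actually running on the host:

# mkdir -p /etc/systemd/system/atomic-openshift-master.service.d
# cat <<'EOF' > /etc/systemd/system/atomic-openshift-master.service.d/gomaxprocs.conf
[Service]
Environment=GOMAXPROCS=1
EOF
# systemctl daemon-reload
# systemctl restart atomic-openshift-master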

Security Warning

OpenShift Enterprise runs containers on your hosts, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access your host’s Docker daemon and perform docker build and docker push operations. As such, you should be aware of the inherent security risks associated with performing docker run operations on arbitrary images as they effectively have root access.

To address these risks, OpenShift Enterprise uses security context constraints, which control the actions that pods can perform and the resources they can access.

Environment Requirements

The following must be set up in your environment before OpenShift Enterprise can be installed.

DNS

A fully functional DNS environment is a requirement for OpenShift Enterprise to work correctly. Adding entries into the /etc/hosts file is not enough, because that file is not copied into containers running on the platform.

To configure the OpenShift Enterprise DNS environment:

Key components of OpenShift Enterprise run themselves inside of containers. By default, these containers receive their /etc/resolv.conf DNS configuration file from their host. OpenShift Enterprise then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq. If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.
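
For example, to check which DNS IP a node will inject into pods, you can inspect the node configuration and compare it with a running pod's resolv.conf. This is only a quick sketch; the file path is the one given above, <pod_name> is a placeholder, and the oc command assumes you are logged in to the cluster:

# grep dnsIP /etc/origin/node/node-config.yaml
$ oc exec <pod_name> -- cat /etc/resolv.conf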

As of OpenShift Enterprise 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.

Previously, in OpenShift Enterprise 3.1, a DNS server could not be installed on a master host, because it ran its own internal DNS server. Now, with masters using dnsmasq, SkyDNS is configured to listen on port 8053 so that dnsmasq can run on the masters. Note that these DNS changes (dnsmasq configured on nodes and the SkyDNS port change) only apply to new installations of OpenShift Enterprise 3.2. Clusters upgraded to OpenShift Enterprise 3.2 from a previous version do not currently have these changes applied during the upgrade process.

NetworkManager is required on the nodes in order to populate dnsmasq with the DNS IP addresses.

If you do not have a properly functioning DNS environment, you could experience failure with:

  • Product installation via the reference Ansible-based scripts

  • Deployment of the infrastructure containers (registry, routers)

  • Access to the OpenShift Enterprise web console, because it is not accessible via IP address alone

Configuring a DNS Environment

To properly configure your DNS environment for OpenShift Enterprise:

  1. Check the contents of /etc/resolv.conf:

    $ cat /etc/resolv.conf
    # Generated by NetworkManager
    search ose3.example.com
    nameserver 10.64.33.1
    # nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
  2. Ensure that the DNS servers listed in /etc/resolv.conf are able to resolve to the addresses of all the masters and nodes in your OpenShift Enterprise environment:

    $ dig <node_hostname> @<IP_address> +short

    For example:

    $ dig node1.ose3.example.com @10.64.33.1 +short
    10.64.33.156
    $ dig master.ose3.example.com @10.64.33.1 +short
    10.64.33.37
  3. If DHCP is:

    • Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager (see the sketch after this list).

    • Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can add a value to dnsIP in the node-config.yaml file to prepend a nameserver to the pod’s resolv.conf file. The second nameserver is then defined by the host’s first nameserver. By default, this will be the IP address of the node host.

      For most configurations, do not set the openshift_dns_ip option during the advanced installation of OpenShift Enterprise (using Ansible), because this option overrides the default IP address set by dnsIP.

      Instead, allow the installer to configure each node to use dnsmasq and forward requests to SkyDNS or the external DNS provider. If you do set the openshift_dns_ip option, then it should be set either with a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
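
For the case where DHCP is disabled (step 3 above), a minimal sketch of configuring a static interface with DNS nameservers through NetworkManager might look like the following; the connection name eth0 and the addresses are illustrative and follow the earlier examples:

# nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 10.64.33.156/24 ipv4.gateway 10.64.33.1 \
    ipv4.dns 10.64.33.1 ipv4.dns-search ose3.example.com
# nmcli con up eth0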

Disabling dnsmasq

If you want to disable dnsmasq (for example, if your /etc/resolv.conf is managed by a configuration tool other than NetworkManager), then set openshift_use_dnsmasq to false in the Ansible playbook.

However, certain containers do not properly move to the next nameserver when the first issues SERVFAIL. Red Hat Enterprise Linux (RHEL)-based containers do not suffer from this, but certain versions of uclibc and musl do.
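
As a sketch, assuming the default /etc/ansible/hosts inventory (referenced later in this topic) that already contains an [OSEv3:vars] section, the variable could be added as follows:

# sed -i '/^\[OSEv3:vars\]/a openshift_use_dnsmasq=false' /etc/ansible/hosts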

Configuring Wildcard DNS

Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.

A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Enterprise router.

For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:

*.cloudapps.example.com. 300 IN  A 192.168.133.2
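
Because the record is a wildcard, any name under the domain should resolve to the same address. As a quick check against the example above (the name and DNS server here are illustrative), the following query should return 192.168.133.2:

$ dig somename.cloudapps.example.com @10.64.33.1 +short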

In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.

In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Enterprise may fail to resolve host names properly.

Running Diagnostics

To explore your DNS setup and run specific DNS queries, you can use the host and dig commands (part of the bind-utils package). For example, you can query a specific DNS server, or check if recursion is involved.

$ host `hostname`
ose3-master.example.com has address 172.16.25.41

$ dig ose3-node1.example.com  +short
172.16.25.45

Network Access

A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure it using an FQDN, the FQDN should resolve on all nodes.

NetworkManager

NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required.

Required Ports

OpenShift Enterprise infrastructure components communicate with each other using ports, which are communication endpoints that are identifiable for specific processes or services. Ensure the following ports required by OpenShift Enterprise are open between hosts, for example if you have a firewall in your environment. Some ports are optional depending on your configuration and usage.
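
If you manage host firewalls by hand with iptables (the iptables-services package is installed during host preparation later in this topic), rules similar to the following sketch could open a few of the ports from the tables below, for example 10250/TCP and 4789/UDP on nodes and 8443/TCP on masters. The reference Ansible-based installation normally configures these rules for you, so treat this only as an illustration:

# iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
# iptables -I INPUT -p udp --dport 4789 -j ACCEPT
# iptables -I INPUT -p tcp --dport 8443 -j ACCEPT
# service iptables save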

Table 1. Node to Node

4789 UDP: Required for SDN communication between pods on separate hosts.

Table 2. Nodes to Master

53 or 8053 TCP/UDP: Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.

4789 UDP: Required for SDN communication between pods on separate hosts.

443 or 8443 TCP: Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.

Table 3. Master to Node

4789 UDP: Required for SDN communication between pods on separate hosts.

10250 TCP: The master proxies to node hosts via the Kubelet for oc commands.

In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself.

In a single-master cluster:

  • Ports marked with (L) must be open.

  • Ports not marked with (L) need not be open.

In a multiple-master cluster, all the listed ports must be open.

Table 4. Master to Master

53 (L) or 8053 (L) TCP/UDP: Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.

2049 (L) TCP/UDP: Required when provisioning an NFS host as part of the installer.

2379 TCP: Used for standalone etcd (clustered) to accept changes in state.

2380 TCP: etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).

4001 (L) TCP: Used for embedded etcd (non-clustered) to accept changes in state.

4789 (L) UDP: Required for SDN communication between pods on separate hosts.

Table 5. External to Load Balancer

9000 TCP: Optional when using the native HA method; allows access to the HAProxy statistics page.

Table 6. External to Master

443 or 8443 TCP: Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.

Table 7. IaaS Deployments

22 TCP: Required for SSH by the installer or system administrator.

53 or 8053 TCP/UDP: Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.

80 or 443 TCP: For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.

1936 TCP: For router statistics use. Required to be open when running the template router to access statistics. Can be open externally or internally depending on whether you want the statistics exposed publicly.

4001 TCP: For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections.

2379 and 2380 TCP: For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.

4789 UDP: For VxLAN use (OpenShift Enterprise SDN). Required only internally on node hosts.

8443 TCP: For use by the OpenShift Enterprise web console, shared with the API server.

10250 TCP: For use by the Kubelet. Required to be externally open on nodes.

Notes

  • In the above examples, port 4789 is used for User Datagram Protocol (UDP).

  • When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.

  • OpenShift Enterprise internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.

  • The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.

Table 8. Aggregated Logging

9200 TCP: For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.

9300 TCP: For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.

Git Access

You must have either Internet access and a GitHub account, or read and write access to an internal, HTTP-based Git server.

Persistent Storage

The Kubernetes persistent volume framework allows you to provision an OpenShift Enterprise cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Enterprise installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.

The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Enterprise cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.

SELinux

Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Enterprise or the installer will fail. Also, configure SELINUXTYPE=targeted in the /etc/selinux/config file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Cloud Provider Considerations

Set up the Security Group

When installing on AWS or OpenStack, ensure that you set up the appropriate security groups. The following are some of the ports that you must have in your security groups, without which the installation will fail. You may need more depending on the cluster configuration you want to install. For more information, and to adjust your security groups accordingly, see Required Ports. A sketch of adding one of these rules with the AWS CLI follows the lists below.

All OpenShift Enterprise Hosts

  • tcp/22 from host running the installer/Ansible

etcd Security Group

  • tcp/2379 from masters

  • tcp/2380 from etcd hosts

Master Security Group

  • tcp/8443 from 0.0.0.0/0

  • tcp/53 from all OpenShift Enterprise hosts for environments installed prior to or upgraded to 3.2

  • udp/53 from all OpenShift Enterprise hosts for environments installed prior to or upgraded to 3.2

  • tcp/8053 from all OpenShift Enterprise hosts for new environments installed with 3.2

  • udp/8053 from all OpenShift Enterprise hosts for new environments installed with 3.2

Node Security Group

  • tcp/10250 from masters

  • udp/4789 from nodes

Infrastructure Nodes (ones that can host the OpenShift Enterprise router)

  • tcp/443 from 0.0.0.0/0

  • tcp/80 from 0.0.0.0/0

If configuring ELBs for load balancing the masters and/or routers, you also need to configure Ingress and Egress security groups for the ELBs appropriately.
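
As a sketch of how one of the rules above might be added with the AWS CLI (the security group ID is illustrative):

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8443 --cidr 0.0.0.0/0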

Override Detected IP Addresses and Host Names

Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:

# ansible-playbook playbooks/byo/openshift_facts.yml

Now, verify the detected common settings. If they are not what you expect them to be, you can override them.

The Advanced Installation topic discusses the available Ansible variables in greater detail.

Variable Usage

hostname

  • Should resolve to the internal IP from the instances themselves.

  • openshift_hostname overrides.

ip

  • Should be the internal IP of the instance.

  • openshift_ip overrides.

public_hostname

  • Should resolve to the external IP from hosts outside of the cloud.

  • openshift_public_hostname overrides.

public_ip

  • Should be the externally accessible IP associated with the instance.

  • openshift_public_ip overrides.

use_openshift_sdn

  • Should be true unless the cloud is GCE.

  • openshift_use_openshift_sdn overrides.

If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.

In AWS, situations that require overriding the variables include:

Variable Usage

hostname

The user is installing in a VPC that is not configured for both DNS hostnames and DNS resolution.

ip

You have multiple network interfaces configured and want to use one other than the default. You must first set openshift_set_node_ip to True. Otherwise, the SDN would attempt to use the hostname setting or try to resolve the host name for the IP.

public_hostname

  • A master instance where the VPC subnet is not configured for Auto-assign Public IP. For external access to this master, you need to have an ELB or other load balancer configured that would provide the external access needed, or you need to connect over a VPN connection to the internal name of the host.

  • A master instance where metadata is disabled.

  • This value is not actually used by the nodes.

public_ip

  • A master instance where the VPC subnet is not configured for Auto-assign Public IP.

  • A master instance where metadata is disabled.

  • This value is not actually used by the nodes.

If setting openshift_hostname to something other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.

For EC2 hosts in particular, they must be deployed in a VPC that has both DNS host names and DNS resolution enabled, and openshift_hostname should not be overridden.
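
When overrides are needed, they are typically supplied as per-host variables in the Ansible inventory used for the advanced installation. The following is only a sketch of what such an entry in the [nodes] section might look like; the host names and addresses are illustrative:

[nodes]
node1.example.com openshift_hostname=ip-10-0-0-11.ec2.internal openshift_ip=10.0.0.11 openshift_public_hostname=node1.example.com openshift_public_ip=203.0.113.11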

Post-Installation Configuration for Cloud Providers

Following the installation process, you can configure OpenShift Enterprise for AWS, OpenStack, or GCE.

Host Preparation

Before installing OpenShift Enterprise, you must first prepare each host per the following.

Software Prerequisites

Installing an Operating System

A base installation of RHEL 7.1 or later or RHEL Atomic Host 7.2.4 or later is required for master and node hosts. See the RHEL or RHEL Atomic Host installation documentation for instructions, if required.

Registering the Hosts

Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Enterprise subscription attached to access the required packages.

  1. On each host, register with RHSM:

    # subscription-manager register --username=<user_name> --password=<password>
  2. List the available subscriptions:

    # subscription-manager list --available
  3. In the output for the previous command, find the pool ID for an OpenShift Enterprise subscription and attach it:

    # subscription-manager attach --pool=<pool_id>

    When finding the pool ID, the related subscription name might include either "OpenShift Enterprise" or "OpenShift Container Platform", due to the product name change introduced with version 3.3.

  4. Disable all repositories and enable only the required ones:

    # subscription-manager repos --disable="*"
    # subscription-manager repos \
        --enable="rhel-7-server-rpms" \
        --enable="rhel-7-server-extras-rpms" \
        --enable="rhel-7-server-ose-3.2-rpms"

Managing Packages

For RHEL 7 systems:

  1. Install the following base packages:

    # yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
  2. Update the system to the latest packages:

    # yum update
  3. Install the following package, which provides OpenShift Enterprise utilities and pulls in other tools required by the quick and advanced installation methods, such as Ansible and related configuration files:

    # yum install atomic-openshift-utils
  4. Install the following *-excluder packages on each RHEL 7 system, which help ensure your systems stay on the correct versions of the atomic-openshift and docker packages when you are not trying to upgrade, according to the OpenShift Enterprise version:

    # yum install atomic-openshift-excluder atomic-openshift-docker-excluder
  5. The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following command on each host to remove the atomic-openshift packages from the list for the duration of the installation.

    # atomic-openshift-excluder unexclude

For RHEL Atomic Host 7 systems:

  1. Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:

    # atomic host upgrade
  2. After the upgrade is completed and prepared for the next boot, reboot the host:

    # systemctl reboot

Installing Docker

At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift Enterprise.

  1. For RHEL 7 systems, install Docker 1.10.

    On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.

    The atomic-openshift-docker-excluder package that was installed in Software Prerequisites should ensure that the correct version of Docker is installed in this step:

    # yum install docker

    After the package installation is complete, verify that version 1.10.3 was installed:

    # docker version
  2. Edit the /etc/sysconfig/docker file and add --insecure-registry 172.30.0.0/16 to the OPTIONS parameter. For example:

    OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'

    If using the Quick Installation method, you can easily script a complete installation from a kickstart or cloud-init setup by changing the default configuration file:

    # sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' \
    /etc/sysconfig/docker

    The Advanced Installation method automatically changes /etc/sysconfig/docker.

    The --insecure-registry option instructs the Docker daemon to trust any Docker registry on the indicated subnet, rather than requiring a certificate.

    172.30.0.0/16 is the default value of the servicesSubnet variable in the master-config.yaml file. If this has changed, then the --insecure-registry value in the above step should be adjusted to match, as it is indicating the subnet for the registry to use. Note that the openshift_portal_net variable can be set in the Ansible inventory file and used during the advanced installation method to modify the servicesSubnet variable.

    After the initial OpenShift Enterprise installation is complete, you can choose to secure the integrated Docker registry, which involves adjusting the --insecure-registry option accordingly.

Configuring Docker Storage

Docker containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.

For RHEL Atomic Host

The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.

If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.

For RHEL

The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.

You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:

  • Option A) Use an additional block device.

  • Option B) Use an existing, specified volume group.

  • Option C) Use the remaining free space from the volume group where your root file system is located.

Option A is the most robust option, however it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host.

  1. Create the docker-pool volume using one of the following three options:

    • Option A) Use an additional block device.

      In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      DEVS=/dev/vdc
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
      Checking that no-one is using this disk right now ...
      OK
      
      Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
      sfdisk:  /dev/vdc: unrecognized partition table type
      
      Old situation:
      sfdisk: No partitions found
      
      New situation:
      Units: sectors of 512 bytes, counting from 0
      
         Device Boot    Start       End   #sectors  Id  System
      /dev/vdc1          2048  31457279   31455232  8e  Linux LVM
      /dev/vdc2             0         -          0   0  Empty
      /dev/vdc3             0         -          0   0  Empty
      /dev/vdc4             0         -          0   0  Empty
      Warning: partition 1 does not start at a cylinder boundary
      Warning: partition 1 does not end at a cylinder boundary
      Warning: no primary partition is marked bootable (active)
      This does not matter for LILO, but the DOS MBR will not boot this disk.
      Successfully wrote the new partition table
      
      Re-reading the partition table ...
      
      If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
      to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
      (See fdisk(8).)
        Physical volume "/dev/vdc1" successfully created
        Volume group "docker-vg" successfully created
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option B) Use an existing, specified volume group.

      In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option C) Use the remaining free space from the volume group where your root file system is located.

      Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 32.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted rhel/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
  2. Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:

    # cat /etc/sysconfig/docker-storage
    DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
    
    # lvs
      LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      docker-pool rhel twi-a-t---  9.29g             0.00   0.12

    Before using Docker or OpenShift Enterprise, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.

  3. Check if Docker is running:

    # systemctl is-active docker
  4. If Docker has not yet been started on the host, enable and start the service:

    # systemctl enable docker
    # systemctl start docker

    If Docker is already running, re-initialize Docker:

    This will destroy any Docker containers or images currently on the host.

    # systemctl stop docker
    # rm -rf /var/lib/docker/*
    # systemctl restart docker

    If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift Enterprise.

Reconfiguring Docker Storage

Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
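
For example, if Option A above was used to create the docker-vg volume group on /dev/vdc, the cleanup before re-running docker-storage-setup might look like the following sketch; double-check the device, volume group, and logical volume names on your host before removing anything:

# systemctl stop docker
# lvremove /dev/docker-vg/docker-pool
# vgremove docker-vg
# pvremove /dev/vdc1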

See Logical Volume Manager Administration for more detailed information on LVM management.

Managing Docker Container Logs

Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.

--log-opt max-size: Sets the size at which a new log file is created.

--log-opt max-file: Sets the maximum number of log files that are kept per container.

For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3:

OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'

Next, restart the Docker service:

# systemctl restart docker

Viewing Available Container Logs

Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:

# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets

See Docker’s documentation for additional information on how to Configure Logging Drivers.

Ensuring Host Access

The quick and advanced installation methods require a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
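
As a minimal sketch of granting passwordless sudo to a dedicated installation user on each destination host (the user name openshift is illustrative):

# useradd openshift
# echo 'openshift ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/openshift
# chmod 0440 /etc/sudoers.d/openshift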

For example, you can generate an SSH key on the host where you will invoke the installation process:

# ssh-keygen

Do not use a password.

An easy way to distribute your SSH keys is by using a bash loop:

# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Modify the host names in the above command according to your configuration.

Setting Global Proxy Values

The OpenShift Enterprise installer uses the proxy settings in the /etc/environment file.

Ensure the following domain suffixes and IP addresses are in the /etc/environment file in the no_proxy parameter:

  • master and node host names (domain suffix).

  • Other internal host names (domain suffix).

  • Etcd IP addresses (must be IP addresses and not host names, as etcd access is done by IP address).

  • Docker registry IP address.

  • Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in the Ansible inventory file, by default /etc/ansible/hosts.

  • Kubernetes internal domain suffix: cluster.local.

  • Kubernetes internal domain suffix: .svc.

The following example assumes http_proxy and https_proxy values are set:

no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1

Because no_proxy does not support CIDR notation, you can use domain suffixes.
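
Putting this together, the proxy-related portion of /etc/environment might look like the following sketch; the proxy host and port are illustrative, and the no_proxy line repeats the example above:

http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1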

What’s Next?

If you are interested in installing OpenShift Enterprise using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods.

When you are ready to proceed, you can install OpenShift Enterprise using the quick installation or advanced installation method.