Host Preparation

Setting PATH

The PATH for the root user on each host must contain the following directories:

  • /bin

  • /sbin

  • /usr/bin

  • /usr/sbin

These should all be included by default in a fresh RHEL 7.x installation.
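
You can verify this on each host before proceeding. The following loop is a minimal check, not part of the original procedure; it prints any required directory that is missing from the root user's PATH:

# for dir in /bin /sbin /usr/bin /usr/sbin; \
    do echo ":$PATH:" | grep -q ":$dir:" || echo "$dir missing from PATH"; \
    done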

Installing Base Packages

For RHEL 7 systems:

  1. Install the following base packages:

    # yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
  2. Update the system to the latest packages:

    # yum update

For RHEL Atomic Host 7 systems:

  1. Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:

    # atomic host upgrade
  2. After the upgrade is completed and prepared for the next boot, reboot the host:

    # systemctl reboot

Preparing for Advanced Installations

If you plan to use the containerized installer to run an advanced installation (currently a Technology Preview feature):

  1. Install the atomic package:

    # yum install atomic
  2. Skip to Installing Docker.

If you plan to use the RPM-based installer to run an advanced installation:

  1. Install Ansible. For convenience, the following steps are provided if you want to use EPEL as a package source for Ansible:

    1. Install the EPEL repository:

      # yum -y install \
          https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    2. Disable the EPEL repository globally so that it is not accidentally used during later steps of the installation:

      # sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
    3. Install the packages for Ansible:

      # yum -y --enablerepo=epel install ansible pyOpenSSL
  2. Clone the openshift/openshift-ansible repository from GitHub, which provides the required playbooks and configuration files:

    # cd ~
    # git clone https://github.com/openshift/openshift-ansible
    # cd openshift-ansible

    Be sure to stay on the master branch of the openshift-ansible repository when running an advanced installation.
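
    For example, you can confirm that the clone is on the expected branch before running the installer (the output shown assumes a fresh clone):

    # git rev-parse --abbrev-ref HEAD
    master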

Installing Docker

At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OKD.

On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.

For RHEL 7 systems, install Docker 1.12:

# yum install docker-1.12.6

After the package installation is complete, verify that version 1.12 was installed:

# rpm -V docker-1.12.6
# docker version

The Advanced Installation method automatically changes /etc/sysconfig/docker.

Configuring Docker Storage

Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.

For RHEL Atomic Host

The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.

If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.
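
You can inspect the current allocation with standard LVM tooling; for example (an illustrative check, run as root):

# lvs
# docker info

In the docker info output, the devicemapper storage section reports the pool name and the data and metadata space still available.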

For RHEL

The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.

Docker stores images and containers in a graph driver, a pluggable storage technology such as DeviceMapper, OverlayFS, or Btrfs. Each has advantages and disadvantages. For example, OverlayFS is faster than DeviceMapper at starting and stopping containers, but it is not Portable Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a union file system, and it is not supported prior to Red Hat Enterprise Linux 7.2. See the Red Hat Enterprise Linux release notes for information on using OverlayFS with your version of RHEL.

For more information on the benefits and limitations of DeviceMapper and OverlayFS, see Choosing a Graph Driver.
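
To see which graph driver a host is currently using, you can inspect the Docker daemon; this is an illustrative check, and loopback storage shows up as /dev/loop* data files in the devicemapper details:

# docker info | grep -A4 'Storage Driver'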

Configuring OverlayFS

OverlayFS is a type of union file system. It allows you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified.

Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.

For information on enabling the OverlayFS storage driver for the Docker service, see the Red Hat Enterprise Linux Atomic Host documentation.
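
As a minimal sketch, assuming a docker-storage-setup version that supports the STORAGE_DRIVER option, selecting OverlayFS can look like the following; consult the linked documentation for the authoritative procedure:

# cat <<EOF > /etc/sysconfig/docker-storage-setup
STORAGE_DRIVER=overlay2
EOF
# docker-storage-setup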

Configuring Thin Pool Storage

You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:

  • Option A) Use an additional block device.

  • Option B) Use an existing, specified volume group.

  • Option C) Use the remaining free space from the volume group where your root file system is located.

Option A is the most robust option; however, it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host. Option C is known to cause issues with some applications, for example Red Hat Mobile Application Platform (RHMAP).

  1. Create the docker-pool volume using one of the following three options:

    • Option A) Use an additional block device.

      In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      DEVS=/dev/vdc
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
      Checking that no-one is using this disk right now ...
      OK
      
      Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
      sfdisk:  /dev/vdc: unrecognized partition table type
      
      Old situation:
      sfdisk: No partitions found
      
      New situation:
      Units: sectors of 512 bytes, counting from 0
      
         Device Boot    Start       End   #sectors  Id  System
      /dev/vdc1          2048  31457279   31455232  8e  Linux LVM
      /dev/vdc2             0         -          0   0  Empty
      /dev/vdc3             0         -          0   0  Empty
      /dev/vdc4             0         -          0   0  Empty
      Warning: partition 1 does not start at a cylinder boundary
      Warning: partition 1 does not end at a cylinder boundary
      Warning: no primary partition is marked bootable (active)
      This does not matter for LILO, but the DOS MBR will not boot this disk.
      Successfully wrote the new partition table
      
      Re-reading the partition table ...
      
      If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
      to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
      (See fdisk(8).)
        Physical volume "/dev/vdc1" successfully created
        Volume group "docker-vg" successfully created
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option B) Use an existing, specified volume group.

      In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option C) Use the remaining free space from the volume group where your root file system is located.

      Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 32.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted rhel/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
  2. Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:

    # cat /etc/sysconfig/docker-storage
    DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
    
    # lvs
      LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      docker-pool rhel twi-a-t---  9.29g             0.00   0.12

    Before using Docker or OKD, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.
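
    To see how much room the pool has left to grow into, you can check the volume group's free space (an illustrative check):

    # vgs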

  3. Check if Docker is running:

    # systemctl is-active docker
  4. If Docker has not yet been started on the host, enable and start the service:

    # systemctl enable docker
    # systemctl start docker

    If Docker is already running, re-initialize Docker:

    This will destroy any containers or images currently on the host.

    # systemctl stop docker
    # rm -rf /var/lib/docker/*
    # systemctl restart docker

    If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OKD.

Reconfiguring Docker Storage

Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
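
As an illustrative sketch, assuming the dedicated volume group docker-vg on /dev/vdc1 from the Option A example above, the cleanup might look like the following; stop Docker first if it is running:

# systemctl stop docker
# lvremove docker-vg/docker-pool
# vgremove docker-vg
# pvremove /dev/vdc1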

See Logical Volume Manager Administration for more detailed information on LVM management.

Enabling Image Signature Support

OKD is capable of cryptographically verifying images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works.

You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater.

Install the atomic package if it is not installed on the host system:

$ yum install atomic

The atomic trust sub-command manages trust configuration. The default configuration is to whitelist all registries. This means no signature verification is configured.

$ atomic trust show
* (default)                         accept

A reasonable configuration might be to whitelist a particular registry or namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor registry. The following set of commands performs this example configuration:

Example Atomic Trust Configuration
$ atomic trust add --type insecureAcceptAnything 172.30.1.1:5000

$ atomic trust add --sigstoretype atomic \
  --pubkeys pub@example.com \
  172.30.1.1:5000/production

$ atomic trust add --sigstoretype atomic \
  --pubkeys /etc/pki/example.com.pub \
  172.30.1.1:5000/production

$ atomic trust add --sigstoretype web \
  --sigstore https://access.redhat.com/webassets/docker/content/sigstore \
  --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
  registry.access.redhat.com

# atomic trust show
* (default)                         accept
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com

When all the signed sources are verified, nodes may be further hardened with a global reject default:

$ atomic trust default reject

$ atomic trust show
* (default)                         reject
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com

See the atomic-trust man page (man atomic-trust) for additional examples.

The following files and directories comprise the trust configuration of a host:

  • /etc/containers/registries.d/*

  • /etc/containers/policy.json

The trust configuration may be managed directly on each node or the generated files managed on a separate host and distributed to the appropriate nodes using Ansible, for example. See this Red Hat Knowledgebase Article for an example of automating file distribution with Ansible.
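
For example, a centrally maintained configuration could be pushed to a set of nodes with a loop like the following (an illustrative sketch; the host names are placeholders, and an Ansible playbook copying the same files achieves the same result):

# for host in node1.example.com node2.example.com; \
    do scp /etc/containers/policy.json $host:/etc/containers/; \
    scp -r /etc/containers/registries.d $host:/etc/containers/; \
    done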

Managing Container Logs

Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.

  • --log-opt max-size: Sets the size at which a new log file is created.

  • --log-opt max-file: Sets the maximum number of log files that are kept; the oldest file is deleted when the limit is exceeded.

For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3:

OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'

Next, restart the Docker service:

# systemctl restart docker

Viewing Available Container Logs

Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:

# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets

See Docker’s documentation for additional information on how to configure logging drivers.

Blocking Local Volume Usage

When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out of space issue and could bring down the host.

In OKD, users trying to run their own images risk filling the entire storage space on a node host. One solution is to prevent users from running images with volumes. This way, the storage a user can access is limited to what the cluster administrator provisions, and storage quota can be assigned to it.

Using docker-novolume-plugin solves this issue by preventing containers with local volumes defined from starting. In particular, the plug-in blocks docker run commands that contain:

  • The --volumes-from option

  • Images that have VOLUME(s) defined

  • References to existing volumes that were provisioned with the docker volume command

The plug-in does not block references to bind mounts.
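
For example, once the plug-in is enabled (see the steps below), the first command is blocked because the anonymous volume is backed by local storage, while the bind mount in the second command is still permitted (illustrative):

$ docker run -v /data busybox true              # blocked: local volume
$ docker run -v /host/dir:/data busybox true    # allowed: bind mount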

To enable docker-novolume-plugin, perform the following steps on each node host:

  1. Install the docker-novolume-plugin package:

    $ yum install docker-novolume-plugin
  2. Enable and start the docker-novolume-plugin service:

    $ systemctl enable docker-novolume-plugin
    $ systemctl start docker-novolume-plugin
  3. Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:

    --authorization-plugin=docker-novolume-plugin
  4. Restart the docker service:

    $ systemctl restart docker

After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:

runContainer: API error (500): authorization denied by plugin
docker-novolume-plugin: volumes are not allowed

Ensuring Host Access

The advanced installation method requires a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
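
If you run the installer as a non-root user, passwordless sudo can be granted on each destination host with a drop-in sudoers file; the user name okdinstall below is a placeholder for your installer account:

# echo 'okdinstall ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/okdinstall
# chmod 440 /etc/sudoers.d/okdinstall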

For example, you can generate an SSH key on the host where you will invoke the installation process:

# ssh-keygen

Do not set a passphrase; the installation process requires non-interactive SSH access to each host.
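
Equivalently, you can generate the key non-interactively with an empty passphrase:

# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa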

An easy way to distribute your SSH keys is by using a bash loop:

# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Modify the host names in the above command according to your configuration.

Setting Proxy Overrides

If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OKD components.

The no_proxy parameter in the /etc/environment file is not the same as the global proxy values that you set in your inventory file. The global proxy values configure specific OKD services with your proxy settings. See Configuring Global Proxy Options for details.

If the /etc/environment file contains proxy values, define the following values in the no_proxy parameter of that file on each node:

  • Master and node host names or their domain suffix.

  • Other internal host names or their domain suffix.

  • etcd IP addresses. You must provide IP addresses and not host names because etcd access is controlled by IP address.

  • Kubernetes IP address, by default 172.30.0.1. This must match the value set in the openshift_portal_net parameter in your inventory file.

  • Kubernetes internal domain suffix, cluster.local.

  • Kubernetes internal domain suffix, .svc.

Because no_proxy does not support CIDR, you can use domain suffixes.

If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the following example:

no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1

What’s Next?

If you are interested in installing OKD using the containerized method (optional for Fedora, CentOS, or RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to prepare your hosts.

If you came here from Getting Started for Administrators, you can now continue there by choosing an installation method. Alternatively, you can install OKD using the advanced installation method.

If you are installing a stand-alone registry, continue with Installing a Stand-alone Registry.