Setting up the environment for an OpenShift installation - Deploying installer-provisioned clusters on bare metal | Installing | OpenShift Container Platform 4.7

Installing RHEL on the provisioner node

With the networking configuration complete, the next step is to install RHEL 8.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media.

Preparing the provisioner node for OpenShift Container Platform installation

Perform the following steps to prepare the environment.

Procedure
  1. Log in to the provisioner node via ssh.

  2. Create a non-root user (kni) and provide that user with sudo privileges:

    # useradd kni
    # passwd kni
    # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
    # chmod 0440 /etc/sudoers.d/kni
  3. Create an ssh key for the new user:

    # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
  4. Log in as the new user on the provisioner node:

    # su - kni
    $
  5. Use Red Hat Subscription Manager to register the provisioner node:

    $ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
    $ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms

    For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager.

  6. Install the following packages:

    $ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
  7. Add the newly created user to the libvirt group:

    $ sudo usermod --append --groups libvirt <user>
  8. Restart firewalld and enable the http service:

    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --zone=public --add-service=http --permanent
    $ sudo firewall-cmd --reload
  9. Start and enable the libvirtd service:

    $ sudo systemctl enable libvirtd --now
  10. Create the default storage pool and start it:

    $ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
    $ sudo virsh pool-start default
    $ sudo virsh pool-autostart default
  11. Configure networking.

    You can also configure networking from the web console.

    Export the baremetal network NIC name:

    $ export PUB_CONN=<baremetal_nic_name>

    Configure the baremetal network:

    $ sudo nohup bash -c "
        nmcli con down \"$PUB_CONN\"
        nmcli con delete \"$PUB_CONN\"
        # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
        nmcli con down \"System $PUB_CONN\"
        nmcli con delete \"System $PUB_CONN\"
        nmcli connection add ifname baremetal type bridge con-name baremetal
        nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
        pkill dhclient;dhclient baremetal
    "

    If you are deploying with a provisioning network, export the provisioning network NIC name:

    $ export PROV_CONN=<prov_nic_name>

    If you are deploying with a provisioning network, configure the provisioning network:

    $ sudo nohup bash -c "
        nmcli con down \"$PROV_CONN\"
        nmcli con delete \"$PROV_CONN\"
        nmcli connection add ifname provisioning type bridge con-name provisioning
        nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
        nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
        nmcli con down provisioning
        nmcli con up provisioning
    "

    The ssh connection might disconnect after executing these steps.

    The IPv6 address can be any address as long as it is not routable via the baremetal network.

    Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.

  12. Configure the IPv4 address on the provisioning network connection.

    $ nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual
  13. SSH back into the provisioner node (if required).

    # ssh kni@provisioner.<cluster-name>.<domain>
  14. Verify the connection bridges have been properly created.

    $ sudo nmcli con show
    NAME               UUID                                  TYPE      DEVICE
    baremetal          4d5133a5-8351-4bb9-bfd4-3af264801530  bridge    baremetal
    provisioning       43942805-017f-4d7d-a2c2-7cb3324482ed  bridge    provisioning
    virbr0             d9bca40f-eee1-410b-8879-a2d4bb0465e7  bridge    virbr0
    bridge-slave-eno1  76a8ed50-c7e5-4999-b4f6-6d9014dd0812  ethernet  eno1
    bridge-slave-eno2  f31c3353-54b7-48de-893a-02d2b34c4736  ethernet  eno2
  15. Create a pull-secret.txt file.

    $ vim pull-secret.txt

    In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure, and scroll down to the Downloads section. Click Copy pull secret. Paste the contents into the pull-secret.txt file and save the contents in the kni user’s home directory.
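
    As an optional check (not part of the documented procedure), you can confirm that the saved file is valid JSON by using the jq utility installed in an earlier step:

    $ jq . /home/kni/pull-secret.txt

    If the pull secret was pasted incompletely, jq reports a parse error instead of pretty-printing the file.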

Retrieving the OpenShift Container Platform installer

Use the latest-4.x version of the installer to deploy the latest generally available version of OpenShift Container Platform:

$ export VERSION=latest-4.7
export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
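
As a quick check (not part of the documented steps), you can echo the resolved release image to confirm that the command above returned a pull spec before continuing:

$ echo ${RELEASE_IMAGE}

The output should be a pull spec such as quay.io/openshift-release-dev/ocp-release@sha256:<digest>.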

Extracting the OpenShift Container Platform installer

After retrieving the installer, the next step is to extract it.

Procedure
  1. Set the environment variables:

    $ export cmd=openshift-baremetal-install
    $ export pullsecret_file=~/pull-secret.txt
    $ export extract_dir=$(pwd)
  2. Get the oc binary:

    $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
  3. Extract the installer:

    $ sudo cp oc /usr/local/bin
    $ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
    $ sudo cp openshift-baremetal-install /usr/local/bin
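
    Optionally, verify that both binaries are available on the PATH by printing their versions:

    $ oc version --client
    $ openshift-baremetal-install version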

Creating an RHCOS image cache (optional)

To employ image caching, you must download two images: the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM and the RHCOS image used by the installer to provision the different nodes. Image caching is optional, but especially useful when running the installer on a network with limited bandwidth.

If you are running the installer on a network with limited bandwidth and the RHCOS image downloads take more than 15 to 20 minutes, the installer will time out. Caching images on a web server helps in such scenarios.

Use the following steps to install a container that contains the images.

  1. Install podman.

    $ sudo dnf install -y podman
  2. Open firewall port 8080 to be used for RHCOS image caching.

    $ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
    $ sudo firewall-cmd --reload
  3. Create a directory to store the bootstraposimage and clusterosimage.

    $ mkdir /home/kni/rhcos_image_cache
  4. Set the appropriate SELinux context for the newly created directory.

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
    $ sudo restorecon -Rv rhcos_image_cache/
  5. Get the commit ID from the installer. The ID determines which images the installer needs to download.

    $ export COMMIT_ID=$(/usr/local/bin/openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}')
  6. Get the URI for the RHCOS image that the installer will deploy on the nodes.

    $ export RHCOS_OPENSTACK_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq .images.openstack.path | sed 's/"//g')
  7. Get the URI for the RHCOS image that the installer will deploy on the bootstrap VM.

    $ export RHCOS_QEMU_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq .images.qemu.path | sed 's/"//g')
  8. Get the path where the images are published.

    $ export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g')
  9. Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM.

    $ export RHCOS_QEMU_SHA_UNCOMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq -r '.images.qemu["uncompressed-sha256"]')
  10. Get the SHA hash for the RHCOS image that will be deployed on the nodes.

    $ export RHCOS_OPENSTACK_SHA_COMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json  | jq -r '.images.openstack.sha256')
  11. Download the images and place them in the /home/kni/rhcos_image_cache directory.

    $ curl -L ${RHCOS_PATH}${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI}
    $ curl -L ${RHCOS_PATH}${RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI}
  12. Confirm that the SELinux type of the newly created files is httpd_sys_content_t.

    $ ls -Z /home/kni/rhcos_image_cache
  13. Create the pod.

    $ podman run -d --name rhcos_image_cache \
    -v /home/kni/rhcos_image_cache:/var/www/html \
    -p 8080:8080/tcp \
    quay.io/centos7/httpd-24-centos7:latest
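
    Optionally, confirm that the cache is serving the downloaded images. The host below is a placeholder for the routable address of the provisioner node on the baremetal network:

    $ curl -I http://<provisioner_host_or_ip>:8080/${RHCOS_QEMU_URI}

    A 200 OK response indicates that the image is available. You can then point the bootstrapOSImage and clusterOSImage parameters in the install-config.yaml file at these URLs, appending the matching ?sha256= values exported earlier.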

Configuration files

Configuring the install-config.yaml file

The install-config.yaml file requires some additional details. Most of the information teaches the installer and the resulting cluster enough about the available hardware so that they can fully manage it.

  1. Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey.

    apiVersion: v1
    baseDomain: <domain>
    metadata:
      name: <cluster-name>
    networking:
      machineCIDR: <public-cidr>
      networkType: OVNKubernetes
    compute:
    - name: worker
      replicas: 2 (1)
    controlPlane:
      name: master
      replicas: 3
      platform:
        baremetal: {}
    platform:
      baremetal:
        apiVIP: <api-ip>
        ingressVIP: <wildcard-ip>
        provisioningNetworkInterface: <NIC1>
        provisioningNetworkCIDR: <CIDR>
        hosts:
          - name: openshift-master-0
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> (2)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            hardwareProfile: default
          - name: <openshift-master-1>
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> (2)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            hardwareProfile: default
          - name: <openshift-master-2>
            role: master
            bmc:
              address: ipmi://<out-of-band-ip> (2)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            hardwareProfile: default
          - name: <openshift-worker-0>
            role: worker
            bmc:
              address: ipmi://<out-of-band-ip> (2)
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            hardwareProfile: unknown
          - name: <openshift-worker-1>
            role: worker
            bmc:
              address: ipmi://<out-of-band-ip>
              username: <user>
              password: <password>
            bootMACAddress: <NIC1-mac-address>
            hardwareProfile: unknown
    pullSecret: '<pull_secret>'
    sshKey: '<ssh_pub_key>'
    1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster.
    2 Refer to the BMC addressing sections for more options.
  2. Create a directory to store cluster configs.

    $ mkdir ~/clusterconfigs
    $ cp install-config.yaml ~/clusterconfigs
  3. Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster.

    $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
  4. Remove old bootstrap resources if any are left over from a previous deployment attempt.

    for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
    do
      sudo virsh destroy $i;
      sudo virsh undefine $i;
      sudo virsh vol-delete $i --pool $i;
      sudo virsh vol-delete $i.ign --pool $i;
      sudo virsh pool-destroy $i;
      sudo virsh pool-undefine $i;
    done
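
    To confirm that the nodes you powered off in step 3 are actually off, you can optionally query the chassis power state with the same ipmitool credentials:

    $ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power status

    A powered-off node reports "Chassis Power is off".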

Setting proxy settings within the install-config.yaml file (optional)

To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file.

apiVersion: v1
baseDomain: <domain>
proxy:
  httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
  httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
  noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>

The following is an example of noProxy with values.

noProxy: .example.com,172.22.0.0/24,10.10.0.0/24

With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair.

Key considerations:

  • If the proxy server does not support HTTPS, change the value of httpsProxy from https:// to http://.

  • If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail.

  • Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY, HTTPS_PROXY, and NO_PROXY.

When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately.
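
For example, an IPv6 noProxy entry that lists individual addresses (illustrative values only) might look like the following:

noProxy: .example.com,fd00:1101::1,fd00:1101::2,fd00:1101::3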

Modifying the install-config.yaml file for no provisioning network (optional)

To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file.

platform:
  baremetal:
    apiVIP: <apiVIP>
    ingressVIP: <ingress/wildcard VIP>
    provisioningNetwork: "Disabled"

Configuring managed Secure Boot in the install-config.yaml file (optional)

You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish, redfish-virtualmedia, or idrac-virtualmedia. To enable managed Secure Boot, add the bootMode configuration setting to each node:

Example
hosts:
  - name: openshift-master-0
    role: master
    bmc:
      address: redfish://<out_of_band_ip> (1)
      username: <user>
      password: <password>
    bootMACAddress: <NIC1_mac_address>
    rootDeviceHints:
     deviceName: "/dev/sda"
    bootMode: UEFISecureBoot (2)
1 Ensure the bmc.address setting uses redfish, redfish-virtualmedia, or idrac-virtualmedia as the protocol. For additional information, see "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC".
2 The bootMode setting is UEFI by default. To enable managed Secure Boot, change it to UEFISecureBoot.

To ensure the nodes can support managed Secure Boot, see "Configuring nodes" in the "Prerequisites". If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media.

Red Hat does not support Secure Boot with IPMI because IPMI does not provide Secure Boot management facilities.

Additional install-config parameters

See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file.

Table 1. Required parameters
Parameters Default Description

baseDomain

The domain name for the cluster. For example, example.com.

sshKey

The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node.

pullSecret

The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node.

metadata:
    name:

The name to be given to the OpenShift Container Platform cluster. For example, openshift.

networking:
    machineCIDR:

The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24.

compute:
  - name: worker

The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes.

compute:
    replicas: 2

Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster.

controlPlane:
    name: master

The OpenShift Container Platform cluster requires a name for control plane (master) nodes.

controlPlane:
    replicas: 3

Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster.

defaultMachinePlatform

The default configuration used for machine pools without a platform configuration.

apiVIP

api.<clustername.clusterdomain>

The VIP to use for internal API communication.

This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly.

disableCertificateVerification

False

redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses.

ingressVIP

test.apps.<clustername.clusterdomain>

The VIP to use for ingress traffic.

Table 2. Optional Parameters
Parameters Default Description

provisioningDHCPRange

172.22.0.10,172.22.0.100

Defines the IP range for nodes on the provisioning network.

provisioningNetworkCIDR

172.22.0.0/24

The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network.

clusterProvisioningIP

The third IP address of the provisioningNetworkCIDR.

The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3.

bootstrapProvisioningIP

The second IP address of the provisioningNetworkCIDR.

The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2.

externalBridge

baremetal

The name of the baremetal bridge of the hypervisor attached to the baremetal network.

provisioningBridge

provisioning

The name of the provisioning bridge on the provisioner host attached to the provisioning network.

defaultMachinePlatform

The default configuration used for machine pools without a platform configuration.

bootstrapOSImage

A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256>.

clusterOSImage

A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. For example, https://mirror.openshift.com/images/rhcos-<version>-openstack.qcow2.gz?sha256=<compressed_sha256>.

provisioningNetwork

Set this parameter to Disabled to disable the requirement for a provisioning network. Users can then perform only virtual media-based provisioning, or bring up the cluster using assisted installation. If using power management, BMCs must be accessible from the machine networks. Users must provide two IP addresses on the external network that are used for the provisioning services. Set this parameter to managed, which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on.

Set this parameter to unmanaged to still enable the provisioning network, but take care of manual configuration of DHCP. Virtual media provisioning is recommended, but PXE is still available if required.

httpProxy

Set this parameter to the appropriate HTTP proxy used within your environment.

httpsProxy

Set this parameter to the appropriate HTTPS proxy used within your environment.

noProxy

Set this parameter to the appropriate list of exclusions for proxy usage within your environment.

Hosts

The hosts parameter is a list of separate bare metal assets used to build the cluster.

Name

Default

Description

name

The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0.

role

The role of the bare metal node. Either master or worker.

bmc

Connection details for the baseboard management controller. See the BMC addressing section for additional details.

bootMACAddress

The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host.

You must provide a valid MAC address from the host if you disabled the provisioning network.

BMC addressing

Most vendors support BMC addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common Internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.

IPMI

Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<out-of-band-ip>
          username: <user>
          password: <password>
Redfish network boot

To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

BMC addressing for Dell

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address>
          username: <user>
          password: <password>

For Dell hardware, Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.

Table 3. BMC address formats for Dell hardware
Protocol Address Format

Redfish virtual media

idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1

Redfish network boot

redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1

IPMI

ipmi://<out-of-band-ip>

Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell’s idrac-virtualmedia uses the Redfish standard with Dell’s OEM extensions.

See the following sections for additional details.

Redfish virtual media for Dell

For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work.

The following example demonstrates using iDRAC virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True

Currently, Redfish is only supported on Dell with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plug-in defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.

Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell’s idrac-virtualmedia:// protocol uses the Redfish standard with Dell’s OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware.

Redfish network boot for Dell

To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
          username: <user>
          password: <password>
          disableCertificateVerification: True

Currently, Redfish is only supported on Dell with iDRAC firmware versions 4.20.20.20 through 04.40.00.00 for installer-provisioned installations on bare metal deployments. There is a known issue with version 04.40.00.00. With iDRAC 9 firmware version 04.40.00.00, the Virtual Console plug-in defaults to eHTML5, which causes problems with the InsertVirtualMedia workflow. Set the plug-in to HTML5 to avoid this issue. The menu path is: Configuration → Virtual console → Plug-in Type → HTML5.

Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.

The redfish:// URL protocol corresponds to the redfish hardware type in Ironic.

BMC addressing for HPE

The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network.

platform:
  baremetal:
    hosts:
      - name: <hostname>
        role: <master | worker>
        bmc:
          address: <address>
          username: <user>
          password: <password>

For HPE hardware, Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.

Table 4. BMC address formats for HPE hardware
Protocol Address Format

Redfish virtual media

redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1

Redfish network boot

redfish://<out-of-band-ip>/redfish/v1/Systems/1

IPMI

ipmi://<out-of-band-ip>

See the following sections for additional details.

Redfish virtual media for HPE

To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.

Redfish network boot for HPE

To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>

While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file.

platform:
  baremetal:
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
          username: <user>
          password: <password>
          disableCertificateVerification: True

Root device hints

The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.

Table 5. Subfields
Subfield Description

deviceName

A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly.

hctl

A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.

model

A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.

vendor

A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value.

serialNumber

A string containing the device serial number. The hint must match the actual value exactly.

minSizeGigabytes

An integer representing the minimum size of the device in gigabytes.

wwn

A string containing the unique storage identifier. The hint must match the actual value exactly.

wwnWithExtension

A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly.

wwnVendorExtension

A string containing the unique vendor storage identifier. The hint must match the actual value exactly.

rotational

A boolean indicating whether the device should be a rotating disk (true) or not (false).

Example usage
     - name: master-0
       role: master
       bmc:
         address: ipmi://10.10.0.3:6203
         username: admin
         password: redhat
       bootMACAddress: de:ad:be:ef:00:40
       rootDeviceHints:
         deviceName: "/dev/sda"

Creating the OpenShift Container Platform manifests

  1. Create the OpenShift Container Platform manifests.

    $ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
    WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
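
    Optionally, inspect the generated assets; the command typically creates manifests and openshift subdirectories in the target directory:

    $ ls ~/clusterconfigs/manifests ~/clusterconfigs/openshift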

Creating a disconnected registry (optional)

In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet.

A local, or mirrored, copy of the registry requires the following:

  • A certificate for the registry node. This can be a self-signed certificate.

  • A web server, served by a container on the registry node.

  • An updated pull secret that contains the certificate and local repository information.

Creating a disconnected registry on a registry node is optional. If you create one, complete all of the subsequent subsections labeled "(optional)".

Preparing the registry node to host the mirrored registry (optional)

Make the following changes to the registry node.

Procedure
  1. Open the firewall port on the registry node.

    $ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt  --permanent
    $ sudo firewall-cmd --add-port=5000/tcp --zone=public   --permanent
    $ sudo firewall-cmd --reload
  2. Install the required packages for the registry node.

    $ sudo yum -y install python3 podman httpd httpd-tools jq
  3. Create the directory structure where the repository information will be held.

    $ sudo mkdir -p /opt/registry/{auth,certs,data}

Generating the self-signed certificate (optional)

Generate a self-signed certificate for the registry node and put it in the /opt/registry/certs directory.

Procedure
  1. Adjust the certificate information as appropriate.

    $ host_fqdn=$( hostname --long )
    $ cert_c="<Country Name>"   # Country Name (C, 2 letter code)
    $ cert_s="<State>"          # Certificate State (S)
    $ cert_l="<Locality>"       # Certificate Locality (L)
    $ cert_o="<Organization>"   # Certificate Organization (O)
    $ cert_ou="<Org Unit>"      # Certificate Organizational Unit (OU)
    $ cert_cn="${host_fqdn}"    # Certificate Common Name (CN)
    
    $ openssl req \
        -newkey rsa:4096 \
        -nodes \
        -sha256 \
        -keyout /opt/registry/certs/domain.key \
        -x509 \
        -days 365 \
        -out /opt/registry/certs/domain.crt \
        -addext "subjectAltName = DNS:${host_fqdn}" \
        -subj "/C=${cert_c}/ST=${cert_s}/L=${cert_l}/O=${cert_o}/OU=${cert_ou}/CN=${cert_cn}"
    When replacing <Country Name>, ensure that it only contains two letters. For example, US.
  2. Update the registry node’s ca-trust with the new certificate.

    $ sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
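
    Optionally, inspect the generated certificate to confirm that the subject and the subject alternative name match the registry node FQDN:

    $ openssl x509 -in /opt/registry/certs/domain.crt -noout -subject -ext subjectAltName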

Creating the registry podman container (optional)

The registry container uses the /opt/registry directory for certificates, authentication files, and to store its data files.

The registry container needs an htpasswd file for authentication; the htpasswd utility is provided by the httpd-tools package installed earlier.

Procedure
  1. Create an htpasswd file in /opt/registry/auth for the container to use.

    $ htpasswd -bBc /opt/registry/auth/htpasswd <user> <passwd>

    Replace <user> with the user name and <passwd> with the password.

  2. Create and start the registry container.

    $ podman create \
      --name ocpdiscon-registry \
      -p 5000:5000 \
      -e "REGISTRY_AUTH=htpasswd" \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
      -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \
      -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
      -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \
      -e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \
      -e "REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true" \
      -v /opt/registry/data:/var/lib/registry:z \
      -v /opt/registry/auth:/auth:z \
      -v /opt/registry/certs:/certs:z \
      docker.io/library/registry:2
    $ podman start ocpdiscon-registry
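
    Optionally, confirm that the registry responds and that the credentials work by querying the standard catalog endpoint (the catalog is empty until images are mirrored):

    $ curl -u <user>:<passwd> https://${host_fqdn}:5000/v2/_catalog

    A freshly created registry returns {"repositories":[]}.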

Copy and update the pull-secret (optional)

Copy the pull secret file from the provisioner node to the registry node and modify it to include the authentication information for the new registry node.

Procedure
  1. Copy the pull-secret.txt file.

    $ scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt
  2. Update the host_fqdn environment variable with the fully qualified domain name of the registry node.

    $ host_fqdn=$( hostname --long )
  3. Update the b64auth environment variable with the base64 encoding of the http credentials used to create the htpasswd file.

    $ b64auth=$( echo -n '<username>:<passwd>' | openssl base64 )

    Replace <username> with the user name and <passwd> with the password.

  4. Set the AUTHSTRING environment variable to use the base64 authorization string. The $USER variable is an environment variable containing the name of the current user.

    $ AUTHSTRING="{\"$host_fqdn:5000\": {\"auth\": \"$b64auth\",\"email\": \"$USER@redhat.com\"}}"
  5. Update the pull-secret.txt file.

    $ jq ".auths += $AUTHSTRING" < pull-secret.txt > pull-secret-update.txt
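
    Optionally, confirm that the new registry entry was added alongside the existing ones:

    $ jq '.auths | keys' pull-secret-update.txt

    The output lists the registry host names, including $host_fqdn:5000.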

Mirroring the repository (optional)

Procedure
  1. Copy the oc binary from the provisioner node to the registry node.

    $ sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin
  2. Set the required environment variables.

    1. Set the release version:

      $ VERSION=<release_version>

      For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.7.

    2. Set the local registry name and host port:

      $ LOCAL_REG='<local_registry_host_name>:<local_registry_host_port>'

      For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

    3. Set the local repository name:

      $ LOCAL_REPO='<local_repository_name>'

      For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

  3. Mirror the remote install images to the local repository.

    $ /usr/local/bin/oc adm release mirror \
      -a pull-secret-update.txt \
      --from=$UPSTREAM_REPO \
      --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
      --to=$LOCAL_REG/$LOCAL_REPO
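
    The mirror command references $UPSTREAM_REPO, which the preceding steps do not set. Before running the command, point $UPSTREAM_REPO at the release image to mirror. For example, assuming the RELEASE_IMAGE value resolved in the "Retrieving the OpenShift Container Platform installer" section is available on this node:

    $ UPSTREAM_REPO=${RELEASE_IMAGE}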

Modify the install-config.yaml file to use the disconnected registry (optional)

On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node’s certificate and registry information.

Procedure
  1. Add the disconnected registry node’s certificate to the install-config.yaml file. The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces.

    $ echo "additionalTrustBundle: |" >> install-config.yaml
    $ sed -e 's/^/  /' /opt/registry/certs/domain.crt >> install-config.yaml
  2. Add the mirror information for the registry to the install-config.yaml file.

    $ echo "imageContentSources:" >> install-config.yaml
    $ echo "- mirrors:" >> install-config.yaml
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    $ echo "  source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml
    $ echo "- mirrors:" >> install-config.yaml
    $ echo "  - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
    $ echo "  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml
    Replace registry.example.com with the registry’s fully qualified domain name.

Deploying routers on worker nodes

During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If the initial cluster has only one worker node, or if a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a YAML file to set an appropriate number of router replicas.

By default, the installer deploys two routers. If the cluster has at least two worker nodes, you can skip this section.

If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default, and you can skip this section.

Procedure
  1. Create a router-replicas.yaml file.

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      replicas: <num-of-router-pods>
      endpointPublishingStrategy:
        type: HostNetwork
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""

    Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1. If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate.

  2. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory.

    cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml

Validation checklist for installation

  • OpenShift Container Platform installer has been retrieved.

  • OpenShift Container Platform installer has been extracted.

  • Required parameters for the install-config.yaml have been configured.

  • The hosts parameter for the install-config.yaml has been configured.

  • The bmc parameter for the install-config.yaml has been configured.

  • Conventions for the values configured in the bmc address field have been applied.

  • (Optional) Created a disconnected registry.

  • (Optional) Validated disconnected registry settings, if in use.

  • (Optional) Deployed routers on worker nodes.

Deploying the cluster via the OpenShift Container Platform installer

Run the OpenShift Container Platform installer:

$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster

Following the installation

During the deployment process, you can check the installation’s overall status by issuing the tail command to the .openshift_install.log log file in the installation directory.

$ tail -f /path/to/install-dir/.openshift_install.log

Preparing to reinstall a cluster on bare metal

Before you reinstall a cluster on bare metal, you must perform cleanup operations.

Procedure
  1. Remove or reformat the disks for the bootstrap, control plane (also known as master) node, and worker nodes. If you are working in a hypervisor environment, you must add any disks you removed.

  2. Delete the artifacts that the previous installation generated:

    $ cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \
    .openshift_install.log .openshift_install_state.json
  3. Generate new manifests and Ignition config files. See "Creating the Kubernetes manifest and Ignition config files" for more information.

  4. Upload the new bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. This will overwrite the previous Ignition files.