Installer-provisioned installation of OpenShift Container Platform requires:
One provisioner node with Red Hat Enterprise Linux (RHEL) 8.x installed.
Three control plane nodes.
Baseboard Management Controller (BMC) access to each node.
At least one network:
One required routable network.
One optional provisioning network for provisioning nodes.
One optional management network.
Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements.
Installer-provisioned installation involves a number of hardware node requirements:
CPU architecture: All nodes must use x86_64 CPU architecture.
Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration.
Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol.
Latest generation: Nodes must be of the most recent generation. Because the installer-provisioned installation relies on BMC protocols, the hardware must support IPMI cipher suite 17. Additionally, RHEL 8 ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 8 for the provisioner node and RHCOS 8 for the control plane and worker nodes.
Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended that the registry reside on its own node.
Provisioner node: Installer-provisioned installation requires one provisioner node.
Control plane: Installer-provisioned installation requires three control plane nodes for high availability.
Worker nodes: While not required, a typical production cluster has one or more worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing.
Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Network interface naming must be consistent across control plane nodes for the provisioning network. For example, if a control plane node uses the eth0 NIC for the provisioning network, the other control plane nodes must use it as well.
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement.

When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail.
Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. To deploy an OpenShift Container Platform cluster with Secure Boot, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot only when installer-provisioned installations use Redfish Virtual Media. Red Hat does not support Secure Boot with self-generated keys.
The installer for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The following table lists supported firmware for installer-provisioned OpenShift Container Platform clusters deployed with Redfish virtual media.
Hardware | Model | Management | Firmware Versions |
---|---|---|---|
HP | 10th Generation | iLO5 | N/A |
Dell | 14th Generation | iDRAC 9 | v4.20.20.20 - 04.40.00.00 |
Dell | 13th Generation | iDRAC 8 | v2.75.75.75+ |
Refer to the hardware documentation for the nodes or contact the hardware vendor for information on updating the firmware. For HP servers, Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. For Dell servers, ensure that the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.
The installer will not initiate installation on a node if the node firmware is below the versions in the preceding table when installing with virtual media.
Installer-provisioned installation of OpenShift Container Platform involves several network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable baremetal network.
OpenShift Container Platform deploys with two networks:
provisioning: The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. When deploying using the provisioning network, the first NIC on each node, such as eth0 or eno1, must interface with the provisioning network.

baremetal: The baremetal network is a routable network. When deploying using the provisioning network, the second NIC on each node, such as eth1 or eno2, must interface with the baremetal network. When deploying without a provisioning network, you can use any NIC on each node to interface with the baremetal network.
Each NIC should be on a separate VLAN corresponding to the appropriate network.

Clients access the OpenShift Container Platform cluster nodes over the baremetal network.
A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name.
<cluster_name>.<domain-name>
For example:
test-cluster.example.com
OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. Once the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
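The cluster name and domain name map to the metadata.name and baseDomain fields of the install-config.yaml file. The following is a minimal illustrative sketch using the example values above; all other required fields are omitted.

```yaml
# Minimal illustrative fragment of install-config.yaml.
# metadata.name and baseDomain together form <cluster_name>.<domain-name>,
# for example test-cluster.example.com.
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
```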
By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed, which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file.
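The following is a minimal sketch of where this setting lives in install-config.yaml, using the value named above; all other platform fields are omitted.

```yaml
# Illustrative fragment only: tells the installer not to manage DHCP on
# the provisioning network because an external DHCP server already runs there.
platform:
  baremetal:
    provisioningNetwork: unmanaged
```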
Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server.

For the baremetal network, a network administrator must reserve a number of IP addresses to ensure that they do not change after deployment, including:
Two virtual IP addresses (see the example after this list):
One IP address for the API endpoint.
One IP address for the wildcard ingress endpoint.
One IP address for the provisioner node.
One IP address for each control plane (master) node.
One IP address for each worker node.
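The two virtual IP addresses also appear in the install-config.yaml file. The following is a hedged sketch assuming the apiVIP and ingressVIP fields used by installer-provisioned bare metal installations; the addresses are placeholders on the baremetal network.

```yaml
# Placeholder addresses; reserve them on the external DHCP server so
# they do not change after deployment.
platform:
  baremetal:
    apiVIP: 192.0.2.5      # API endpoint virtual IP
    ingressVIP: 192.0.2.10 # wildcard ingress virtual IP
```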
Reserving IP addresses so they become static IP addresses
Some administrators prefer to use static IP addresses so that each node’s IP address remains constant in the absence of a DHCP server. To use static IP addresses in the OpenShift Container Platform cluster, reserve the IP addresses with an infinite lease. During deployment, the installer will reconfigure the NICs from DHCP-assigned addresses to static IP addresses. NICs with DHCP leases that are not infinite will remain configured to use DHCP. Setting IP addresses with an infinite lease is incompatible with network configuration deployed by using the Machine Config Operator.
Ensuring that your DHCP server can provide infinite leases
Your DHCP server must provide a DHCP expiration time of 4294967295 seconds to properly set an infinite lease as specified by RFC 2131. If a lesser value is returned for the DHCP infinite lease time, the node reports an error and a permanent IP address is not set for the node.
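As one hedged illustration, assuming dnsmasq is the external DHCP server for the baremetal network, an infinite lease reservation can be expressed with a dhcp-host entry; the MAC address, host name, and IP address are placeholders.

```
# Illustrative dnsmasq reservation with an infinite lease for one node.
dhcp-host=52:54:00:aa:bb:cc,openshift-master-0,192.0.2.20,infinite
```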
Do not change IP addresses manually after deployment
Do not change a worker node’s IP address manually after deployment. To change the IP address of a worker node after deployment, you must mark the worker node unschedulable, evacuate the pods, delete the node, and recreate it with the new IP address. See "Working with nodes" for additional details. To change the IP address of a control plane node after deployment, contact support. The storage interface requires a DHCP reservation.
The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.
Usage | Host Name | IP |
---|---|---|
API | api.<cluster_name>.<domain> | <ip> |
Ingress LB (apps) | *.apps.<cluster_name>.<domain> | <ip> |
Provisioner node | provisioner.<cluster_name>.<domain> | <ip> |
master-0 | openshift-master-0.<cluster_name>.<domain> | <ip> |
master-1 | openshift-master-1.<cluster_name>.<domain> | <ip> |
master-2 | openshift-master-2.<cluster_name>.<domain> | <ip> |
Worker-0 | openshift-worker-0.<cluster_name>.<domain> | <ip> |
Worker-1 | openshift-worker-1.<cluster_name>.<domain> | <ip> |
Worker-n | openshift-worker-n.<cluster_name>.<domain> | <ip> |
Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
Define a consistent clock date and time format in each cluster node’s BIOS settings, or installation might fail.
You may reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes.
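As a sketch only, assuming the chrony time service with a placeholder subnet, a control plane node could serve time to the cluster with configuration along these lines; on OpenShift Container Platform nodes such configuration is typically applied through a MachineConfig rather than by editing files directly.

```
# Illustrative chrony configuration for a control plane node acting as an
# NTP server on a disconnected cluster.
driftfile /var/lib/chrony/drift
allow 192.0.2.0/24   # cluster subnet (placeholder)
local stratum 3      # keep serving time even without an upstream source
```

Worker nodes would then point server directives at the control plane host names, for example server openshift-master-0.<cluster_name>.<domain> iburst.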
OpenShift Container Platform supports additional post-installation state-driven network configuration on the secondary network interfaces of cluster nodes using kubernetes-nmstate. For example, system administrators might configure a secondary network interface on cluster nodes after installation for a storage network.

Configuration must occur before scheduling pods.

State-driven network configuration requires installing kubernetes-nmstate, and also requires NetworkManager running on the cluster nodes. See OpenShift Virtualization > Kubernetes NMState (Tech Preview) for additional details.
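For illustration only, a NodeNetworkConfigurationPolicy resource along the following lines could bring up a secondary storage-network interface on worker nodes after installation. The interface name, policy name, and API version are assumptions that depend on the hardware and the installed kubernetes-nmstate version.

```yaml
# Hypothetical example: enable DHCP on a secondary NIC (eno2) used for a
# storage network on all worker nodes.
apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: storage-network-eno2
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: eno2
        description: Storage network interface
        type: ethernet
        state: up
        ipv4:
          dhcp: true
          enabled: true
```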
The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the baremetal node during installation, the out-of-band management IP address must be granted access to TCP port 6180.
When using the provisioning network, each node in the cluster requires the following configuration for proper installation.

A mismatch between nodes will cause an installation failure.
While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs:
NIC | Network | VLAN |
---|---|---|
NIC1 | provisioning | <provisioning-vlan> |
NIC2 | baremetal | <baremetal-vlan> |
NIC1 is a non-routable network (provisioning) that is only used for the installation of the OpenShift Container Platform cluster.
The Red Hat Enterprise Linux (RHEL) 8.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 8.x using a local Satellite server or a PXE server, PXE-enable NIC2.
PXE | Boot order |
---|---|
NIC1 PXE-enabled | 1 |
NIC2 | 2 |

Ensure PXE is disabled on all other NICs.
Configure the control plane and worker nodes as follows:
PXE | Boot order |
---|---|
NIC1 PXE-enabled (provisioning network) | 1 |
When omitting the provisioning network, the installation process requires one NIC:
NIC | Network | VLAN |
---|---|---|
NICx | baremetal | <baremetal-vlan> |
NICx is a routable network (baremetal) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet.
Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Red Hat only supports Secure Boot when deploying with Redfish Virtual Media.

Refer to the hardware guide for the node for vendor-specific details. To enable Secure Boot, execute the following:
Boot the node and enter the BIOS menu.
Set the node’s boot mode to UEFI Enabled.
Enable Secure Boot.
Red Hat does not support Secure Boot with self-generated keys.
Nodes will typically have an additional NIC used by the Baseboard Management Controllers (BMCs). These BMCs must be accessible from the provisioner node.
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform 4 installation.
The out-of-band management setup is out of scope for this document. We recommend setting up a separate management network for out-of-band management. However, using the provisioning network or the baremetal network is a valid option.
Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes:
Out-of-band management IP
Examples
Dell (iDRAC) IP
HP (iLO) IP
Fujitsu (iRMC) IP
When using the provisioning network:
NIC1 (provisioning) MAC address
NIC2 (baremetal) MAC address
When omitting the provisioning network:
NICx (baremetal) MAC address
When using the provisioning network, verify the following:
NIC1 VLAN is configured for the provisioning network. (optional)
NIC1 is PXE-enabled on the provisioner, control plane (master), and worker nodes when using a provisioning network. (optional)
NIC2 VLAN is configured for the baremetal network.
PXE has been disabled on all other NICs.
Control plane and worker nodes are configured.
All nodes accessible via out-of-band management.
A separate management network has been created. (optional)
Required data for installation.
When omitting the provisioning network, verify the following:
NICx VLAN is configured for the baremetal network.
Control plane and worker nodes are configured.
All nodes accessible via out-of-band management.
A separate management network has been created. (optional)
Required data for installation.