Overview - Deploying installer-provisioned clusters on bare metal (OpenShift Container Platform 4.16)

Installer-provisioned installation on bare metal nodes deploys and configures the infrastructure that an OpenShift Container Platform cluster runs on. This guide provides a methodology for achieving a successful installer-provisioned bare-metal installation. The following diagram illustrates the installation environment in phase 1 of deployment:

Figure: Deployment phase one

For the installation, the key elements in the previous diagram are:

  • Provisioner: A physical machine that runs the installation program and hosts the bootstrap VM that deploys the control plane of a new OpenShift Container Platform cluster.

  • Bootstrap VM: A virtual machine used in the process of deploying an OpenShift Container Platform cluster.

  • Network bridges: The bootstrap VM connects to the bare metal network and to the provisioning network, if present, through the eno1 and eno2 network bridges.

  • API VIP: An API virtual IP address (VIP) is used to provide failover of the API server across the control plane nodes. The API VIP first resides on the bootstrap VM. A script generates the keepalived.conf configuration file before launching the service. The VIP moves to one of the control plane nodes after the bootstrap process has completed and the bootstrap VM stops.
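
To make the VIPs concrete, the following minimal install-config.yaml sketch shows how they are typically declared for the bare metal platform. The domain, cluster name, and addresses are placeholders, and all other required fields are omitted for brevity:

apiVersion: v1
baseDomain: example.com          # placeholder domain
metadata:
  name: mycluster                # placeholder cluster name
platform:
  baremetal:
    apiVIPs:
      - 192.0.2.5                # API VIP, held by the bootstrap VM until bootstrapping completes
    ingressVIPs:
      - 192.0.2.6                # ingress VIP, later managed by keepalived on the compute nodes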

In phase 2 of the deployment, the provisioner destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes.

The keepalived.conf file sets the control plane machines with a lower Virtual Router Redundancy Protocol (VRRP) priority than the bootstrap VM, which ensures that the API on the control plane machines is fully functional before the API VIP moves from the bootstrap VM to the control plane. Once the API VIP moves to one of the control plane nodes, traffic sent from external clients to the API VIP routes to an haproxy load balancer running on that control plane node. This instance of haproxy load balances the API VIP traffic across the control plane nodes.
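
For illustration only, a keepalived.conf fragment of the kind described above might resemble the following on a control plane node. The instance name, interface, router ID, priority, and address are hypothetical; the real file is generated by a script during deployment, as noted earlier:

vrrp_instance mycluster_API {
    state BACKUP
    interface ens3              # interface on the bare metal network (hypothetical name)
    virtual_router_id 50        # hypothetical router ID shared by the nodes advertising the API VIP
    priority 40                 # deliberately lower than the bootstrap VM's priority, so the
                                # VIP stays on the bootstrap VM until it stops advertising
    virtual_ipaddress {
        192.0.2.5               # API VIP (placeholder address)
    }
}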

The ingress VIP moves to the compute nodes. The keepalived instance also manages the ingress VIP.

The following diagram illustrates phase 2 of deployment:

Figure: Deployment phase two

After this point, the node used by the provisioner can be removed or repurposed. From here, all additional provisioning tasks are carried out by the control plane.

For installer-provisioned infrastructure installations, CoreDNS exposes port 53 at the node level, making it accessible from other routable networks.


The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media baseboard management controller (BMC) addressing option such as redfish-virtualmedia or idrac-virtualmedia.
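
As a sketch of the virtual media approach, the following install-config.yaml excerpt disables the provisioning network and addresses one host's BMC through Redfish virtual media. The host name, credentials, MAC address, and BMC URL are placeholders:

platform:
  baremetal:
    provisioningNetwork: "Disabled"   # no provisioning network, so PXE booting is not available
    hosts:
      - name: openshift-master-0      # placeholder host name
        role: master
        bmc:
          address: redfish-virtualmedia://192.0.2.10/redfish/v1/Systems/1   # placeholder BMC endpoint
          username: admin             # placeholder credentials
          password: password
        bootMACAddress: 52:54:00:00:00:01                                   # placeholder MAC address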