Preparing to install OpenShift on a single node

Prerequisites

About OpenShift on a single node

You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.

The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default networking solution for single-node OpenShift deployments.
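
For reference, the network type is selected in the networking stanza of the install-config.yaml file. The following excerpt is a minimal sketch: networkType: OVNKubernetes reflects the default noted above, and the clusterNetwork and serviceNetwork values are common defaults shown here only as examples, not requirements:

    networking:
      networkType: OVNKubernetes       # OpenShiftSDN is not supported for single-node OpenShift
      clusterNetwork:
      - cidr: 10.128.0.0/14            # example pod network CIDR; adjust for your environment
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16                  # example service network CIDR; adjust for your environment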

Requirements for installing OpenShift on a single node

Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large-scale clusters. However, you must address the following requirements:

  • Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.

  • Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and certified third-party hypervisors. In all cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file, as shown in the example excerpt after this list.

  • Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload.

    Table 1. Minimum resource requirements

    Profile   vCPU          Memory        Storage
    Minimum   8 vCPU cores  16 GB of RAM  120 GB

    One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio:

    (threads per core × cores) × sockets = vCPUs
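
    For example, a server with two sockets, eight cores per socket, and two threads per core provides (2 threads per core × 8 cores) × 2 sockets = 32 vCPUs.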

    The server must have a Baseboard Management Controller (BMC) when booting with virtual media.

  • Networking: The server must have access to the internet, or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure DNS so that each of the following fully qualified domain names (FQDNs) resolves to that IP address (an illustrative set of records is shown after this list):

    Table 2. Required DNS records

    Usage           FQDN                                  Description
    Kubernetes API  api.<cluster_name>.<base_domain>      Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster.
    Internal API    api-int.<cluster_name>.<base_domain>  Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster.
    Ingress route   *.apps.<cluster_name>.<base_domain>   Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster.

    Without persistent IP addresses, communications between the apiserver and etcd might fail.
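
As an illustration only, consider a hypothetical cluster named sno with the base domain example.com and a single node reachable at 192.168.111.20; the name, domain, and address are placeholders for your own values. The required records could then be expressed as the following zone entries, each resolving to the node:

    api.sno.example.com.      IN  A  192.168.111.20
    api-int.sno.example.com.  IN  A  192.168.111.20
    *.apps.sno.example.com.   IN  A  192.168.111.20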
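
The platform and sizing requirements above are ultimately expressed in the install-config.yaml file. The following excerpt is a minimal sketch for the same hypothetical sno cluster on example.com; the pull secret, SSH key, and installation disk are placeholders, and the bootstrapInPlace stanza applies when you generate the installation ISO manually:

    apiVersion: v1
    baseDomain: example.com            # placeholder base domain
    metadata:
      name: sno                        # placeholder cluster name
    compute:
    - name: worker
      replicas: 0                      # no separate compute nodes on a single-node cluster
    controlPlane:
      name: master
      replicas: 1                      # the single node runs the control plane and workloads
    platform:
      none: {}                         # required for single-node installations
    bootstrapInPlace:
      installationDisk: /dev/disk/by-id/<disk_id>   # placeholder installation disk
    pullSecret: '<pull_secret>'        # placeholder
    sshKey: '<ssh_key>'                # placeholder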