Preparing to install OpenShift on a single node

About OpenShift on a single node

You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability.

The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments.

Requirements for installing OpenShift on a single node

Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large-scale clusters. However, you must address the following requirements:

  • Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.

    For the ppc64le platform, the administration host prepares the ISO but does not need to create the USB boot drive, because the ISO can be mounted to PowerVM directly.

    ISO is not required for IBM Z® installations.

  • CPU Architecture: Installing OpenShift Container Platform on a single node supports the x86_64, arm64, ppc64le, and s390x CPU architectures.

  • Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and on Certified third-party hypervisors. In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file, as shown in the example after this list. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file:

    • Amazon Web Services (AWS), where you use platform=aws

    • Google Cloud Platform (GCP), where you use platform=gcp

    • Microsoft Azure, where you use platform=azure
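    The following is a minimal sketch of the install-config.yaml fields relevant to the default, platform-agnostic case. The cluster name, base domain, pull secret, and SSH key values are hypothetical placeholders; the key points are the single control-plane replica, zero compute replicas, and the platform: none: {} stanza.

      apiVersion: v1
      baseDomain: example.com          # hypothetical base domain
      metadata:
        name: sno                      # hypothetical cluster name
      controlPlane:
        name: master
        replicas: 1                    # single node: one control-plane replica
      compute:
      - name: worker
        replicas: 0                    # no separate worker nodes
      networking:
        networkType: OVNKubernetes     # default network plugin for single-node OpenShift
      platform:
        none: {}                       # bare metal or Certified third-party hypervisor
      pullSecret: '<pull_secret>'
      sshKey: '<ssh_key>'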

  • Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload.

    Table 1. Minimum resource requirements

    Profile   vCPU      Memory          Storage
    Minimum   8 vCPUs   16 GB of RAM    120 GB

    One vCPU equals one physical core. However, if simultaneous multithreading (SMT), or Hyper-Threading, is enabled, use the following formula to calculate the total number of vCPUs (a worked example follows the list):

    • (threads per core × cores) × sockets = vCPUs

    • Adding Operators during the installation process might increase the minimum resource requirements.
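    For example, applying this formula to a hypothetical single-socket server with four cores per socket and SMT enabled (two threads per core): (2 threads per core × 4 cores) × 1 socket = 8 vCPUs, which meets the minimum requirement in Table 1.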

    The server must have a Baseboard Management Controller (BMC) when booting with virtual media.

    BMC is not supported on IBM Z® and IBM Power®.

  • Networking: The server must have access to the internet, or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure DNS to resolve each of the following fully qualified domain names (FQDNs) to that IP address; example records follow the table:

    Table 2. Required DNS records

    • Kubernetes API: api.<cluster_name>.<base_domain>

      Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster.

    • Internal API: api-int.<cluster_name>.<base_domain>

      Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster.

    • Ingress route: *.apps.<cluster_name>.<base_domain>

      Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster.

    Without persistent IP addresses, communications between the API server and etcd might fail.
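    For example, with a hypothetical cluster name of sno, a base domain of example.com, and a node IP address of 192.168.1.10, the required records could look like the following zone-file entries (all values are placeholders; adapt them to your environment):

      api.sno.example.com.        IN  A  192.168.1.10
      api-int.sno.example.com.    IN  A  192.168.1.10
      *.apps.sno.example.com.     IN  A  192.168.1.10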