Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer (AI) can generate with the cluster name, base domain, Secure Shell (SSH) public key, and pull secret.
On the administration node, open a browser and navigate to Install OpenShift with the Assisted Installer.
Click Create Cluster to create a new cluster.
In the Cluster name field, enter a name for the cluster.
In the Base domain field, enter a base domain. For example:
example.com
All DNS records must be subdomains of this base domain and include the cluster name. You cannot change the base domain after cluster installation. For example:
<cluster-name>.example.com
Select Install single node OpenShift (SNO).
Read the 4.9 release notes, which outline some of the limitations for installing OpenShift Container Platform on a single node.
Select the OpenShift Container Platform version.
Optional: Edit the pull secret.
Click Next.
Click Generate Discovery ISO.
Select Full image file to boot with a USB drive or PXE. Select Minimal image file to boot with virtual media.
Add the SSH public key of the administration node to the Public key field.
Click Generate Discovery ISO.
Download the discovery ISO.
Make a note of the discovery ISO URL for installing with virtual media.
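If you plan to install with virtual media, you can also download the image directly from that URL. The following is a sketch, with <discovery_iso_url> standing in for the URL you noted:
$ curl -L -o discovery_image_sno.iso '<discovery_iso_url>'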
Installing OpenShift Container Platform on a single node requires a discovery ISO, which you can generate with the following procedure.
Download the OpenShift Container Platform client (oc) and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz > oc.tar.gz
$ tar zxf oc.tar.gz
$ chmod +x oc
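Optionally, verify that the client runs by printing its version:
$ ./oc version --client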
Set the OpenShift Container Platform version:
$ OCP_VERSION=<ocp_version> (1)
(1) Replace <ocp_version> with the current version, for example, latest-4.9.
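For example, to track the latest 4.9 release:
$ OCP_VERSION=latest-4.9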
Download the OpenShift Container Platform installer and make it available for use by entering the following commands:
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz
$ tar zxvf openshift-install-linux.tar.gz
$ chmod +x openshift-install
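Optionally, confirm that the installer binary runs and reports the expected version:
$ ./openshift-install version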
Retrieve the RHCOS ISO URL:
$ ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep x86_64 | grep iso | cut -d\" -f4)
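You can print the variable to confirm that a valid URL was extracted:
$ echo $ISO_URL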
Download the RHCOS ISO:
$ curl -L $ISO_URL > rhcos-live.x86_64.iso
Prepare the install-config.yaml file:
apiVersion: v1
baseDomain: <domain> (1)
compute:
- name: worker
  replicas: 0 (2)
controlPlane:
  name: master
  replicas: 1 (3)
metadata:
  name: <name> (4)
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: <IP_address>/<prefix> (5)
    hostPrefix: <prefix> (6)
  serviceNetwork:
  - <IP_address>/<prefix> (7)
platform:
  none: {}
bootstrapInPlace:
  installationDisk: <path_to_install_drive> (8)
pullSecret: '<pull_secret>' (9)
sshKey: |
  <ssh_key> (10)
(1) Add the cluster domain name.
(2) Set the compute replicas to 0. This makes the control plane node schedulable.
(3) Set the controlPlane replicas to 1. In conjunction with the previous compute setting, this setting ensures the cluster runs on a single node.
(4) Set the metadata name to the cluster name.
(5) Set the clusterNetwork CIDR.
(6) Set the clusterNetwork host prefix. Pods receive their IP addresses from this pool.
(7) Set the serviceNetwork CIDR. Services receive their IP addresses from this pool.
(8) Set the path to the installation disk drive.
(9) Copy the pull secret from the Red Hat OpenShift Cluster Manager. In step 1, click Download pull secret and add the contents to this configuration setting.
(10) Add the public SSH key from the administration host so that you can log in to the cluster after installation.
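For reference, the following is an illustrative, filled-in configuration. It assumes a cluster named sno under the base domain example.com, the common default network CIDRs (10.128.0.0/14 with a /23 host prefix for the cluster network, 172.30.0.0/16 for the service network), and /dev/sda as the installation disk; substitute values for your environment and keep your real pull secret and SSH key in place of the placeholders:
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
metadata:
  name: sno
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda
pullSecret: '<pull_secret>'
sshKey: |
  <ssh_key>
If the server has multiple drives, a stable device path under /dev/disk/by-id is safer for installationDisk than /dev/sda.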
Generate OpenShift Container Platform assets:
$ mkdir ocp
$ cp install-config.yaml ocp
$ ./openshift-install --dir=ocp create single-node-ignition-config
Embed the ignition data into the RHCOS ISO:
$ alias coreos-installer='podman run --privileged --pull always --rm \
-v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data \
-w /data quay.io/coreos/coreos-installer:release'
$ cp ocp/bootstrap-in-place-for-live-iso.ign iso.ign
$ coreos-installer iso ignition embed -fi iso.ign rhcos-live.x86_64.iso
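To verify that the ignition data is now embedded, you can print it back out of the image with the matching coreos-installer subcommand:
$ coreos-installer iso ignition show rhcos-live.x86_64.iso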
Installing with USB media involves creating a bootable USB drive with the discovery ISO on the administration node. Booting the server with the USB drive prepares the node for a single node installation.
On the administration node, insert a USB drive into a USB port.
Create a bootable USB drive:
# dd if=<path_to_iso> of=<path_to_usb> status=progress
For example:
# dd if=discovery_image_sno.iso of=/dev/sdb status=progress
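Because dd overwrites the target device without confirmation, double-check that the of= path is the USB drive and not a system disk. Listing block devices first helps:
# lsblk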
After the ISO is copied to the USB drive, you can use the USB drive to install OpenShift Container Platform.
On the server, insert the USB drive into a USB port.
Reboot the server and enter the BIOS settings upon reboot.
Change boot drive order to make the USB drive boot first.
Save and exit the BIOS settings. The server will boot with the discovery image.
If you created the ISO using the Assisted Installer, use this procedure to monitor the installation.
On the administration host, return to the browser and refresh the page. If necessary, reload the Install OpenShift with the Assisted Installer page and select the cluster name.
Click Next until you reach step 3, Networking.
Select a subnet from the available subnets.
Keep Use the same host discovery SSH key checked. You can change the SSH public key, if necessary.
Click Next to proceed to the Review and Create step.
Click Install cluster.
Monitor the installation’s progress by watching the cluster events. After the installation process finishes writing the operating system image to the server’s drive, the server restarts. Remove the USB drive and reset the BIOS to boot from the server’s local media rather than the USB drive.
The server will restart several times, deploying the control plane.
If you created the ISO manually, use this procedure to monitor the installation.
Monitor the installation:
$ ./openshift-install --dir=ocp wait-for install-complete
The server will restart several times while deploying the control plane.
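Optionally, you can watch the bootstrap phase separately before waiting for full completion:
$ ./openshift-install --dir=ocp wait-for bootstrap-complete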
Optional: After the installation is complete, check the environment:
$ export KUBECONFIG=ocp/auth/kubeconfig
$ oc get nodes
$ oc get clusterversion
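On a healthy single-node cluster, oc get nodes reports one node carrying both the master and worker roles. The following output is illustrative only; the node name, age, and version vary (v1.22.x corresponds to OpenShift Container Platform 4.9):
NAME              STATUS   ROLES           AGE   VERSION
sno.example.com   Ready    master,worker   10m   v1.22.1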