To create a cluster with multi-architecture compute machines on IBM Power® (ppc64le), you must have an existing single-architecture (x86_64) cluster. You can then add ppc64le compute machines to your OpenShift Container Platform cluster.
Before you can add |
The following procedures explain how to create RHCOS compute machines by using an ISO image or network PXE booting. This allows you to add ppc64le nodes to your cluster and deploy a cluster with multi-architecture compute machines.
To create an IBM Power® (ppc64le) cluster with x86_64 multi-architecture compute machines, follow the instructions in Installing a cluster on IBM Power®. You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z.
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator and deploy a ClusterPodPlacementConfig object. |
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
You installed the OpenShift CLI (oc).
When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner. |
You should limit the number of network hops between the compute and control plane as much as possible.
Log in to the OpenShift CLI (oc).
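For example, you can log in with a token, as in the following sketch; the token and server URL are placeholders that you must replace with values for your cluster:
$ oc login --token=<token> --server=https://api.<cluster_name>.<base_domain>:6443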
You can check which architecture payload your cluster uses by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
If you see the following output, your cluster is using the multi-architecture payload:
{
"release.openshift.io/architecture": "multi",
"url": "https://access.redhat.com/errata/<errata_version>"
}
You can then begin adding multi-architecture compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{
"url": "https://access.redhat.com/errata/<errata_version>"
}
To migrate your cluster so that it supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
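As a sketch, assuming your cluster is on a version that supports migration to the multi-architecture payload, the migration is typically initiated with the following command; confirm the full procedure in the linked documentation before running it:
$ oc adm upgrade --to-multi-arch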
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines.
Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
You must have the OpenShift CLI (oc) installed.
Extract the Ignition config file from the cluster by running the following command:
$ oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign
Upload the worker.ign Ignition config file that you exported from your cluster to your HTTP server. Note the URL of this file.
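For example, assuming your HTTP server is a Linux host that serves files from /var/www/html and that you have SSH access to it, you might copy the file as follows; the user name and document root are assumptions for this sketch:
$ scp worker.ign <user>@<HTTP_server>:/var/www/html/worker.ign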
You can validate that the Ignition config file is available at the URL. The following example gets the Ignition config file for the compute node:
$ curl -k http://<HTTP_server>/worker.ign
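The --ignition-hash option used later in this procedure expects the SHA512 digest of this file. A minimal sketch of computing the digest, assuming curl and sha512sum are available on your workstation:
$ curl -sk http://<HTTP_server>/worker.ign | sha512sum | awk '{print $1}'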
You can obtain the URL of the ISO image for booting your new machine by running the following command:
RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')
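In this sketch, <architecture> is a placeholder that you replace with your target architecture, such as ppc64le. You can then download the image to a local file; the file name rhcos-live.iso is an arbitrary choice:
$ curl -L -o rhcos-live.iso "${RHCOS_VHD_ORIGIN_URL}"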
Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
Burn the ISO image to a disk and boot it directly.
Use ISO redirection with a LOM interface.
Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment.
You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command instead of adding kernel arguments. |
Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to:
$ sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> (1) (2)
1 | You must run the coreos-installer command by using sudo, because the core user does not have the required root privileges to perform the installation. |
2 | The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. |
If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer. |
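A minimal sketch of adding a CA certificate to the system trust store in the RHCOS live environment, assuming the certificate is saved as ca.crt in the current directory:
$ sudo cp ca.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust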
The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2:
$ sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b
Monitor the progress of the RHCOS installation on the console of the machine.
Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise.
Continue to create more compute machines for your cluster.
You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting.
Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel, and initramfs files that you uploaded to your HTTP server during cluster installation.
You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them.
If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation.
Confirm that your PXE or iPXE installation for the RHCOS images is correct.
For PXE:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> (1)
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img (2)
1 | Specify the location of the live kernel file that you uploaded to your HTTP server. |
2 | Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. |
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. |
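For example, as a sketch using common console values that you should adapt to your hardware, you might append the following arguments to the APPEND line:
console=tty0 console=ttyS0,115200n8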
For iPXE (x86_64 + ppc64le):
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign (1) (2)
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img (3)
boot
1 | Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. |
2 | If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp. |
3 | Specify the location of the initramfs file that you uploaded to your HTTP server. |
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. |
To network boot the CoreOS |
For PXE (with UEFI and GRUB as second stage) on ppc64le:
menuentry 'Install CoreOS' {
    linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign (1) (2)
    initrd rhcos-<version>-live-initramfs.<architecture>.img (3)
}
1 | Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP server. |
2 | If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1, set ip=eno1:dhcp. |
3 | Specify the location of the initramfs file that you uploaded to your TFTP server. |
Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
You added machines to your cluster.
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master 63m v1.29.4
master-1 Ready master 63m v1.29.4
master-2 Ready master 64m v1.29.4
The output lists all of the machines that you created.
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, then after all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. |
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. |
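A minimal sketch of one possible approach, assuming a host with the oc CLI and cluster-admin access; production environments typically rely on a dedicated approver component rather than a shell loop like this:
$ while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{" "}}{{.spec.username}}{{"\n"}}{{end}}{{end}}' \
      | awk '$2 ~ /^system:node:/ {print $1}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
  done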
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
worker-0-ppc64le Ready worker 42d v1.29.5 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
worker-1-ppc64le Ready worker 42d v1.29.5 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
master-0-x86 Ready control-plane,master 75d v1.29.5 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
master-1-x86 Ready control-plane,master 75d v1.29.5 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
master-2-x86 Ready control-plane,master 75d v1.29.5 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
worker-0-x86 Ready worker 75d v1.29.5 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
worker-1-x86 Ready worker 75d v1.29.5 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.29.5-3.rhaos4.15.gitb36169e.el9
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. |
For more information on CSRs, see Certificate Signing Requests.