To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations.
You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines.
After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster.
Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.
You installed the OpenShift CLI (oc).
Log in to the OpenShift CLI (oc).
You can check that your cluster uses the multi-architecture payload by running the following command:
$ oc adm release info -o jsonpath="{ .metadata.metadata}"
If you see the following output, your cluster is using the multi-architecture payload:
{
"release.openshift.io/architecture": "multi",
"url": "https://access.redhat.com/errata/<errata_version>"
}
You can then begin adding multi-arch compute nodes to your cluster.
If you see the following output, your cluster is not using the multi-architecture payload:
{
"url": "https://access.redhat.com/errata/<errata_version>"
}
To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
The following procedure describes how to manually generate a 64-bit ARM boot image.
You installed the Azure CLI (az).
You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Log in to your Azure account:
$ az login
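The remaining commands in this procedure rely on several shell variables. The following is a minimal sketch with placeholder values; all of the example names are assumptions that you must replace with your own, and the resource group can be either the one that the installation program created or a custom-named resource group:
$ export RESOURCE_GROUP=<resource_group>              # for example, mycluster-rg
$ export STORAGE_ACCOUNT_NAME=<storage_account_name>  # for example, multiarchvhds (lowercase alphanumeric characters only)
$ export CONTAINER_NAME=<container_name>              # for example, vhd
$ export GALLERY_NAME=<gallery_name>                  # for example, rhcosgallery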
Create a storage account and upload the aarch64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group; however, the boot image can also be uploaded to a custom named resource group:
$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS (1)
1 | The westus value is an example region. |
Create a storage container using the storage account you generated:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name:
Extract the URL field and set it as the RHCOS_VHD_ORIGIN_URL variable by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')
Extract the aarch64 VHD name and set it as the BLOB_NAME variable by running the following command:
$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
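Optionally, you can confirm that both variables are set before continuing. This quick check is not part of the documented procedure:
$ echo "${RHCOS_VHD_ORIGIN_URL}"
$ echo "${BLOB_NAME}"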
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the RHCOS VHD into the storage container:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
--source-uri "${RHCOS_VHD_ORIGIN_URL}" \
--destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process with the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
{
"completionTime": null,
"destinationSnapshot": null,
"id": "1fd97630-03ca-489a-8c4e-cfe839c9627d",
"incrementalCopy": null,
"progress": "17179869696/17179869696",
"source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd",
"status": "success", (1)
"statusDescription": null
}
1 | If the status parameter displays success, the copy operation is complete. |
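If you prefer to wait for the copy operation to finish before continuing, the following loop is a minimal sketch that polls the same status field until it reports success:
$ while [ "$(az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq -r .properties.copy.status)" != "success" ]; do sleep 30; done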
Create an image gallery using the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition.
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
To get the URL of the VHD and set it as the RHCOS_VHD_URL variable, run the following command:
$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the RHCOS_VHD_URL value, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version.
$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
Your arm64 boot image is now generated. You can access the ID of your image with the following command:
$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0
The following example image ID is used in the resourceID parameter of the compute machine set:
resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
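If you want to reuse this ID later, you can capture it in a shell variable. The RHCOS_ARM64_IMAGE_ID name is only an example, and the sketch relies on the standard Azure CLI --query and -o tsv output options:
$ RHCOS_ARM64_IMAGE_ID=$(az sig image-version show -r ${GALLERY_NAME} -g ${RESOURCE_GROUP} -i rhcos-arm64 -e 1.0.0 --query id -o tsv)
$ echo "${RHCOS_ARM64_IMAGE_ID}"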
The following procedure describes how to manually generate a 64-bit x86 boot image.
You installed the Azure CLI (az).
You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
Log in to your Azure account by running the following command:
$ az login
Create a storage account and upload the x86_64 virtual hard disk (VHD) to your storage account by running the following command. The OpenShift Container Platform installation program creates a resource group. However, the boot image can also be uploaded to a custom named resource group:
$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS (1)
1 | The westus value is an example region. |
Create a storage container using the storage account you generated by running the following command:
$ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
Use the OpenShift Container Platform installation program JSON file to extract the URL and x86_64 VHD name:
Extract the URL field and set it as the RHCOS_VHD_ORIGIN_URL variable by running the following command:
$ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url')
Extract the x86_64 VHD name and set it as the BLOB_NAME variable by running the following command:
$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".release')-azure.x86_64.vhd
Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container by running the following commands:
$ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
$ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
Copy the RHCOS VHD into the storage container by running the following command:
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
--source-uri "${RHCOS_VHD_ORIGIN_URL}" \
--destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}
You can check the status of the copying process by running the following command:
$ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
{
"completionTime": null,
"destinationSnapshot": null,
"id": "1fd97630-03ca-489a-8c4e-cfe839c9627d",
"incrementalCopy": null,
"progress": "17179869696/17179869696",
"source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd",
"status": "success", (1)
"statusDescription": null
}
1 | If the status parameter displays success, the copy operation is complete. |
Create an image gallery by running the following command:
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
Use the image gallery to create an image definition by running the following command:
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2
In this example command, rhcos-x86_64 is the name of the image definition.
To get the URL of the VHD and set it as the RHCOS_VHD_URL variable, run the following command:
$ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
Use the RHCOS_VHD_URL value, your storage account, resource group, and image gallery to create an image version by running the following command:
$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
In this example, 1.0.0 is the image version.
Optional: Access the ID of the generated x86_64 boot image by running the following command:
$ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0
The following example image ID is used in the resourceID parameter of the compute machine set:
resourceID
/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0
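Optionally, you can confirm that the image definition was registered with the expected architecture before you reference it in a compute machine set. This sketch relies on the standard Azure CLI --query and -o tsv output options; for the definition created earlier with --architecture x64, the command should print x64:
$ az sig image-definition show --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --query architecture -o tsv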
After creating a multi-architecture cluster, you can add nodes with different architectures.
You can add multi-architecture compute machines to a multi-architecture cluster in the following ways:
Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture.
Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture.
To create a custom compute machine set on Azure, see "Creating a compute machine set on Azure".
Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator and deploy a ClusterPodPlacementConfig object.
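A ClusterPodPlacementConfig object can be created with a manifest similar to the following minimal sketch. It assumes that the Multiarch Tuning Operator is already installed, that the API version is multiarch.openshift.io/v1beta1, and that the object is named cluster; check the Multiarch Tuning Operator documentation for the exact schema:
$ oc apply -f - <<EOF
apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
EOF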
You installed the OpenShift CLI (oc).
You created a 64-bit ARM or 64-bit x86 boot image.
You used the installation program to create a 64-bit ARM or 64-bit x86 single-architecture Azure cluster with the multi-architecture installer binary.
Log in to the OpenShift CLI (oc).
Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster.
MachineSet object for an Azure 64-bit ARM or 64-bit x86 compute node
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker
name: <infrastructure_id>-machine-set-0
namespace: openshift-machine-api
spec:
replicas: 2
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: <infrastructure_id>
machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0
spec:
lifecycleHooks: {}
metadata: {}
providerSpec:
value:
acceleratedNetworking: true
apiVersion: machine.openshift.io/v1beta1
credentialsSecret:
name: azure-cloud-credentials
namespace: openshift-machine-api
image:
offer: ""
publisher: ""
resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 (1)
sku: ""
version: ""
kind: AzureMachineProviderSpec
location: <region>
managedIdentity: <infrastructure_id>-identity
networkResourceGroup: <infrastructure_id>-rg
osDisk:
diskSettings: {}
diskSizeGB: 128
managedDisk:
storageAccountType: Premium_LRS
osType: Linux
publicIP: false
publicLoadBalancer: <infrastructure_id>
resourceGroup: <infrastructure_id>-rg
subnet: <infrastructure_id>-worker-subnet
userDataSecret:
name: worker-user-data
vmSize: Standard_D4ps_v5 (2)
vnet: <infrastructure_id>-vnet
zone: "<zone>"
1 | Set the resourceID parameter to the arm64 or amd64 boot image. |
2 | Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or Standard_D8ps_v5. |
Create the compute machine set by running the following command:
$ oc create -f <file_name> (1)
1 | Replace <file_name> with the name of the YAML file that contains the compute machine set configuration. For example: arm64-machine-set-0.yaml or amd64-machine-set-0.yaml. |
Verify that the new machines are running by entering the following command:
$ oc get machineset -n openshift-machine-api
The output must include the machine set that you created.
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
<infrastructure_id>-machine-set-0   2         2         2       2           10m
You can check if the nodes are ready and schedulable by running the following command:
$ oc get nodes
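To confirm that the new nodes report the expected CPU architecture, you can also display the kubernetes.io/arch node label as an extra column:
$ oc get nodes -L kubernetes.io/arch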