Creating a cluster with multi-architecture compute machines on Azure

To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations. You can then add an ARM64 compute machine set to your cluster to create a cluster with multi-architecture compute machines.

The following procedures explain how to generate an ARM64 boot image and create an Azure compute machine set that uses the ARM64 boot image. This adds ARM64 compute nodes to your cluster and deploys the number of ARM64 virtual machines (VMs) that you need.

Verifying cluster compatibility

Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible.

Prerequisites
  • You installed the OpenShift CLI (oc).

Procedure
  • You can check whether your cluster uses the multi-architecture payload by running the following command:

    $ oc adm release info -o jsonpath="{ .metadata.metadata}"
Verification
  1. If you see the following output, then your cluster is using the multi-architecture payload:

    {
     "release.openshift.io/architecture": "multi",
     "url": "https://access.redhat.com/errata/<errata_version>"
    }

    You can then begin adding multi-architecture compute nodes to your cluster.

  2. If you see the following output, then your cluster is not using the multi-architecture payload:

    {
     "url": "https://access.redhat.com/errata/<errata_version>"
    }

    To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines.
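
If you script this check, you can extract the architecture annotation directly from the release metadata. The following is a minimal sketch; it assumes the jq tool, which the boot image procedure later in this document also uses:

    $ oc adm release info -o json | jq -r '.metadata.metadata["release.openshift.io/architecture"] // "single"'

The command prints multi when the cluster uses the multi-architecture payload. The single fallback is only an example string for this sketch; it is printed when the annotation is absent.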

Creating an ARM64 boot image using the Azure image gallery

The following procedure describes how to manually generate an ARM64 boot image.

Prerequisites
  • You installed the Azure CLI (az).

  • You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.

Procedure
  1. Log in to your Azure account:

    $ az login
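
    The remaining commands in this procedure assume that environment variables for your resource group, storage account, container, and image gallery names are set, for example:

    $ export RESOURCE_GROUP=<resource_group_name>
    $ export STORAGE_ACCOUNT_NAME=<storage_account_name>
    $ export CONTAINER_NAME=<container_name>
    $ export GALLERY_NAME=<gallery_name>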
  2. Create a storage account and upload the arm64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:

    $ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS (1)
    1 The westus region is an example.
  3. Create a storage container using the storage account you generated:

    $ az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}
  4. Use the RHCOS stream JSON, available from the coreos-bootimages config map, to extract the URL and the aarch64 VHD name:

    1. Extract the URL field and set it to the RHCOS_VHD_ORIGIN_URL variable by running the following command:

      $ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url')
    2. Extract the aarch64 VHD name and set it to the BLOB_NAME variable by running the following command:

      $ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
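
    Optionally, verify the extracted values before you continue:

    $ echo "${RHCOS_VHD_ORIGIN_URL}"
    $ echo "${BLOB_NAME}"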
  5. Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container with the following commands:

    $ end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
    $ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`
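
    The date -u -d invocation assumes GNU date, as found on Linux. On macOS, where BSD date is the default, an equivalent expiry timestamp can be generated with:

    $ end=`date -u -v+30M '+%Y-%m-%dT%H:%MZ'`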
  6. Copy the RHCOS VHD into the storage container:

    $ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
     --source-uri "${RHCOS_VHD_ORIGIN_URL}" \
     --destination-blob "${BLOB_NAME}" --destination-container ${CONTAINER_NAME}

    You can check the status of the copying process with the following command:

    $ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy
    Example output
    {
     "completionTime": null,
     "destinationSnapshot": null,
     "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d",
     "incrementalCopy": null,
     "progress": "17179869696/17179869696",
     "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd",
     "status": "success", (1)
     "statusDescription": null
    }
    1 If the status parameter displays success, the copying process is complete.
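
    Rather than re-running the status check manually, you can poll until the copy finishes. The following loop is a minimal sketch; it assumes jq, as used earlier in this procedure:

    $ while [ "$(az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} \
        --account-name ${STORAGE_ACCOUNT_NAME} | jq -r .properties.copy.status)" != "success" ]; do \
        echo "Waiting for the VHD copy to complete..."; sleep 30; done

    If the copy fails, the status field reports a value other than success and this loop does not exit, so inspect the full copy object if the loop runs longer than expected.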
  7. Create an image gallery using the following command:

    $ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}

    Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition.

    $ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
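
    Optionally, confirm that the image definition reports the Arm64 architecture by querying its architecture field:

    $ az sig image-definition show --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --query architecture -o tsv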
  8. Get the URL of the VHD and set it to the RHCOS_VHD_URL variable by running the following command:

    $ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n "${BLOB_NAME}" -o tsv)
  9. Use the RHCOS_VHD_URL value, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version.

    $ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
  10. Your arm64 boot image is now generated. You can access the ID of your image with the following command:

    $ az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0

    The following example image ID is used in the resourceID parameter of the compute machine set:

    Example resourceID
    /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0
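
To avoid copying the image ID into the compute machine set manually, you can capture it in a shell variable. This is a sketch that uses the az CLI --query option; the RESOURCE_ID variable name is chosen only for this example:

    $ RESOURCE_ID=$(az sig image-version show -r ${GALLERY_NAME} -g ${RESOURCE_GROUP} -i rhcos-arm64 -e 1.0.0 --query id -o tsv)

Note that the returned ID includes a /subscriptions/<subscription_id> prefix, whereas the sample compute machine set in the following procedure uses the path beginning at /resourceGroups.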

Adding a multi-architecture compute machine set to your cluster

To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure".

Prerequisites
  • You installed the OpenShift CLI (oc).

Procedure
  • Create a compute machine set and modify the resourceID and vmSize parameters with the following command. This compute machine set controls the arm64 worker nodes in your cluster:

    $ oc create -f arm64-machine-set-0.yaml
    Sample YAML compute machine set with arm64 boot image
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
      name: <infrastructure_id>-arm64-machine-set-0
      namespace: openshift-machine-api
    spec:
      replicas: 2
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id>
          machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machine-role: worker
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-arm64-machine-set-0
        spec:
          lifecycleHooks: {}
          metadata: {}
          providerSpec:
            value:
              acceleratedNetworking: true
              apiVersion: machine.openshift.io/v1beta1
              credentialsSecret:
                name: azure-cloud-credentials
                namespace: openshift-machine-api
              image:
                offer: ""
                publisher: ""
                resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 (1)
                sku: ""
                version: ""
              kind: AzureMachineProviderSpec
              location: <region>
              managedIdentity: <infrastructure_id>-identity
              networkResourceGroup: <infrastructure_id>-rg
              osDisk:
                diskSettings: {}
                diskSizeGB: 128
                managedDisk:
                  storageAccountType: Premium_LRS
                osType: Linux
              publicIP: false
              publicLoadBalancer: <infrastructure_id>
              resourceGroup: <infrastructure_id>-rg
              subnet: <infrastructure_id>-worker-subnet
              userDataSecret:
                name: worker-user-data
              vmSize: Standard_D4ps_v5 (2)
              vnet: <infrastructure_id>-vnet
              zone: "<zone>"
    1 Set the resourceID parameter to the arm64 boot image.
    2 Set the vmSize parameter to the instance type used in your installation. Example instance types include Standard_D4ps_v5 and Standard_D8ps_v5.
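
    The <infrastructure_id> placeholder must match your cluster's infrastructure ID, which you can retrieve by running the following command:

    $ oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'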
Verification
  1. Verify that the new ARM64 machines are running by entering the following command:

    $ oc get machineset -n openshift-machine-api
    Example output
    NAME                                                DESIRED  CURRENT  READY  AVAILABLE  AGE
    <infrastructure_id>-arm64-machine-set-0                   2        2      2          2  10m
  2. You can check that the nodes are ready and schedulable with the following command:

    $ oc get nodes
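
    To confirm that the new nodes report the expected architecture, you can filter on the standard kubernetes.io/arch node label:

    $ oc get nodes -l kubernetes.io/arch=arm64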