You can create a different compute machine set to serve a specific purpose in your OKD cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
| You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. To view the platform type for your cluster, run the following command: $ oc get infrastructure cluster -o jsonpath='{.status.platform}' |
This sample YAML defines a compute machine set that runs in zone "1" of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <role> (2)
    machine.openshift.io/cluster-api-machine-type: <role>
  name: <infrastructure_id>-<role>-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region>
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name>
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image: (4)
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest (5)
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> (6)
          managedIdentity: <infrastructure_id>-identity
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg
          sshPrivateKey: ""
          sshPublicKey: ""
          tags:
            - name: <custom_tag_name> (7)
              value: <custom_tag_value>
          subnet: <infrastructure_id>-<role>-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet
          zone: "1" (8)| 1 | Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift cli installed, you can obtain the infrastructure ID by running the following command: You can obtain the subnet by running the following command: You can obtain the vnet by running the following command:  | ||
| 2 | Specify the node label to add. |
| 3 | Specify the infrastructure ID, node label, and region. |
| 4 | Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". |
| 5 | Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. |
| 6 | Specify the region to place machines on. |
| 7 | Optional: Specify custom tags in your machine set. Provide the tag name in the <custom_tag_name> field and the corresponding tag value in the <custom_tag_value> field. |
| 8 | Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. |
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
  -n openshift-machine-api -o yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: (3)
        ...
| 1 | The cluster infrastructure ID. |
| 2 | A default node label. |
| 3 | The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. |
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
Your cluster uses a cluster autoscaler.
On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: machine-set-name
spec:
  template:
    spec:
      metadata:
        labels:
          cluster-api/accelerator: nvidia-t4 (1)
| 1 | Specify a label of your choice that consists of alphanumeric characters, -, _, or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. |
You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:
While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OKD are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
| Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances. You should only modify the FCOS image for compute machines to use an Azure Marketplace image. Control plane machines and infrastructure nodes do not require an OKD subscription and use the public FCOS default image by default, which does not incur subscription costs on your Azure bill. Therefore, you should not modify the cluster default boot image or the control plane boot images. Applying the Azure Marketplace image to them will incur additional licensing costs that cannot be recovered. |
You have installed the Azure CLI client (az).
Your Azure account is entitled for the offer and you have logged in to this account with the Azure CLI client.
Display all of the available OKD images by running one of the following commands:
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Offer          Publisher       Sku                 Urn                                                             Version
-------------  --------------  ------------------  --------------------------------------------------------------  -----------------
rh-ocp-worker  RedHat          rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
rh-ocp-worker  RedHat          rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Offer          Publisher       Sku                 Urn                                                                      Version
-------------  --------------  ------------------  -----------------------------------------------------------------      -----------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409              4.15.2024072409
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409         4.15.2024072409
| Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. |
Inspect the image for your offer by running one of the following commands:
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Review the terms of the offer by running one of the following commands:
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Accept the terms of the offering by running one of the following commands:
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Record the image details of your offer, specifically the values for publisher, offer, sku, and version.
Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:
providerSpec image values for Azure Marketplace machines
providerSpec:
  value:
    image:
      offer: rh-ocp-worker
      publisher: redhat
      resourceID: ""
      sku: rh-ocp-worker
      type: MarketplaceWithPlan
      version: 413.92.2023101700
You can enable boot diagnostics on Azure machines that your machine set creates.
Have an existing Microsoft Azure cluster.
Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:
For an Azure Managed storage account:
providerSpec:
  diagnostics:
    boot:
      storageAccountType: AzureManaged (1)
| 1 | Specifies an Azure Managed storage account. |
For an Azure Unmanaged storage account:
providerSpec:
  diagnostics:
    boot:
      storageAccountType: CustomerManaged (1)
      customerManaged:
        storageAccountURI: https://<storage-account>.blob.core.windows.net (2)
| 1 | Specifies an Azure Unmanaged storage account. |
| 2 | Replace <storage-account> with the name of your storage account. |
| Only the Azure Blob Storage data service is supported. |
On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
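Alternatively, you can fetch the serial log with the Azure CLI. This is a sketch that assumes the machine name matches the Azure VM name and that you know its resource group:
$ az vm boot-diagnostics get-boot-log \
    --name <machine_name> \
    --resource-group <resource_group>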
You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OKD begins to remove the workloads from the affected instances when Azure issues the termination warning.
Interruptions can occur when using Spot VMs for the following reasons:
The instance price exceeds your maximum price
The supply of Spot VMs decreases
Azure needs capacity back
When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM.
You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file.
Add the following line under the providerSpec field:
providerSpec:
  value:
    spotVMOptions: {}
You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example, you can set maxPrice: '0.98765'. If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price.
Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice. However, an instance can still be evicted due to capacity restrictions.
| It is strongly recommended to use the default standard VM price as the maxPrice and to not set the maximum price for Spot VMs. |
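If you do cap the price, set the field under spotVMOptions. A minimal sketch, assuming an hourly cap of 0.98765:
providerSpec:
  value:
    spotVMOptions:
      maxPrice: '0.98765'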
You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging.
For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs.
You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file.
Have an existing Microsoft Azure cluster.
Edit the custom resource (CR) by running the following command:
$ oc edit machineset <machine-set-name>
where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks.
Add the following to the providerSpec field:
providerSpec:
  value:
    ...
    osDisk:
       ...
       diskSettings: (1)
         ephemeralStorageLocation: Local (1)
       cachingType: ReadOnly (1)
       managedDisk:
         storageAccountType: Standard_LRS (2)
       ...
| 1 | These lines enable the use of Ephemeral OS disks. |
| 2 | Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type. | 
| The implementation of Ephemeral OS disk support in OKD only supports the CacheDisk placement type. Do not change the placement configuration setting. |
Create a compute machine set using the updated configuration:
$ oc create -f <machine-set-config>.yaml
On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement.
You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.
You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods.
| Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs. | 
You can deploy machines with ultra disks on Azure by editing your machine set YAML file.
Have an existing Microsoft Azure cluster.
Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command:
$ oc -n openshift-machine-api \
get secret <role>-user-data \ (1)
--template='{{index .data.userData | base64decode}}' | jq > userData.txt (2)
| 1 | Replace <role> with worker. |
| 2 | Specify userData.txt as the name of the new custom secret. |
In a text editor, open the userData.txt file and locate the final } character in the file.
On the immediately preceding line, add a comma (,).
Create a new line after the comma and add the following configuration details. For orientation, a sketch of how the end of the merged file might look appears after the callout descriptions below:
"storage": {
  "disks": [ (1)
    {
      "device": "/dev/disk/azure/scsi1/lun0", (2)
      "partitions": [ (3)
        {
          "label": "lun0p1", (4)
          "sizeMiB": 1024, (5)
          "startMiB": 0
        }
      ]
    }
  ],
  "filesystems": [ (6)
    {
      "device": "/dev/disk/by-partlabel/lun0p1",
      "format": "xfs",
      "path": "/var/lib/lun0p1"
    }
  ]
},
"systemd": {
  "units": [ (7)
    {
      "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", (8)
      "enabled": true,
      "name": "var-lib-lun0p1.mount"
    }
  ]
}
| 1 | The configuration details for the disk that you want to attach to a node as an ultra disk. |
| 2 | Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. |
| 3 | The configuration details for a new partition on the disk. |
| 4 | Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0. |
| 5 | Specify the total size in MiB of the partition. |
| 6 | Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. |
| 7 | Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. |
| 8 | For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device. |
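The tail of the merged userData.txt might then look like the following sketch. The <existing_final_field> name is a placeholder for whatever field currently ends your file; only the comma after it and the new stanzas are significant:
{
  ...
  "<existing_final_field>": "...",
  "storage": {
    ...
  },
  "systemd": {
    ...
  }
}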
Extract the disabling template value to a file called disableTemplating.txt by running the following command:
$ oc -n openshift-machine-api get secret <role>-user-data \ (1)
--template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
| 1 | Replace <role> with worker. |
Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command:
$ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ (1)
--from-file=userData=userData.txt \
--from-file=disableTemplating=disableTemplating.txt
| 1 | For <role>-user-data-x5, specify the name of the secret. Replace <role> with worker. |
Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command:
$ oc edit machineset <machine_set_name>
where <machine_set_name> is the machine set that you want to provision machines with ultra disks.
Add the following lines in the positions indicated:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
spec:
  template:
    spec:
      metadata:
        labels:
          disk: ultrassd (1)
      providerSpec:
        value:
          ultraSSDCapability: Enabled (2)
          dataDisks: (2)
          - nameSuffix: ultrassd
            lun: 0
            diskSizeGB: 4
            deletionPolicy: Delete
            cachingType: None
            managedDisk:
              storageAccountType: UltraSSD_LRS
          userDataSecret:
            name: <role>-user-data-x5 (3)
| 1 | Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. |
| 2 | These lines enable the use of ultra disks. For dataDisks, include the entire stanza. |
| 3 | Specify the user data secret created earlier. Replace <role> with worker. |
Create a machine set using the updated configuration by running the following command:
$ oc create -f <machine_set_name>.yaml
Validate that the machines are created by running the following command:
$ oc get machines
The machines should be in the Running state.
For a machine that is running and has a node attached, validate the partition by running the following command:
$ oc debug node/<node_name> -- chroot /host lsblk
In this command, oc debug node/<node_name> starts a debugging shell on the node <node_name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.
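Output similar to the following sketch indicates that the data disk and its partition are attached and mounted. The device names, sizes, and exact tree are illustrative and depend on your configuration:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  128G  0 disk
`-sda4   8:4    0  128G  0 part /sysroot
sdb      8:16   0    4G  0 disk
`-sdb1   8:17   0    1G  0 part /var/lib/lun0p1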
To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example:
apiVersion: v1
kind: Pod
metadata:
  name: ssd-benchmark1
spec:
  containers:
  - name: ssd-benchmark1
    image: nginx
    ports:
      - containerPort: 80
        name: "http-server"
    volumeMounts:
    - name: lun0p1
      mountPath: "/tmp"
  volumes:
    - name: lun0p1
      hostPath:
        path: /var/lib/lun0p1
        type: DirectoryOrCreate
  nodeSelector:
    disk: ultrassd
Use the information in this section to understand and recover from issues you might encounter.
If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.
For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:
StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.
To resolve this issue, verify that your machine set configuration is correct.
If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:
failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesclient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."
To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.
You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.
An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set.
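The following Azure CLI commands are a sketch of one way to create these prerequisites. Every name shown is a placeholder, and the key URL comes from the key that you create in the vault:
$ az keyvault create --name <keyvault_name> \
    --resource-group <resource_group> --location <region> \
    --enable-purge-protection true
$ az keyvault key create --vault-name <keyvault_name> \
    --name <key_name> --protection software
$ az disk-encryption-set create --name <disk_encryption_set_name> \
    --resource-group <resource_group> \
    --source-vault <keyvault_name> --key-url <key_url>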
Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:
providerSpec:
  value:
    osDisk:
      diskSizeGB: 128
      managedDisk:
        diskEncryptionSet:
          id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
        storageAccountType: Premium_LRS
| Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
OKD 4.16 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.
| Some feature combinations result in an invalid configuration. | 
| Secure Boot[1] | vTPM[2] | Valid configuration | 
|---|---|---|
| Enabled | Enabled | Yes | 
| Enabled | Disabled | Yes | 
| Enabled | Omitted | Yes | 
| Disabled | Enabled | Yes | 
| Omitted | Enabled | Yes | 
| Disabled | Disabled | No | 
| Omitted | Disabled | No | 
| Omitted | Omitted | No | 
[1] Using the secureBoot field.
[2] Using the virtualizedTrustedPlatformModule field.
For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines.
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field to provide a valid configuration:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    machines_v1beta1_machine_openshift_io:
      spec:
        providerSpec:
          value:
            securityProfile:
              settings:
                securityType: TrustedLaunch (1)
                trustedLaunch:
                  uefiSettings: (2)
                    secureBoot: Enabled (3)
                    virtualizedTrustedPlatformModule: Enabled (4)
# ...
| 1 | Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. |
| 2 | Specifies which UEFI security features to use. This section is required for all valid configurations. | 
| 3 | Enables UEFI Secure Boot. | 
| 4 | Enables the use of a vTPM. | 
On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured.
| Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. | 
OKD 4.16 supports Azure confidential virtual machines (VMs).
| Confidential VMs are currently not supported on 64-bit ARM architectures. | 
By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance.
For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines.
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      providerSpec:
        value:
          osDisk:
            # ...
            managedDisk:
              securityProfile: (1)
                securityEncryptionType: VMGuestStateOnly (2)
            # ...
          securityProfile: (3)
            settings:
              securityType: ConfidentialVM (4)
              confidentialVM:
                uefiSettings: (5)
                  secureBoot: Disabled (6)
                  virtualizedTrustedPlatformModule: Enabled (7)
          vmSize: Standard_DC16ads_v5 (8)
# ...
| 1 | Specifies security profile settings for the managed disk when using a confidential VM. |
| 2 | Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. | 
| 3 | Specifies security profile settings for the confidential VM. | 
| 4 | Enables the use of confidential VMs. This value is required for all valid configurations. | 
| 5 | Specifies which UEFI security features to use. This section is required for all valid configurations. | 
| 6 | Disables UEFI Secure Boot. | 
| 7 | Enables the use of a vTPM. | 
| 8 | Specifies an instance type that supports confidential VMs. | 
On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.
Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation.
Consider the following limitations when deciding whether to use Accelerated Networking:
Accelerated Networking is only supported on clusters where the Machine API is operational.
Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.
When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas, as shown in the sketch that follows.
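A minimal sketch of the scale-down and scale-up approach, assuming a machine set named <machine_set_name> that you want to return to three replicas:
$ oc scale --replicas=0 machineset <machine_set_name> -n openshift-machine-api
$ oc scale --replicas=3 machineset <machine_set_name> -n openshift-machine-api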
OKD version 4.16.3 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters.
You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds.
For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation.
| You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the previous machine set deployed. | 
You have access to the cluster with cluster-admin privileges.
You installed the OpenShift CLI (oc).
You created a Capacity Reservation group.
For more information, see the Microsoft Azure documentation Create a Capacity Reservation.
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      providerSpec:
        value:
          capacityReservationGroupID: <capacity_reservation_group> (1)
# ...
| 1 | Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. |
To verify machine deployment, list the machines that the machine set created by running the following command:
$ oc get machines.machine.openshift.io \
  -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
where <machine_set_name> is the name of the compute machine set.
In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation.
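You can also check reservation usage on the Azure side. A sketch using the Azure CLI, assuming you know the reservation group name and its resource group:
$ az capacity reservation group show \
    --name <capacity_reservation_group> \
    --resource-group <resource_group>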
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider.
The following table lists the validated instance types:
| vmSize | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture | 
|---|---|---|---|
| Standard_NC24s_v3 | V100 | 4 | x86 |
| Standard_NC4as_T4_v3 | T4 | 1 | x86 |
| ND A100 v4 | A100 | 8 | x86 |
| By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above. | 
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.
$ oc get machineset -n openshift-machine-api
NAME                              DESIRED   CURRENT   READY   AVAILABLE   AGE
myclustername-worker-centralus1   1         1         1       1           6h9m
myclustername-worker-centralus2   1         1         1       1           6h9m
myclustername-worker-centralus3   1         1         1       1           6h9m
Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
$ oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml
View the content of the machineset:
$ cat machineset-azure.yaml
machineset-azure.yaml file
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  annotations:
    machine.openshift.io/GPU: "0"
    machine.openshift.io/memoryMb: "16384"
    machine.openshift.io/vCPU: "4"
  creationTimestamp: "2023-02-06T14:08:19Z"
  generation: 1
  labels:
    machine.openshift.io/cluster-api-cluster: myclustername
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
  name: myclustername-worker-centralus1
  namespace: openshift-machine-api
  resourceVersion: "23601"
  uid: acd56e0c-7612-473a-ae37-8704f34b80de
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: myclustername
      machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: myclustername
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
    spec:
      lifecycleHooks: {}
      metadata: {}
      providerSpec:
        value:
          acceleratedNetworking: true
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          diagnostics: {}
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest
            sku: ""
            version: ""
          kind: AzureMachineProviderSpec
          location: centralus
          managedIdentity: myclustername-identity
          metadata:
            creationTimestamp: null
          networkResourceGroup: myclustername-rg
          osDisk:
            diskSettings: {}
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: myclustername
          resourceGroup: myclustername-rg
          spotVMOptions: {}
          subnet: myclustername-worker-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_D4s_v3
          vnet: myclustername-vnet
          zone: "1"
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
Make a copy of the machineset-azure.yaml file by running the following command:
$ cp machineset-azure.yaml machineset-azure-gpu.yaml
Update the following fields in machineset-azure-gpu.yaml:
Change .metadata.name to a name containing gpu.
Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3.
machineset-azure-gpu.yaml file
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  annotations:
    machine.openshift.io/GPU: "1"
    machine.openshift.io/memoryMb: "28672"
    machine.openshift.io/vCPU: "4"
  creationTimestamp: "2023-02-06T20:27:12Z"
  generation: 1
  labels:
    machine.openshift.io/cluster-api-cluster: myclustername
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
  name: myclustername-nc4ast4-gpu-worker-centralus1
  namespace: openshift-machine-api
  resourceVersion: "166285"
  uid: 4eedce7f-6a57-4abe-b529-031140f02ffa
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: myclustername
      machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: myclustername
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
    spec:
      lifecycleHooks: {}
      metadata: {}
      providerSpec:
        value:
          acceleratedNetworking: true
          apiVersion: machine.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          diagnostics: {}
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest
            sku: ""
            version: ""
          kind: AzureMachineProviderSpec
          location: centralus
          managedIdentity: myclustername-identity
          metadata:
            creationTimestamp: null
          networkResourceGroup: myclustername-rg
          osDisk:
            diskSettings: {}
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: myclustername
          resourceGroup: myclustername-rg
          spotVMOptions: {}
          subnet: myclustername-worker-subnet
          userDataSecret:
            name: worker-user-data
          vmSize: Standard_NC4as_T4_v3
          vnet: myclustername-vnet
          zone: "1"
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:
$ diff machineset-azure.yaml machineset-azure-gpu.yaml
14c14
<   name: myclustername-worker-centralus1
---
>   name: myclustername-nc4ast4-gpu-worker-centralus1
23c23
<       machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
---
>       machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
30c30
<         machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1
---
>         machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1
67c67
<           vmSize: Standard_D4s_v3
---
>           vmSize: Standard_NC4as_T4_v3
Create the GPU-enabled compute machine set from the definition file by running the following command:
$ oc create -f machineset-azure-gpu.yaml
machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones.
$ oc get machineset -n openshift-machine-api
NAME                                               DESIRED   CURRENT   READY   AVAILABLE   AGE
clustername-n6n4r-nc4ast4-gpu-worker-centralus1    1         1         1       1           122m
clustername-n6n4r-worker-centralus1                1         1         1       1           8h
clustername-n6n4r-worker-centralus2                1         1         1       1           8h
clustername-n6n4r-worker-centralus3                1         1         1       1           8h
View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.
$ oc get machines -n openshift-machine-api
NAME                                                PHASE     TYPE                   REGION      ZONE   AGE
myclustername-master-0                              Running   Standard_D8s_v3        centralus   2      6h40m
myclustername-master-1                              Running   Standard_D8s_v3        centralus   1      6h40m
myclustername-master-2                              Running   Standard_D8s_v3        centralus   3      6h40m
myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Running   Standard_NC4as_T4_v3   centralus   1      21m
myclustername-worker-centralus1-rbh6b               Running   Standard_D4s_v3        centralus   1      6h38m
myclustername-worker-centralus2-dbz7w               Running   Standard_D4s_v3        centralus   2      6h38m
myclustername-worker-centralus3-p9b8c               Running   Standard_D4s_v3        centralus   3      6h38m
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OKD role.
$ oc get nodes
NAME                                                STATUS   ROLES                  AGE     VERSION
myclustername-master-0                              Ready    control-plane,master   6h39m   v1.29.4
myclustername-master-1                              Ready    control-plane,master   6h41m   v1.29.4
myclustername-master-2                              Ready    control-plane,master   6h39m   v1.29.4
myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Ready    worker                 14m     v1.29.4
myclustername-worker-centralus1-rbh6b               Ready    worker                 6h29m   v1.29.4
myclustername-worker-centralus2-dbz7w               Ready    worker                 6h29m   v1.29.4
myclustername-worker-centralus3-p9b8c               Ready    worker                 6h31m   v1.29.4
View the list of compute machine sets:
$ oc get machineset -n openshift-machine-api
NAME                                   DESIRED   CURRENT   READY   AVAILABLE   AGE
myclustername-worker-centralus1        1         1         1       1           8h
myclustername-worker-centralus2        1         1         1       1           8h
myclustername-worker-centralus3        1         1         1       1           8h
Create the GPU-enabled compute machine set from the definition file by running the following command:
$ oc create -f machineset-azure-gpu.yaml
View the list of compute machine sets:
$ oc get machineset -n openshift-machine-api
NAME                                          DESIRED   CURRENT   READY   AVAILABLE   AGE
myclustername-nc4ast4-gpu-worker-centralus1   1         1         1       1           121m
myclustername-worker-centralus1               1         1         1       1           8h
myclustername-worker-centralus2               1         1         1       1           8h
myclustername-worker-centralus3               1         1         1       1           8h
View the machine set you created by running the following command:
$ oc get machineset -n openshift-machine-api | grep gpu
The MachineSet replica count is set to 1 so a new Machine object is created automatically.
myclustername-nc4ast4-gpu-worker-centralus1   1         1         1       1           121m
View the Machine object that the machine set created by running the following command:
$ oc -n openshift-machine-api get machines | grep gpu
myclustername-nc4ast4-gpu-worker-centralus1-w9bqn   Running   Standard_NC4as_T4_v3   centralus   1      21m
| There is no need to specify a namespace for the node. The node definition is cluster scoped. |
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OKD.
Install the Node Feature Discovery Operator from OperatorHub in the OKD console.
After installing the NFD Operator from OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
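If you prefer the CLI to the console, the Operator can also be installed with OLM objects. This is a minimal sketch; the channel and catalog source are assumptions that vary by cluster and Operator version:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nfd
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  targetNamespaces:
  - openshift-nfd
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nfd
  namespace: openshift-nfd
spec:
  channel: stable          # assumption: check the channels available in your catalog
  name: nfd                # package name in the catalog
  source: redhat-operators # assumption: substitute your cluster's catalog source
  sourceNamespace: openshift-marketplace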
Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
NAME                                       READY    STATUS     RESTARTS     AGE
nfd-controller-manager-8646fcbb65-x5qgk    2/2      Running    7 (8h ago)   1d
Browse to the installed Operator in the console and select Create Node Feature Discovery.
Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OKD nodes for hardware resources and catalogue them.
After a successful build, verify that an NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
NAME                                       READY   STATUS      RESTARTS        AGE
nfd-controller-manager-8646fcbb65-x5qgk    2/2     Running     7 (8h ago)      12d
nfd-master-769656c4cb-w9vrv                1/1     Running     0               12d
nfd-worker-qjxb2                           1/1     Running     3 (3d14h ago)   12d
nfd-worker-xtz9b                           1/1     Running     5 (3d14h ago)   12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Roles: worker
feature.node.kubernetes.io/pci-1013.present=true
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-1d0f.present=true
10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.
Have an existing Microsoft Azure cluster where the Machine API is operational.
Add the following to the providerSpec field:
providerSpec:
  value:
    acceleratedNetworking: true (1)
    vmSize: <azure-vm-size> (2)
| 1 | This line enables Accelerated Networking. |
| 2 | Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation. |
To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.
On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.