Creating a compute machine set on OpenStack

You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
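
For a cluster that runs on RHOSP, the expected output is:

OpenStack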

Sample YAML for a compute machine set custom resource on RHOSP

This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <role> (2)
    machine.openshift.io/cluster-api-machine-type: <role> (2)
  name: <infrastructure_id>-<role> (3)
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (2)
        machine.openshift.io/cluster-api-machine-type: <role> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
    spec:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group> (4)
          kind: OpenstackProviderSpec
          networks: (5)
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_id> (1)
          primarySubnet: <rhosp_subnet_UUID> (6)
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-worker (1)
          serverMetadata:
            Name: <infrastructure_id>-worker (1)
            openshiftClusterID: <infrastructure_id> (1)
          tags:
          - openshiftClusterID=<infrastructure_id> (1)
          trunk: true
          userDataSecret:
            name: worker-user-data (2)
          availabilityZone: <optional_openstack_availability_zone>
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
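The output is the cluster's infrastructure name, for example:
agl030519-vplxk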
2 Specify the node label to add.
3 Specify the infrastructure ID and node label.
4 To set a server group policy for the MachineSet, enter the UUID that is returned when you create a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended.
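If you do not already have a server group, one way to create a group and capture its UUID is with the OpenStack CLI; the group name here is arbitrary, and the soft-anti-affinity policy might require a recent compute API microversion in your RHOSP environment:
$ openstack server group create --policy soft-anti-affinity <server_group_name> -f value -c id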
5 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value.
6 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.
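If you know the subnet only by name, you can look up its UUID with the OpenStack CLI, for example:
$ openstack subnet show <subnet_name> -f value -c id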

Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP

If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology.

This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.kubernetes.io/<node_role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <node_role> is the node label to add.

The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list.
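
If you need the UUIDs of these networks and their subnets for the port definitions, one way to find them is the OpenStack CLI. The network name radio here matches the assumption in this sample; repeat the commands for uplink:

$ openstack network show radio -f value -c id
$ openstack subnet list --network radio -c ID -c Name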

Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP".

An example compute machine set that uses SR-IOV networks
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_id>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
    spec:
      metadata:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group>
          kind: OpenstackProviderSpec
          networks:
            - subnets:
              - UUID: <machines_subnet_UUID>
          ports:
            - networkID: <radio_network_UUID> (1)
              nameSuffix: radio
              fixedIPs:
                - subnetID: <radio_subnet_UUID> (2)
              tags:
                - sriov
                - radio
              vnicType: direct (3)
              portSecurity: false (4)
            - networkID: <uplink_network_UUID> (1)
              nameSuffix: uplink
              fixedIPs:
                - subnetID: <uplink_subnet_UUID> (2)
              tags:
                - sriov
                - uplink
              vnicType: direct (3)
              portSecurity: false (4)
          primarySubnet: <machines_subnet_UUID>
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-<node_role>
          serverMetadata:
            Name: <infrastructure_id>-<node_role>
            openshiftClusterID: <infrastructure_id>
          tags:
          - openshiftClusterID=<infrastructure_id>
          trunk: true
          userDataSecret:
            name: <node_role>-user-data
          availabilityZone: <optional_openstack_availability_zone>
1 Enter a network UUID for each port.
2 Enter a subnet UUID for each port.
3 The value of the vnicType parameter must be direct for each port.
4 The value of the portSecurity parameter must be false for each port.

You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it.

After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter:

$ oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true"
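
To confirm that the label is applied, you can list the nodes that carry it, for example:

$ oc get nodes -l feature.node.kubernetes.io/network-sriov.capable=true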

Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.

You can enable trunking for each port.

Optionally, you can add tags to ports as part of their tags lists.

Sample YAML for SR-IOV deployments where port security is disabled

To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This approach differs from the standard SR-IOV compute machine set because ports that are created through the network and subnet interfaces are automatically configured with security groups and allowed address pairs, which is not possible when port security is disabled.

Ports that you define for machines subnets require:

  • Allowed address pairs for the API and ingress virtual IP ports

  • The compute security group

  • Attachment to the machines network and subnet

Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP".
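
The VIP addresses and the compute security group are existing values from your deployment rather than new resources. One way to retrieve them is to read the apiVIPs and ingressVIPs values from the install-config.yaml file that you used at installation time, and to look up the security group UUID by its default name, which typically follows the <infrastructure_id>-worker pattern:

$ openstack security group show <infrastructure_id>-worker -f value -c id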

An example compute machine set that uses SR-IOV networks and has port security disabled
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_id>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role>
    spec:
      metadata: {}
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          kind: OpenstackProviderSpec
          ports:
            - allowedAddressPairs: (1)
              - ipAddress: <API_VIP_port_IP>
              - ipAddress: <ingress_VIP_port_IP>
              fixedIPs:
                - subnetID: <machines_subnet_UUID> (2)
              nameSuffix: nodes
              networkID: <machines_network_UUID> (2)
              securityGroups:
                  - <compute_security_group_UUID> (3)
            - networkID: <SRIOV_network_UUID>
              nameSuffix: sriov
              fixedIPs:
                - subnetID: <SRIOV_subnet_UUID>
              tags:
                - sriov
              vnicType: direct
              portSecurity: false
          primarySubnet: <machines_subnet_UUID>
          serverMetadata:
            Name: <infrastructure_id>-<node_role>
            openshiftClusterID: <infrastructure_id>
          tags:
          - openshiftClusterID=<infrastructure_id>
          trunk: false
          userDataSecret:
            name: worker-user-data
1 Specify allowed address pairs for the API and ingress ports.
2 Specify the machines network and subnet.
3 Specify the compute machines security group.

Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix>. The nameSuffix field is required in port definitions.

You can enable trunking for each port.

Optionally, you can add tags to ports as part of their tags lists.

Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites
  • Deploy an OpenShift Container Platform cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure
  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <infrastructure_id> and <role> parameter values.

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api
      Example output
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml
      Example output
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        name: <infrastructure_id>-<role> (2)
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: (3)
              ...
      1 The cluster infrastructure ID.
      2 A default node label.

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      3 The values in the providerSpec section of the compute machine set CR are platform-specific. For more information about providerSpec parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml
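    If the request succeeds, the output confirms the created resource, for example:
    machineset.machine.openshift.io/<machineset_name> created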
Verification
  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api
    Example output
    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
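
  • Optionally, check the machines that the new compute machine set manages by running the following command. The label selector assumes the machine set name that you set in the CR:

    $ oc get machines -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machineset=<infrastructure_id>-<role>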