Placing pods on specific nodes using node selectors

A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods.

For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels.
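
For example, a pod whose spec includes the following node selector is eligible only for nodes that carry both labels. The keys and values shown here are arbitrary examples, not required names:

  spec:
    nodeSelector:
      region: east
      type: user-node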

If you are using node affinity and node selectors in the same pod configuration, see the important considerations below.

Using node selectors to control pod placement

You can use node selector labels on pods to control where the pod is scheduled.

With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels.

You can add labels to a node or MachineConfig, but the labels will not persist if the node or machine goes down. Adding the label to the MachineSet ensures that new nodes or machines will have the label.

To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet, DaemonSet, or StatefulSet. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec.

You cannot add a node selector to an existing scheduled pod.

Prerequisites

If you want to add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-7pwkc pod is controlled by the router-default-66d5cf9464 ReplicaSet:

$ oc describe pod router-default-66d5cf9464-7pwkc

Name:               router-default-66d5cf9464-7pwkc
Namespace:          openshift-ingress

....

Controlled By:      ReplicaSet/router-default-66d5cf9464

The web console lists the controlling object under ownerReferences in the pod YAML:

  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: router-default-66d5cf9464
      uid: d81dd094-da26-11e9-a48a-128e7edf0312
      controller: true
      blockOwnerDeletion: true
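
From the command line, you can read the same owner reference with a JSONPath query; this example uses the pod and namespace shown above:

$ oc get pod router-default-66d5cf9464-7pwkc -n openshift-ingress \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

ReplicaSet/router-default-66d5cf9464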
Procedure
  1. Add the desired label to your nodes:

    $ oc label <resource> <name> <key>=<value>

    For example, to label a node:

    $ oc label nodes ip-10-0-131-14.ec2.internal type=user-node region=east

    The labels are applied to the node:

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-131-14.ec2.internal
      selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal
      uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74
      resourceVersion: '478704'
      creationTimestamp: '2019-06-10T14:46:08Z'
      labels:
        beta.kubernetes.io/os: linux
        failure-domain.beta.kubernetes.io/zone: us-east-1a
        node.openshift.io/os_version: '4.2'
        node-role.kubernetes.io/worker: ''
        failure-domain.beta.kubernetes.io/region: us-east-1
        node.openshift.io/os_id: rhcos
        beta.kubernetes.io/instance-type: m4.large
        kubernetes.io/hostname: ip-10-0-131-14
        region: east (1)
        beta.kubernetes.io/arch: amd64
        type: user-node (1)
    ....
    1 Specify the label(s) you will add to the node.
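
    You can verify that the node carries the new labels by filtering on them; this check assumes the labels shown in the preceding example:

    $ oc get nodes -l type=user-node,region=east

    The labeled node appears in the output.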

    Alternatively, you can add the label to a MachineSet:

    $ oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    
    ....
    
    spec:
      replicas: 2
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-cluster: ci-ln-89dz2y2-d5d6b-4995x
          machine.openshift.io/cluster-api-machine-role: worker
          machine.openshift.io/cluster-api-machine-type: worker
          machine.openshift.io/cluster-api-machineset: ci-ln-89dz2y2-d5d6b-4995x-worker-us-east-1a
      template:
        metadata:
          creationTimestamp: null
          labels:
            machine.openshift.io/cluster-api-cluster: ci-ln-89dz2y2-d5d6b-4995x
            machine.openshift.io/cluster-api-machine-role: worker
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: ci-ln-89dz2y2-d5d6b-4995x-worker-us-east-1a
        spec:
          metadata:
            creationTimestamp: null
            labels:
              region: east (1)
              type: user-node (1)
    ....
    1 Specify the label(s) you will add to the node.
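
    Labels added to the MachineSet template apply to machines created after the change. As a sketch, assuming the MachineSet name from the example above, you can scale up the MachineSet and then confirm that the new nodes carry the labels:

    $ oc scale --replicas=3 machineset abc612-msrtw-worker-us-east-1c -n openshift-machine-api
    $ oc get nodes -l type=user-node,region=east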
  2. Add the desired node selector to a pod:

    • To add a node selector to existing and future pods, add a node selector to the controlling object for the pods:

      For example:

      kind: ReplicaSet
      
      ....
      
      spec:
      
      ....
      
        template:
          metadata:
            creationTimestamp: null
            labels:
              ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
              pod-template-hash: 66d5cf9464
          spec:
            nodeSelector:
              beta.kubernetes.io/os: linux
              node-role.kubernetes.io/worker: ''
              type: user-node (1)
      1 Add the desired node selector.
    • For a new pod, you can add the selector to the pod specification directly:

      apiVersion: v1
      kind: Pod
      
      ...
      
      spec:
        nodeSelector:
          <key>: <value>
      
      ...

      For example:

      apiVersion: v1
      kind: Pod
      
      ....
      
      spec:
        nodeSelector:
          region: east
          type: user-node
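
With either option, after the pods are recreated or created you can confirm where they were scheduled. This is a minimal check using the namespace from the ReplicaSet example above; the NODE column should show a node that carries the matching labels:

$ oc get pods -n openshift-ingress -o wide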

If you are using node selectors and node affinity in the same pod configuration, note the following:

  • If you configure both nodeSelector and nodeAffinity, both conditions must be satisfied for the pod to be scheduled onto a candidate node.

  • If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied.

  • If you specify multiple matchExpressions associated with nodeSelectorTerms, then the pod can be scheduled onto a node only if all matchExpressions are satisfied.
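
For example, the following pod spec, sketched with placeholder pod, container, and image names, combines both mechanisms. The pod is scheduled only onto a node that has the type=user-node label and also matches the region In (east) affinity expression:

  apiVersion: v1
  kind: Pod
  metadata:
    name: affinity-example           # placeholder name
  spec:
    nodeSelector:
      type: user-node
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: region
              operator: In
              values:
              - east
    containers:
    - name: example                  # placeholder container name
      image: registry.example.com/example:latest   # placeholder image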