Controlling pod placement using pod topology spread constraints

You can use pod topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains.

About pod topology spread constraints

By using a pod topology spread constraint, you provide fine-grained control over the distribution of pods across failure domains to help achieve high availability and more efficient resource utilization.

OKD administrators can label nodes to provide topology information, such as regions, zones, nodes, or other user-defined domains. After these labels are set on nodes, users can then define pod topology spread constraints to control the placement of pods across these topology domains.
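For example, an administrator might apply the well-known zone key with the oc CLI. This is a minimal sketch; the node names and zone values below are hypothetical, so substitute your own:

$ oc label node node1.example.com topology.kubernetes.io/zone=us-east-1a
$ oc label node node2.example.com topology.kubernetes.io/zone=us-east-1b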

You specify which pods to group together, which topology domains they are spread among, and the acceptable skew. A constraint matches and groups only pods within the same namespace when spreading them.
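As a quick worked example of skew, assume maxSkew: 1 and three zones holding matching pods:

  zone A: 2 pods
  zone B: 2 pods
  zone C: 1 pod
  skew = 2 - 1 = 1  (constraint satisfied)

Scheduling another matching pod into zone A would raise the skew to 2, so with whenUnsatisfiable: DoNotSchedule the scheduler can place the new pod only in zone C.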

Configuring pod topology spread constraints

The following steps demonstrate how to configure pod topology spread constraints to distribute pods that match the specified labels based on their zone.

You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed.

Prerequisites
  • A cluster administrator has added the required labels to nodes.

Procedure
  1. Create a Pod spec and specify a pod topology spread constraint:

    Example pod-spec.yaml file
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
      labels:
        foo: bar
    spec:
      topologySpreadConstraints:
      - maxSkew: 1 (1)
        topologyKey: topology.kubernetes.io/zone (2)
        whenUnsatisfiable: DoNotSchedule (3)
        labelSelector: (4)
          matchLabels:
            foo: bar (5)
      containers:
      - image: "docker.io/ocpqe/hello-pod"
        name: hello-pod
    1 The maximum difference in the number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0.
    2 The key of a node label. Nodes that have this label key with an identical value are considered to be in the same topology domain.
    3 How to handle a pod that does not satisfy the spread constraint. The default, DoNotSchedule, tells the scheduler not to schedule the pod. Set this to ScheduleAnyway to schedule the pod anyway; the scheduler still gives preference to placements that honor the skew so that the cluster does not become more imbalanced.
    4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector; otherwise, no pods can be matched.
    5 Be sure that this Pod spec also sets labels that match this label selector; otherwise, the pod itself is not counted when the constraint is evaluated for future pods.
  2. Create the pod:

    $ oc create -f pod-spec.yaml
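
Verification
  • Optionally, confirm where the pod was scheduled. The wide output shows the node that the pod landed on, and you can inspect that node's labels to see its zone:

    $ oc get pod my-pod -o wide
    $ oc get node <node-name> --show-labels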

Example pod topology spread constraints

The following examples demonstrate pod topology spread constraint configurations.

Single pod topology spread constraint example

This example Pod spec defines one pod topology spread constraint. It matches pods labeled foo:bar, distributes them among zones, specifies a maximum skew of 1, and does not schedule the pod if these requirements are not met.

kind: Pod
apiVersion: v1
metadata:
  name: my-pod
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
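
In practice, you typically set spread constraints on a workload controller rather than on a bare pod. The following is a minimal sketch, not part of the original examples, that places the same constraint in a Deployment's pod template; the Deployment name and replica count are illustrative:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      # Same constraint as the bare Pod example, now applied to every replica
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            foo: bar
      containers:
      - image: "docker.io/ocpqe/hello-pod"
        name: hello-pod

Because the constraint lives in the pod template, the scheduler applies it to each replica, so the three replicas spread across zones with at most a one-pod difference between any two zones.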

Multiple pod topology spread constraints example

This example Pod spec defines two pod topology spread constraints. Both match pods labeled foo:bar, specify a maximum skew of 1, and do not schedule the pod if these requirements are not met.

The first constraint distributes pods based on a user-defined label node, and the second constraint distributes pods based on a user-defined label rack. Both constraints must be met for the pod to be scheduled.

kind: Pod
apiVersion: v1
metadata:
  name: my-pod-2
  labels:
    foo: bar
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: node
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  - maxSkew: 1
    topologyKey: rack
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - image: "docker.io/ocpqe/hello-pod"
    name: hello-pod
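
For these constraints to be satisfiable, nodes must carry both user-defined labels. As a hypothetical sketch (the node names and label values are illustrative), an administrator could label nodes as follows:

$ oc label node node1.example.com node=n1 rack=rack1
$ oc label node node2.example.com node=n2 rack=rack1
$ oc label node node3.example.com node=n3 rack=rack2

Each constraint is then evaluated independently: one spreads matching pods across the node domains, the other across the rack domains, and both must hold for the pod to be scheduled.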