The ClusterAutoscaler adjusts the size of an OpenShift Container Platform cluster to meet
its current deployment needs. It uses declarative, Kubernetes-style arguments to
provide infrastructure management that does not rely on objects of a specific
cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated
with a particular namespace.
The ClusterAutoscaler increases the size of the cluster when there are Pods
that failed to schedule on any of the current nodes due to insufficient
resources or when another node is necessary to meet deployment needs. The
ClusterAutoscaler does not increase the cluster resources beyond the limits
that you specify.
Note: Ensure that the maxNodesTotal value in the ClusterAutoscaler definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.
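For reference, a ClusterAutoscaler definition might look like the following sketch. The resource limits and scale-down timings are illustrative values, not recommendations, and should be adjusted to your cluster.

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                 # the ClusterAutoscaler resource must be named "default"
spec:
  podPriorityThreshold: -10     # Pods below this priority never trigger a scale-up
  resourceLimits:
    maxNodesTotal: 24           # must cover control plane machines plus the maximum compute machines
    cores:
      min: 8
      max: 128
    memory:                     # GiB
      min: 4
      max: 256
  scaleDown:
    enabled: true
    delayAfterAdd: 10m          # illustrative timings
    delayAfterDelete: 5m
    delayAfterFailure: 30s
    unneededTime: 5m
```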
The ClusterAutoscaler decreases the size of the cluster when some nodes are
consistently not needed for a significant period, such as when a node has low
resource use and all of its important Pods can fit on other nodes.
If the following types of Pods are present on a node, the ClusterAutoscaler
will not remove the node:
- Pods with restrictive PodDisruptionBudgets (PDBs).
- Kube-system Pods that do not run on the node by default.
- Kube-system Pods that do not have a PDB or have a PDB that is too restrictive.
- Pods that are not backed by a controller object such as a Deployment, ReplicaSet, or StatefulSet.
- Pods with local storage.
- Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
- Pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation, unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation. A sketch of this annotation follows the list.
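As a sketch of the last item, the eviction annotation is set in the Pod metadata. The Pod name and image below are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # placeholder name
  annotations:
    # tells the ClusterAutoscaler not to evict this Pod, so its node is not removed
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```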
If you configure the ClusterAutoscaler, additional usage restrictions apply:
- Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system Pods.
- Specify resource requests for your Pods.
- If you have to prevent Pods from being deleted too quickly, configure appropriate PDBs. A sketch of a PDB follows this list.
- Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
- Do not run additional node group autoscalers, especially the ones offered by your cloud provider.
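As a sketch of a PDB for the third item, the following keeps a minimum number of replicas available during voluntary evictions. The name, label, and count are placeholders.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb         # placeholder name
spec:
  minAvailable: 2               # keep at least two matching Pods running during voluntary evictions
  selector:
    matchLabels:
      app: example-app          # placeholder label; must match the Pods to protect
```

Note that an overly strict PDB, for example one whose minAvailable equals the replica count, can block scale-down entirely, which is why the earlier list calls out restrictive PDBs.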
The Horizontal Pod Autoscaler (HPA) and the ClusterAutoscaler modify cluster
resources in different ways. The HPA changes the number of replicas of a
Deployment or ReplicaSet based on the current CPU load.
If the load increases, the HPA creates new replicas, regardless of the amount
of resources available to the cluster.
If there are not enough resources, the ClusterAutoscaler adds resources so that
the HPA-created Pods can run.
If the load decreases, the HPA stops some replicas. If this action causes some
nodes to be underutilized or completely empty, the ClusterAutoscaler deletes
the unnecessary nodes.
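To make this interaction concrete, here is a sketch of an HPA that scales a Deployment on CPU utilization; the Deployment name and thresholds are placeholders, and the autoscaling/v2 API is assumed to be available.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app            # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # add replicas when average CPU use exceeds 75% of requests
```

If the new replicas cannot all be scheduled on the existing nodes, the ClusterAutoscaler adds nodes, within its configured limits, so that the pending Pods can run.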
The ClusterAutoscaler takes Pod priorities into account. The Pod Priority and
Preemption feature enables scheduling Pods based on priorities if the cluster
does not have enough resources, but the ClusterAutoscaler ensures that the
cluster has resources to run all Pods. To honor the intention of both features,
the ClusterAutoscaler includes a priority cutoff function. You can use this cutoff to
schedule "best-effort" Pods, which do not cause the ClusterAutoscaler to
increase resources but instead run only when spare resources are available.
Pods with priority lower than the cutoff value do not cause the cluster to scale
up or prevent the cluster from scaling down. No new nodes are added to run the
Pods, and nodes running these Pods might be deleted to free resources.
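The cutoff corresponds to the spec.podPriorityThreshold field shown in the ClusterAutoscaler sketch above (-10 there). A PriorityClass with a value below that threshold marks Pods as best-effort for scaling purposes; the class name and value here are illustrative.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: best-effort-batch        # placeholder name
value: -20                       # below the podPriorityThreshold of -10 in the sketch above
globalDefault: false
description: "Runs only on spare capacity and never causes the cluster to add nodes."
```

Pods that reference this class through priorityClassName wait for spare capacity instead of triggering a scale-up, and nodes that run only such Pods remain candidates for removal.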