By default, on node startup, OKD automatically calculates and reserves a portion of the CPU and memory resources for use by the underlying node components, such as the kubelet and kube-proxy, and the remaining system components, such as sshd and NetworkManager. Review the information in this section to determine whether these automatic settings are appropriate for your cluster.
You can modify the CPU and memory resources for these node and system components, as needed, by creating a KubeletConfig custom resource (CR).
Note: If you updated your cluster from a version earlier than 4.21, automatic allocation of system resources is disabled by default. To enable the feature, delete the …
CPU and memory resources reserved for node components in OKD are based on two node settings: kube-reserved and system-reserved. These settings are automatically configured on node startup, or you can set these values manually as needed.
| Setting | Description |
|---|---|
| kube-reserved | This setting is not used with OKD. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. |
| system-reserved | This setting identifies the resources to reserve for the node components and system components, such as CRI-O and the kubelet. The default settings depend on the OKD and Machine Config Operator versions. Confirm the default values for your versions before relying on them. |
If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node’s capacity as it was before the introduction of allocatable resources.
Note: Any CPUs specifically reserved using the …
By default, OKD uses a script on each worker node to automatically determine the optimal system-reserved CPU and memory resources for nodes. The script runs on node startup and uses the following calculations by default.
The memory reservation is weighted. For smaller nodes, OKD reserves a higher percentage of memory. For larger nodes, OKD reserves a smaller percentage of the remaining capacity.
OKD uses the following calculations to determine how much memory to reserve for node and system components:
- 25% of the first 4 GiB of memory
- 20% of the next 4 GiB of memory (up to 8 GiB)
- 10% of the next 8 GiB of memory (up to 16 GiB)
- 6% of the next 112 GiB of memory (up to 128 GiB)
- 2% of any memory above 128 GiB
For example, on a node with 16 GiB of memory, OKD reserves 2.6 GiB for node and system components, leaving approximately 13.4 GiB for workloads.
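The tiered memory calculation can be sketched as a small shell function. This is an illustrative approximation of the behavior described above, not the actual sizing script that OKD runs:

```shell
# Approximate the system-reserved memory for a node, in GiB, using the
# tiered percentages described above. Illustrative only; the real OKD
# script may differ in rounding and units.
reserved_memory_gib() {
  awk -v m="$1" 'BEGIN {
    r = 0
    t = (m < 4) ? m : 4;     r += 0.25 * t; m -= t   # 25% of first 4 GiB
    t = (m < 4) ? m : 4;     r += 0.20 * t; m -= t   # 20% of next 4 GiB
    t = (m < 8) ? m : 8;     r += 0.10 * t; m -= t   # 10% of next 8 GiB
    t = (m < 112) ? m : 112; r += 0.06 * t; m -= t   # 6% of next 112 GiB
    if (m > 0) r += 0.02 * m                         # 2% above 128 GiB
    printf "%.1f", r
  }'
}

reserved_memory_gib 16   # prints 2.6, matching the 16 GiB example
```

Each tier consumes up to its capacity of the remaining memory before the next tier applies, which is what makes the reservation percentage shrink as nodes grow.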
OKD uses the following logic to determine how much CPU to reserve for node and system components:
- OKD starts with a base allocation of 60 millicores (0.06 CPU) for the first core.
- It then adds 12 millicores (0.012 CPU) for each additional core beyond the first.
- The result is compared against a minimum floor of 0.5 CPU (500 millicores). If the calculated value is less than 0.5 CPU, the system enforces a reservation of 0.5 CPU.
For example, on a 4-core node the calculated value is 96 millicores, which is below the floor, so 0.5 vCPU is reserved for node and system components, leaving 3.5 vCPUs for workloads. Note that 1000 millicores is equal to 1 CPU/vCPU.
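The CPU logic can be sketched the same way; again, this is an illustrative approximation, not OKD's actual sizing script:

```shell
# Approximate the system-reserved CPU for a node, in millicores:
# 60m for the first core, 12m per additional core, floored at 500m (0.5 CPU).
reserved_cpu_millicores() {
  cores=$1
  mc=$(( 60 + 12 * (cores - 1) ))
  [ "$mc" -lt 500 ] && mc=500   # enforce the 0.5 CPU minimum floor
  echo "$mc"
}

reserved_cpu_millicores 4   # prints 500: the calculated 96m is below the floor
```

The floor dominates until roughly 38 cores, after which the incremental formula takes over.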
An allocated amount of a resource is computed based on the following formula:
[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]
Note: The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability, because the Allocatable value is enforced for pods at the node level.
If Allocatable is negative, it is set to 0.
Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary. For more information, use the "Node metrics data" link in the Additional resources section.
The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use.
The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons.
Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved.
Note: Enforcing …
If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling.
You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to capacity minus eviction-hard. For this reason, resources set aside as a buffer for eviction before reaching out-of-memory conditions are not available for pods.
The following example illustrates the impact of node allocatable for memory:

- Node capacity is 32Gi
- --system-reserved is 3Gi
- --eviction-hard is set to 100Mi
For this node, the effective node allocatable value is 28.9Gi. If the node and system components use all of their reservation, the memory available for pods is 28.9Gi, and the kubelet evicts pods when overall pod memory use exceeds this threshold.
If you enforce node allocatable, 28.9Gi, with top-level cgroups, then pods can never exceed 28.9Gi. Evictions are not performed unless system daemons consume more than 3.1Gi of memory.
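The arithmetic in this example can be checked with a one-liner (the 100Mi eviction threshold is rounded to 0.1Gi, as the text does):

```shell
# [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]
# Example values: 32Gi capacity, 3Gi system-reserved, 100Mi (~0.1Gi) eviction-hard.
awk 'BEGIN { printf "%.1fGi\n", 32 - 3 - 0.1 }'   # prints 28.9Gi
```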
If system daemons do not use up all of their reservation, pods in the above example would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS in this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods, setting its limit to Node Allocatable + Eviction Hard Thresholds. In that case, the node evicts pods whenever they consume more than 28.9Gi of memory, and if eviction does not occur in time, a pod is OOM killed when pods consume 29Gi of memory.
You can review the following information to learn how to limit the number of processes running on your nodes. Configuring an appropriate number of processes can help keep the nodes in your cluster running efficiently.
A process identifier (PID) is a unique identifier assigned by the Linux kernel to each process or thread currently running on a system. The number of processes that can run simultaneously on a system is limited to 4,194,304 by the Linux kernel. This number might also be affected by limited access to other system resources such as memory, CPU, and disk space.
In OKD, consider these two supported limits for process ID (PID) usage before you schedule work on your cluster:
Maximum number of PIDs per pod.
The default value is 4,096 in OKD 4.11 and later. This value is controlled by the podPidsLimit parameter set on the node.
You can view the current PID limit on a node by running the following command in a chroot environment:
sh-5.1# cat /etc/kubernetes/kubelet.conf | grep -i pids
"podPidsLimit": 4096,
You can change the podPidsLimit by using a KubeletConfig object. See "Creating a KubeletConfig CR to edit kubelet parameters".
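For illustration, a KubeletConfig that raises the per-pod PID limit might look like the following sketch, modeled on the systemReserved example later in this document; the metadata name and the chosen limit are placeholders:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-pod-pids-limit        # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    podPidsLimit: 8192            # raise the per-pod PID limit from the default 4096
```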
Containers inherit the podPidsLimit value from the parent pod, so the kernel enforces the lower of the two limits. For example, if the container PID limit is set to the maximum, but the pod PID limit is 4096, the PID limit of each container in the pod is confined to 4096.
Maximum number of PIDs per node.
The default value depends on node resources. In OKD, this value is controlled by the systemReserved parameter in a kubelet configuration, which reserves PIDs on each node based on the total resources of the node. For more information, see "Allocating resources for nodes in an OKD cluster".
When a pod exceeds the allowed maximum number of PIDs per pod, the pod might stop functioning correctly and might be evicted from the node. See the Kubernetes documentation for eviction signals and thresholds for more information.
When a node exceeds the allowed maximum number of PIDs per node, the node can become unstable because new processes cannot be assigned PIDs. If existing processes cannot complete without creating additional processes, the entire node can become unusable and require a reboot. This situation can result in data loss, depending on the processes and applications being run. Customer administrators and Red Hat Site Reliability Engineering are notified when this threshold is reached, and a "Worker node is experiencing PIDPressure" warning appears in the cluster logs.
You can review the following information to learn about the considerations involved in allowing a high maximum number of processes to run on your nodes. Configuring an appropriate number of processes can help keep the nodes in your cluster running efficiently.
You can increase the value for podPidsLimit from the default of 4,096 to a maximum of 16,384. Changing this value might incur downtime for applications, because changing the podPidsLimit requires rebooting the affected node.
If you are running a large number of pods per node, and you have a high podPidsLimit value on your nodes, you risk exceeding the PID maximum for the node.
To find the maximum number of pods that you can run simultaneously on a single node without exceeding the PID maximum for the node, divide 3,650,000 by your podPidsLimit value. For example, if your podPidsLimit value is 16,384, and you expect the pods to use close to that number of process IDs, you can safely run 222 pods on a single node.
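The division in that example works out as follows (integer division, matching the safe pod count the text cites):

```shell
# 3,650,000 available PIDs divided by a podPidsLimit of 16,384
echo $(( 3650000 / 16384 ))   # prints 222
```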
Note: Memory, CPU, and available storage can also limit the maximum number of pods that can run simultaneously, even when the podPidsLimit value would allow more.
As an administrator, you can manually set system-reserved CPU and memory resources for your nodes. Setting these values ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components.
By default, OKD uses a script on each worker node to automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values. The script runs on node startup.
However, you can manually set these values by using a KubeletConfig custom resource (CR) that includes a set of <resource_type>=<resource_quantity> pairs (for example, cpu=200m,memory=512Mi). You must use the spec.autoSizingReserved: false parameter in the KubeletConfig CR to override the default OKD behavior of automatically setting the systemReserved values.
For memory and ephemeral-storage, you specify the resource quantity in units of bytes, such as 200Ki, 50Mi, or 5Gi. For the cpu type, you specify the resource quantity in units of cores, such as 200m, 0.5, or 1; note that 1000 millicores is equal to 1 CPU/vCPU. By default, the system-reserved CPU is 500m and the system-reserved memory is 1Gi.
Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command:
Create a custom resource (CR) for your configuration change.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
name: set-allocatable
spec:
autoSizingReserved: false
machineConfigPoolSelector:
matchLabels:
pools.operator.machineconfiguration.openshift.io/worker: ""
kubeletConfig:
systemReserved:
cpu: 1000m
memory: 4Gi
ephemeral-storage: 50Mi
#...
- metadata.name: Specifies a name for the CR.
- spec.autoSizingReserved: Specify false to override the default OKD behavior of automatically setting the systemReserved values.
- spec.machineConfigPoolSelector.matchLabels: Specifies a label from the machine config pool.
- spec.kubeletConfig.systemReserved: Specifies the resources to reserve for the node components and system components.
Run the following command to create the CR:
$ oc create -f <file_name>.yaml
Log in to a node you configured by entering the following command:
$ oc debug node/<node_name>
Set /host as the root directory within the debug shell:
# chroot /host
View the /etc/node-sizing.env file:
SYSTEM_RESERVED_MEMORY=4Gi
SYSTEM_RESERVED_CPU=1000m
SYSTEM_RESERVED_ES=50Mi
The kubelet uses the system-reserved values in the /etc/node-sizing.env file.