This topic summarizes the limits for objects in OpenShift Container Platform.
In most cases, exceeding these thresholds results in lower overall performance. It does not necessarily mean that the cluster will fail.
Some of the limits represented in this topic are given for the largest possible cluster. For smaller clusters, the limits are proportionally lower.
There are many factors that influence the stated thresholds, including the etcd version or storage data format.
Limit Type | 3.7 Limit | 3.9 Limit | 3.10 Limit |
---|---|---|---|
Number of nodes [1] | 2,000 | 2,000 | 2,000 |
Number of pods [2] | 120,000 | 120,000 | 150,000 |
Number of pods per node | 250 | 250 | 250 |
Number of pods per core | 10 is the default value. The maximum supported value is the number of pods per node. | 10 is the default value. The maximum supported value is the number of pods per node. | There is no default value. The maximum supported value is the number of pods per node. |
Number of namespaces | 10,000 | 10,000 | 10,000 |
Number of builds: Pipeline Strategy | N/A | 10,000 (Default pod RAM 512Mi) | 10,000 (Default pod RAM 512Mi) |
Number of pods per namespace [3] | 3,000 | 3,000 | 3,000 |
Number of services [4] | 10,000 | 10,000 | 10,000 |
Number of services per namespace | N/A | N/A | 5,000 |
Number of back-ends per service | 5,000 | 5,000 | 5,000 |
Number of deployments per namespace [3] | 2,000 | 2,000 | 2,000 |
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. |
While planning your environment, determine how many pods are expected to fit per node:
Maximum Pods per Cluster / Expected Pods per Node = Total Number of Nodes
The number of pods expected to fit on a node is dependent on the application itself. Consider the application’s memory, CPU, and storage requirements.
If you want to scope your cluster for 2200 pods, you need at least nine nodes, assuming a maximum of 250 pods per node:
2200 / 250 = 8.8
If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:
2200 / 20 = 110
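The arithmetic above can be sketched in a few lines of Python. This is only an illustration of the formula; the `required_nodes` helper is hypothetical and not part of OpenShift Container Platform:

```python
import math

def required_nodes(total_pods, max_pods_per_node):
    """Minimum node count that can hold total_pods, rounding up
    because a fractional result (8.8 nodes) means one more full node (9)."""
    return math.ceil(total_pods / max_pods_per_node)

print(required_nodes(2200, 250))  # 9 nodes  (2200 / 250 = 8.8)
print(2200 / 20)                  # 110.0 pods per node if you run 20 nodes
```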
Consider an example application environment:
Pod Type | Pod Quantity | Max Memory | CPU Cores | Persistent Storage |
---|---|---|---|---|
apache | 100 | 500MB | 0.5 | 1GB |
node.js | 200 | 1GB | 1 | 1GB |
postgresql | 100 | 1GB | 2 | 10GB |
JBoss EAP | 100 | 1GB | 1 | 1GB |
Extrapolated requirements: 550 CPU cores, 450GB RAM, and 1.4TB storage.
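If you want to reproduce these totals, a short sketch like the following sums the resources per pod type. It is illustrative only; the per-pod figures are copied from the table above, with memory and storage expressed in GB:

```python
# (quantity, max memory in GB, CPU cores, persistent storage in GB) per pod type,
# taken from the example application environment above
pod_types = {
    "apache":     (100, 0.5, 0.5,  1),
    "node.js":    (200, 1.0, 1.0,  1),
    "postgresql": (100, 1.0, 2.0, 10),
    "JBoss EAP":  (100, 1.0, 1.0,  1),
}

total_cpu  = sum(qty * cpu  for qty, _, cpu, _  in pod_types.values())
total_ram  = sum(qty * mem  for qty, mem, _, _  in pod_types.values())
total_disk = sum(qty * disk for qty, _, _, disk in pod_types.values())

print(total_cpu, total_ram, total_disk)  # 550.0 450.0 1400 -> 550 cores, 450GB RAM, 1.4TB storage
```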
Instance size for nodes can be adjusted up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered.
Node Type | Quantity | CPUs | RAM (GB) |
---|---|---|---|
Nodes (option 1) | 100 | 4 | 16 |
Nodes (option 2) | 50 | 8 | 32 |
Nodes (option 3) | 25 | 16 | 64 |
Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that do not allow for overcommitment; that memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio.
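As a rough, illustrative check of that ratio (assuming CPU is the overcommitted resource and using node option 1 from the table above; the exact accounting behind the 30 percent figure is not spelled out in this topic), you could compare the requested resources against the aggregate node capacity:

```python
# Node option 1: 100 nodes with 4 CPUs and 16 GB RAM each
nodes, cpus_per_node, ram_per_node_gb = 100, 4, 16
# Extrapolated requirements from the example application environment
required_cpu, required_ram_gb = 550, 450

capacity_cpu = nodes * cpus_per_node    # 400 cores
capacity_ram = nodes * ram_per_node_gb  # 1,600 GB

# Fraction of requested CPU that exceeds physical capacity (one way to express overcommit)
cpu_overcommit = (required_cpu - capacity_cpu) / required_cpu
print(f"CPU overcommit: {cpu_overcommit:.0%}")               # 27% -> roughly 30 percent
print(f"RAM headroom: {capacity_ram - required_ram_gb} GB")  # memory requests fit without overcommit
```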