Recommended infrastructure practices

This topic provides recommended performance and scalability practices for infrastructure in OpenShift Container Platform.

Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age and on the number of nodes and objects in the cluster, because these factors can increase the number of metrics and time series in Prometheus. The following infrastructure node size recommendations are based on the results of the cluster-density testing detailed in the Control plane node sizing section, where the monitoring stack and the default Ingress Controller were moved to these nodes.

Number of worker nodes   Cluster density, or number of namespaces   CPU cores   Memory (GB)
27                       500                                        4           24
120                      1000                                       8           48
252                      4000                                       16          128
501                      4000                                       32          128

In general, three infrastructure nodes are recommended per cluster.
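Infrastructure nodes are identified by the node-role.kubernetes.io/infra label. As an illustrative sketch, with <node-name> standing in for one of your worker nodes, the label can be applied and then verified with commands such as:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""
    $ oc get nodes -l node-role.kubernetes.io/infra

Labeling a node only makes it eligible for infrastructure workloads; each component must still be configured with a matching node selector, as shown for the monitoring stack later in this topic.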

These sizing recommendations should be used as a guideline. Prometheus is a highly memory-intensive application; its resource usage depends on various factors, including the number of nodes and objects, the Prometheus metrics scraping interval, the number of metrics or time series, and the age of the cluster. In addition, router resource usage can also be affected by the number of routes and the amount and type of inbound requests.

These recommendations apply only to infrastructure nodes that host the Monitoring, Ingress, and Registry infrastructure components installed during cluster creation.
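For the Ingress component, the default Ingress Controller is typically moved to infrastructure nodes by setting a node placement policy on its IngressController custom resource. The following is a minimal sketch, assuming the default IngressController in the openshift-ingress-operator namespace and nodes labeled with node-role.kubernetes.io/infra:

    # Sketch: schedule the default router pods onto infrastructure nodes
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/infra: ""

In practice, a change of this shape is applied to the existing default IngressController, for example with oc edit ingresscontroller default -n openshift-ingress-operator, rather than created as a new resource.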

In OpenShift Container Platform 4.14, half of a CPU core (500 millicores) is reserved by the system by default, unlike in OpenShift Container Platform 3.11 and earlier versions. This reservation influences the stated sizing recommendations.

OpenShift Container Platform exposes metrics that the Cluster Monitoring Operator (CMO) collects and stores in the Prometheus-based monitoring stack. As an administrator, you can view dashboards for system resources, containers, and component metrics in the OpenShift Container Platform web console by navigating to Observe → Dashboards.
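To see the monitoring stack components that the Cluster Monitoring Operator manages on a running cluster, you can list the pods in the openshift-monitoring namespace, for example:

    $ oc -n openshift-monitoring get pods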

Red Hat performed various tests for different scale sizes.

  • The following Prometheus storage requirements are not prescriptive and should be used as a reference. Higher resource consumption might be observed in your cluster depending on workload activity and resource density, including the number of pods, containers, routes, or other resources exposing metrics collected by Prometheus.

  • You can configure the size-based data retention policy to suit your storage requirements.
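The size-based data retention policy mentioned in the preceding note is set through the prometheusK8s section of the cluster-monitoring-config config map, the same config map used in the procedure later in this topic. The following is a minimal sketch; the 100GB limit is only a placeholder value:

    apiVersion: v1
    kind: ConfigMap
    data:
      config.yaml: |
        prometheusK8s:
          # Placeholder value: cap the on-disk time-series database size;
          # adjust to the storage capacity available to Prometheus
          retentionSize: 100GB
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring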

Table 1. Prometheus database storage requirements based on number of nodes/pods in the cluster
Number of nodes   Number of pods (2 containers per pod)   Prometheus storage growth per day   Prometheus storage growth per 15 days   Network (per tsdb chunk)
50                1800                                    6.3 GB                              94 GB                                   16 MB
100               3600                                    13 GB                               195 GB                                  26 MB
150               5400                                    19 GB                               283 GB                                  36 MB
200               7200                                    25 GB                               375 GB                                  46 MB

Approximately 20 percent of the expected size was added as overhead to ensure that the storage requirements do not exceed the calculated value.

The above calculation is for the default OpenShift Container Platform Cluster Monitoring Operator.

CPU utilization has a minor impact: the ratio is approximately 1 core out of 40 per 50 nodes and 1800 pods.

Recommendations for OpenShift Container Platform

  • Use at least two infrastructure (infra) nodes.

  • Use at least three openshift-container-storage nodes with solid-state (SSD) or non-volatile memory express (NVMe) drives.

You can increase the storage capacity for the Prometheus component in the cluster monitoring stack.

Procedure

To increase the storage capacity for Prometheus:

  1. Create a YAML configuration file, cluster-monitoring-config.yaml. For example:

    apiVersion: v1
    kind: ConfigMap
    data:
      config.yaml: |
        prometheusK8s:
          retention: {{PROMETHEUS_RETENTION_PERIOD}} (1)
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          volumeClaimTemplate:
            spec:
              storageClassName: {{STORAGE_CLASS}} (2)
              resources:
                requests:
                  storage: {{PROMETHEUS_STORAGE_SIZE}} (3)
        alertmanagerMain:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
          volumeClaimTemplate:
            spec:
              storageClassName: {{STORAGE_CLASS}} (2)
              resources:
                requests:
                  storage: {{ALERTMANAGER_STORAGE_SIZE}} (4)
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    1 The default value of Prometheus retention is PROMETHEUS_RETENTION_PERIOD=15d. Units are measured in time using one of these suffixes: s, m, h, d.
    2 The storage class for your cluster.
    3 A typical value is PROMETHEUS_STORAGE_SIZE=2000Gi. Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
    4 A typical value is ALERTMANAGER_STORAGE_SIZE=20Gi. Storage values can be a plain integer or a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
  2. Add values for the retention period, storage class, and storage sizes.

  3. Save the file.

  4. Apply the changes by running:

    $ oc create -f cluster-monitoring-config.yaml
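
After the config map is applied, the Cluster Monitoring Operator rolls out the affected components. As a quick verification sketch, you can confirm that the Prometheus and Alertmanager pods are scheduled on the infra nodes and that the requested persistent volume claims are bound:

    $ oc -n openshift-monitoring get pods -o wide
    $ oc -n openshift-monitoring get pvc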