This topic provides recommended host practices for OpenShift Container Platform.
These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN).
The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods.
When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in:
Increased CPU utilization.
Slow pod scheduling.
Potential out-of-memory scenarios, depending on the amount of memory in the node.
Exhausting the pool of IP addresses.
Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that holds a single container actually uses two containers. The second container sets up networking before the actual container starts. Therefore, a system running 10 pods actually has 20 containers running.
Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might become overloaded when there are a large number of I/O-intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload.
podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.
kubeletConfig:
  podsPerCore: 10
Setting podsPerCore to 0 disables this limit. The default is 0. podsPerCore cannot exceed maxPods.
maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node.
kubeletConfig:
  maxPods: 250
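If both limits are set for the same pool, the lower effective value wins. The following fragment is an illustrative sketch only; the numbers are placeholders, not recommendations:
kubeletConfig:
  podsPerCore: 10   # on a 4-core node this allows 4 x 10 = 40 pods
  maxPods: 250      # the effective limit is the lower of the two values, 40 in this case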
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters.
As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable.
Consider the following guidance:
Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools.
Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes.
As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet. With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the next kubelet machine config is appended with -3.
If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config.
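For example, assuming generated machine configs with the hypothetical names 99-worker-generated-kubelet-3 and 99-worker-generated-kubelet-2, you would delete them newest first; the exact names depend on your cluster:
$ oc delete mc 99-worker-generated-kubelet-3
$ oc delete mc 99-worker-generated-kubelet-2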
If you have a machine config with a kubelet-9 suffix and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs.
Example of a KubeletConfig CR:
$ oc get kubeletconfig
NAME AGE
set-max-pods 15m
Example of a KubeletConfig machine config:
$ oc get mc | grep kubelet
...
99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m
...
The following procedure is an example that shows how to configure the maximum number of pods per node on the worker nodes.
Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure.
Perform one of the following steps:
View the machine config pool:
$ oc describe machineconfigpool <name>
For example:
$ oc describe machineconfigpool worker
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: 2019-02-08T14:52:39Z
  generation: 1
  labels:
    custom-kubelet: set-max-pods (1)
1 | If a label has been added, it appears under labels. |
If the label is not present, add a key/value pair:
$ oc label machineconfigpool worker custom-kubelet=set-max-pods
View the available machine configuration objects that you can select:
$ oc get machineconfig
By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet.
Check the current value for the maximum pods per node:
$ oc describe node <node_name>
For example:
$ oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94
Look for the pods: <value> entry in the Allocatable stanza:
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 3500m
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 15341844Ki
pods: 250
Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods (1)
  kubeletConfig:
    maxPods: 500 (2)
1 | Enter the label from the machine config pool. |
2 | Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. |
The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values are sufficient if there are limited pods running on each node. It is recommended that you update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node.
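As an illustrative sketch only, QPS and burst values can be set in the same KubeletConfig CR alongside maxPods; the kubeAPIQPS and kubeAPIBurst numbers below are placeholders, not recommended values:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: 500
    kubeAPIQPS: 50
    kubeAPIBurst: 100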
Update the machine config pool for workers with the label:
$ oc label machineconfigpool worker custom-kubelet=set-max-pods
Create the KubeletConfig object:
$ oc create -f change-maxPods-cr.yaml
Verify that the KubeletConfig object is created:
$ oc get kubeletconfig
NAME AGE
set-max-pods 15m
Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes.
Verify that the changes are applied to the node:
Check on a worker node that the maxPods value changed:
$ oc describe node <node_name>
Locate the Allocatable stanza:
...
Allocatable:
attachable-volumes-gce-pd: 127
cpu: 3500m
ephemeral-storage: 123201474766
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 14225400Ki
pods: 500 (1)
...
1 | In this example, the pods parameter should report the value you set in the KubeletConfig object. |
Verify the change in the KubeletConfig object:
$ oc get kubeletconfigs set-max-pods -o yaml
This should show a status of True and type: Success, as shown in the following example:
spec:
  kubeletConfig:
    maxPods: 500
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
status:
  conditions:
  - lastTransitionTime: "2021-06-30T17:04:07Z"
    message: Success
    status: "True"
    type: Success
By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process.
Edit the worker machine config pool:
$ oc edit machineconfigpool worker
Set maxUnavailable to the value that you want:
spec:
  maxUnavailable: <node_count>
When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster.
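Alternatively, you can set the field without opening an editor. The following sketch assumes you want up to three worker nodes updating at once:
$ oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":3}}'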
The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts:
12 image streams
3 build configurations
6 builds
1 deployment with 2 pod replicas mounting two secrets each
2 deployments with 1 pod replica mounting two secrets
3 services pointing to the previous deployments
3 routes pointing to the previous deployments
10 secrets, 2 of which are mounted by the previous deployments
10 config maps, 2 of which are mounted by the previous deployments
Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB)
---|---|---|---
25 | 500 | 4 | 16
100 | 1000 | 8 | 32
250 | 4000 | 16 | 96
On a large and dense cluster with three control plane nodes, the CPU and memory usage spikes when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, or underlying infrastructure, in addition to intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates as well as the control plane Operator updates. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources.
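As a rough check on that 60% threshold, you can query per-node CPU utilization in the monitoring console with a PromQL expression similar to the following and read off the control plane instances; the 5m window is an arbitrary choice:
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))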
The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase.
Operator Lifecycle Manager (OLM) runs on the control plane nodes, and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing.
Number of namespaces | OLM memory at idle state (GB) | OLM memory with 5 user operators installed (GB)
---|---|---
500 | 0.823 | 1.7
1000 | 1.2 | 2.5
1500 | 1.7 | 3.2
2000 | 2 | 4.4
3000 | 2.7 | 5.6
4000 | 3.8 | 7.6
5000 | 4.2 | 9.02
6000 | 5.8 | 11.3
7000 | 6.6 | 12.9
8000 | 6.9 | 14.8
9000 | 8 | 17.7
10,000 | 9.9 | 21.6
You can modify the control plane node size in a running OpenShift Container Platform 4.9 cluster only for certain installation configurations. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation.
The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin.
In OpenShift Container Platform 4.9, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration.
If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use.
You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console.
You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster.
You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role.
Open the AWS console and fetch the instances for the control plane machines.
Choose one control plane machine instance.
For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd".
In the AWS console, stop the control plane machine instance.
Select the stopped instance, and click Actions → Instance Settings → Change instance type.
Change the instance to a larger type, ensuring that the type is the same base as the previous selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge.
Start the instance.
If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console.
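A minimal sketch of that update, assuming an AWS control plane Machine named <control_plane_machine_name> in the openshift-machine-api namespace:
$ oc patch machine <control_plane_machine_name> -n openshift-machine-api --type merge -p '{"spec":{"providerSpec":{"value":{"instanceType":"m6i.2xlarge"}}}}'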
Repeat this process for each control plane machine.
Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd’s consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies.
Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads on the control plane nodes.
In terms of latency, run etcd on top of a block device that can write at least 50 IOPS of 8000 bytes long sequentially. That is, with a latency of 20 ms, keep in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio.
To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
The following hard disk features provide optimal etcd performance:
Low latency to support fast read operation.
High-bandwidth writes for faster compactions and defragmentation.
High-bandwidth reads for faster recovery from failures.
Solid-state drives as a minimum selection; however, NVMe drives are preferred.
Server-grade hardware from various manufacturers for increased reliability.
RAID 0 technology for increased performance.
Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives.
Avoid NAS or SAN setups and spinning drives. Always benchmark by using utilities such as fio. Continuously monitor the cluster performance as it grows.
Avoid using the Network File System (NFS) protocol or other network based file systems.
Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics.
To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio.
Container runtimes such as Podman or Docker are installed on the machine that you’re testing.
Data is written to the /var/lib/etcd path.
Run fio and analyze the results:
If you use Podman, run this command:
$ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
If you use Docker, run this command:
$ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf
The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 20 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows:
The etcd_disk_wal_fsync_duration_seconds_bucket metric reports the etcd WAL fsync duration.
The etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration.
The etcd_server_leader_changes_seen_total metric reports the leader changes.
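For example, the 99th percentile of the WAL fsync duration can be tracked with a PromQL query along these lines; the 5m range is an arbitrary choice:
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))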
Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.
The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms.
You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues.
The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role]. This applies to a controller, worker, or a custom pool.
The node's auxiliary storage device, such as /dev/sdb, must match the sdb device name used in this file. Change this reference in all places in the file.
This procedure does not move parts of the root file system to the new disk.
The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.9 container storage.
Use the following steps to move etcd to a different device:
Create a machineconfig YAML file named etcd-mc.yml and add the following information:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-lib-etcd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Make File System on /dev/sdb
          DefaultDependencies=no
          BindsTo=dev-sdb.device
          After=dev-sdb.device var.mount
          Before=systemd-fsck@dev-sdb.service

          [Service]
          Type=oneshot
          RemainAfterExit=yes
          ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb
          TimeoutSec=0

          [Install]
          WantedBy=var-lib-containers.mount
        enabled: true
        name: systemd-mkfs@dev-sdb.service
      - contents: |
          [Unit]
          Description=Mount /dev/sdb to /var/lib/etcd
          Before=local-fs.target
          Requires=systemd-mkfs@dev-sdb.service
          After=systemd-mkfs@dev-sdb.service var.mount

          [Mount]
          What=/dev/sdb
          Where=/var/lib/etcd
          Type=xfs
          Options=defaults,prjquota

          [Install]
          WantedBy=local-fs.target
        enabled: true
        name: var-lib-etcd.mount
      - contents: |
          [Unit]
          Description=Sync etcd data if new mount is empty
          DefaultDependencies=no
          After=var-lib-etcd.mount var.mount
          Before=crio.service

          [Service]
          Type=oneshot
          RemainAfterExit=yes
          ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member
          ExecStart=/usr/sbin/setenforce 0
          ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/
          ExecStart=/usr/sbin/setenforce 1
          TimeoutSec=0

          [Install]
          WantedBy=multi-user.target graphical.target
        enabled: true
        name: sync-var-lib-etcd-to-etcd.service
      - contents: |
          [Unit]
          Description=Restore recursive SELinux security contexts
          DefaultDependencies=no
          After=var-lib-etcd.mount
          Before=crio.service

          [Service]
          Type=oneshot
          RemainAfterExit=yes
          ExecStart=/sbin/restorecon -R /var/lib/etcd/
          TimeoutSec=0

          [Install]
          WantedBy=multi-user.target graphical.target
        enabled: true
        name: restorecon-var-lib-etcd.service
Create the machine configuration by entering the following commands:
$ oc login -u ${ADMIN} -p ${ADMINPASSWORD} ${API}
... output omitted ...
$ oc create -f etcd-mc.yml
machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created
The nodes are updated and rebooted. After the reboot completes, the following events occur:
An XFS file system is created on the specified disk.
The disk mounts to /var/lib/etcd.
The content from /sysroot/ostree/deploy/rhcos/var/lib/etcd syncs to /var/lib/etcd.
A restore of SELinux labels is forced for /var/lib/etcd.
The old content is not removed.
After the etcd data is on the separate disk, update the machine configuration file, etcd-mc.yml, with the following information:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-lib-etcd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Mount /dev/sdb to /var/lib/etcd
          Before=local-fs.target
          Requires=systemd-mkfs@dev-sdb.service
          After=systemd-mkfs@dev-sdb.service var.mount

          [Mount]
          What=/dev/sdb
          Where=/var/lib/etcd
          Type=xfs
          Options=defaults,prjquota

          [Install]
          WantedBy=local-fs.target
        enabled: true
        name: var-lib-etcd.mount
Apply the modified version that removes the logic for creating and syncing the device by entering the following command:
$ oc replace -f etcd-mc.yml
The previous step prevents the nodes from rebooting.
For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
Monitor these key metrics:
etcd_server_quota_backend_bytes, which is the current quota limit
etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction
etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
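As one possible sketch, the fraction of the database that is fragmented (the figure the defragmentation controller reports in its logs) can be approximated with a PromQL expression such as:
(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes) / etcd_mvcc_db_total_size_in_bytes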
Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
Defragmentation occurs automatically, but you can also trigger it manually.
Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user.
The etcd Operator automatically defragments disks. No manual intervention is needed.
Verify that the defragmentation process is successful by viewing one of these logs:
etcd logs
cluster-etcd-operator pod
operator status error log
Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.
I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member "ip-10-0-191-150.example.redhat.com" backend store fragmented: 39.33 %, dbSize: 349138944
You can monitor the etcd_db_total_size_in_bytes metric to determine whether manual defragmentation is necessary.
You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024
Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
Follow this procedure to defragment etcd data on each etcd member.
You have access to the cluster as a user with the cluster-admin role.
Determine which etcd member is the leader, because the leader should be defragmented last.
Get the list of etcd pods:
$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none>
etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none>
etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>
Choose a pod and run the following command to determine which etcd member is the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
Defaulting container name to etcdctl.
Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.
Defragment an etcd member.
Connect to the running etcd container, passing in the name of a pod that is not the leader:
$ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
Unset the ETCDCTL_ENDPOINTS environment variable:
sh-4.4# unset ETCDCTL_ENDPOINTS
Defragment the etcd member:
sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
Finished defragmenting etcd member[https://localhost:2379]
If a timeout error occurs, increase the value for --command-timeout until the command succeeds.
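For example, a retry with a longer timeout might look like the following; the 60s value is arbitrary:
sh-4.4# etcdctl --command-timeout=60s --endpoints=https://localhost:2379 defrag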
Verify that the database size was reduced:
sh-4.4# etcdctl endpoint status -w table --cluster
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | |
| https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | (1)
| https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.
Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.
Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.
If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.
Check if there are any NOSPACE alarms:
sh-4.4# etcdctl alarm list
memberID:12345678912345678912 alarm:NOSPACE
Clear the alarms:
sh-4.4# etcdctl alarm disarm
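To confirm that the alarms are cleared, list them again; empty output means no alarms remain:
sh-4.4# etcdctl alarm list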
The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions:
Kubernetes and OpenShift Container Platform control plane services that run on masters
The default router
The integrated container image registry
The HAProxy-based Ingress Controller
The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
Cluster aggregated logging
Service brokers
Red Hat Quay
Red Hat OpenShift Container Storage
Red Hat Advanced Cluster Manager
Red Hat Advanced Cluster Security for Kubernetes
Red Hat OpenShift GitOps
Red Hat OpenShift Pipelines
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
For information on infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document.
The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:
$ oc edit configmap cluster-monitoring-config -n openshift-monitoring
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector: (1)
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoExecute
1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. |
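If you have not already labeled and tainted a node for infrastructure workloads, a sketch of doing so follows; the node name is a placeholder, and the reserved taint mirrors the tolerations shown above:
$ oc label node <node_name> node-role.kubernetes.io/infra=""
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule node-role.kubernetes.io/infra=reserved:NoExecute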
Watch the monitoring pods move to the new machines:
$ watch 'oc get pod -n openshift-monitoring -o wide'
If a component has not moved to the infra node, delete the pod with this component:
$ oc delete pod -n openshift-monitoring <pod>
The component from the deleted pod is re-created on the infra node.
You configure the registry Operator to deploy its pods to different nodes.
Configure additional machine sets in your OpenShift Container Platform cluster.
View the config/instance object:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
creationTimestamp: 2019-02-05T13:52:05Z
finalizers:
- imageregistry.operator.openshift.io/finalizer
generation: 1
name: cluster
resourceVersion: "56174"
selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
uid: 36fd3724-294d-11e9-a524-12ffeee2931b
spec:
httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
logging: 2
managementState: Managed
proxy: {}
replicas: 1
requests:
read: {}
write: {}
storage:
s3:
bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
region: us-east-1
status:
...
Edit the config/instance object:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          namespaces:
          - openshift-image-registry
          topologyKey: kubernetes.io/hostname
        weight: 100
  logLevel: Normal
  managementState: Managed
  nodeSelector: (1)
    node-role.kubernetes.io/infra: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/infra
    value: reserved
  - effect: NoExecute
    key: node-role.kubernetes.io/infra
    value: reserved
1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. |
Verify the registry pod has been moved to the infrastructure node.
Run the following command to identify the node where the registry pod is located:
$ oc get pods -o wide -n openshift-image-registry
Confirm the node has the label you specified:
$ oc describe node <node_name>
Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.
Configure additional machine sets in your OpenShift Container Platform cluster.
View the IngressController custom resource for the router Operator:
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
The command output resembles the following text:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
creationTimestamp: 2019-04-18T12:35:39Z
finalizers:
- ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
generation: 1
name: default
namespace: openshift-ingress-operator
resourceVersion: "11341"
selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
uid: 79509e05-61d6-11e9-bc55-02ce4781844a
spec: {}
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2019-04-18T12:36:15Z
status: "True"
type: Available
domain: apps.<cluster>.example.com
endpointPublishingStrategy:
type: LoadBalancerService
selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
Edit the ingresscontroller resource and change the nodeSelector to use the infra label:
$ oc edit ingresscontroller default -n openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector: (1)
      matchLabels:
        node-role.kubernetes.io/infra: ""
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/infra
      value: reserved
    - effect: NoExecute
      key: node-role.kubernetes.io/infra
      value: reserved
1 | Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. |
Confirm that the router pod is running on the infra node.
View the list of router pods and note the node name of the running pod:
$ oc get pod -n openshift-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none>
router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>
In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.
View the node status of the running pod:
$ oc get node <node_name> (1)
1 | Specify the <node_name> that you obtained from the pod list. |
NAME STATUS ROLES AGE VERSION
ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1
Because the role list includes infra, the pod is running on the correct node.
Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results of cluster maximums and control plane density focused testing.
Number of worker nodes | CPU cores | Memory (GB)
---|---|---
25 | 4 | 16
100 | 8 | 32
250 | 16 | 128
500 | 32 | 128
In general, three infrastructure nodes are recommended per cluster.
These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on an OpenShift Container Platform 4.9 cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly. These sizing recommendations are only applicable for the Prometheus, router, and registry infrastructure components, which are installed during cluster installation. Logging is a day-two operation and is not included in these recommendations.
In OpenShift Container Platform 4.9, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. This influences the stated sizing recommendations.