Recommended etcd practices

Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd’s consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies.

Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads on the control-plane nodes that are I/O sensitive or intensive and share the same underlying I/O infrastructure.

In terms of latency, run etcd on top of a block device that can sequentially write at least 50 IOPS of 8000 bytes each, that is, with a latency of 10 ms. Keep in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio.
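
For example, the following fio invocation is a minimal sketch of such a benchmark; the target directory, file size, and job name are placeholders, and the job writes temporary test data into the chosen directory:

  $ fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=100m --bs=8000 --name=etcd-fsync-test

In the output, check the reported fdatasync latency percentiles and confirm that the 99th percentile stays below the 10 ms (or 2 ms) target.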

To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.

The load on etcd arises from static factors, such as the number of nodes and pods, and dynamic factors, including changes in endpoints due to pod autoscaling, pod restarts, job executions, and other workload-related events. To accurately size your etcd setup, you must analyze the specific requirements of your workload. Consider the number of nodes, pods, and other relevant factors that impact the load on etcd.

The following hard drive practices provide optimal etcd performance:

  • Use dedicated etcd drives. Avoid drives that communicate over the network, such as iSCSI. Do not place log files or other heavy workloads on etcd drives.

  • Prefer drives with low latency to support fast read and write operations.

  • Prefer high-bandwidth writes for faster compactions and defragmentation.

  • Prefer high-bandwidth reads for faster recovery from failures.

  • Use solid state drives as a minimum selection. Prefer NVMe drives for production environments.

  • Use server-grade hardware for increased reliability.

Avoid NAS or SAN setups and spinning drives. Ceph Rados Block Device (RBD) and other types of network-attached storage can result in unpredictable network latency. To provide fast storage to etcd nodes at scale, use PCI passthrough to pass NVMe devices directly to the nodes.

Always benchmark by using utilities such as fio. You can use such utilities to continuously monitor the cluster performance as the cluster grows.

Avoid using the Network File System (NFS) protocol or other network based file systems.

Some key metrics to monitor on a deployed OKD cluster are the 99th percentile (p99) of the etcd disk write-ahead log (WAL) fsync duration and the number of etcd leader changes. Use Prometheus to track these metrics.
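
For example, the following PromQL queries are a sketch of how to track these two metrics; the rate and lookback windows are arbitrary choices:

  histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
  increase(etcd_server_leader_changes_seen_total[1h])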

The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members.

To validate the hardware for etcd before or after you create the OKD cluster, you can use fio.

Prerequisites
  • Container runtimes such as Podman or Docker are installed on the machine that you’re testing.

  • Data is written to the /var/lib/etcd path.

Procedure
  • Run fio and analyze the results:

    • If you use Podman, run this command:

      $ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf
    • If you use Docker, run this command:

      $ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf

The output reports whether the disk is fast enough to host etcd by checking whether the 99th percentile of the fsync metric captured from the run is less than 10 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows:

  • etcd_disk_wal_fsync_duration_seconds_bucket metric reports etcd’s WAL fsync duration

  • etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration

  • etcd_server_leader_changes_seen_total metric reports the leader changes

Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OKD cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.

The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms.

You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues.

The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OKD 4.14 container storage.

This encoded script only supports device names for the following device types:

  • SCSI or SATA: /dev/sd*

  • Virtual device: /dev/vd*

  • NVMe: /dev/nvme*[0-9]*n*

Limitations
  • When the new disk is attached to the cluster, the etcd database is part of the root mount. It is not part of the secondary disk or the intended disk when the primary node is recreated. As a result, the primary node will not create a separate /var/lib/etcd mount.

Prerequisites
  • You have a backup of your cluster’s etcd data.

  • You have installed the OpenShift CLI (oc).

  • You have access to the cluster with cluster-admin privileges.

  • Add additional disks before uploading the machine configuration.

  • The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role]. This applies to a controller, worker, or a custom pool.
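
    As an optional check, you can inspect the pool's machine config selector to confirm the label that it expects. The following command is a sketch that assumes the master pool:

    $ oc get machineconfigpool master -o jsonpath='{.spec.machineConfigSelector}{"\n"}'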

This procedure does not move parts of the root file system, such as /var/, to another disk or partition on an installed node.

This procedure is not supported when using control plane machine sets.

Procedure
  1. Attach the new disk to the cluster and verify that the disk is detected in the node by running the lsblk command in a debug shell:

    $ oc debug node/<node_name>
    # lsblk

    Note the device name of the new disk reported by the lsblk command.

  2. Create the following script and name it etcd-find-secondary-device.sh:

    #!/bin/bash
    set -uo pipefail

    for device in <device_type_glob>; do (1)
      # blkid exits with status 2 when the device has no recognizable
      # filesystem or label, which identifies it as the unused secondary disk.
      /usr/sbin/blkid "${device}" &> /dev/null
      if [ $? == 2 ]; then
        echo "secondary device found ${device}"
        echo "creating filesystem for etcd mount"
        mkfs.xfs -L var-lib-etcd -f "${device}" &> /dev/null
        udevadm settle
        touch /etc/var-lib-etcd-mount
        exit
      fi
    done
    echo "Couldn't find secondary block device!" >&2
    exit 77
    1 Replace <device_type_glob> with a shell glob for your block device type. For SCSI or SATA drives, use /dev/sd*; for virtual drives, use /dev/vd*; for NVMe drives, use /dev/nvme*[0-9]*n*.
  3. Create a base64-encoded string from the etcd-find-secondary-device.sh script and note its contents:

    $ base64 -w0 etcd-find-secondary-device.sh
  4. Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 98-var-lib-etcd
    spec:
      config:
        ignition:
          version: 3.1.0
        storage:
          files:
            - path: /etc/find-secondary-device
              mode: 0755
              contents:
                source: data:text/plain;charset=utf-8;base64,<encoded_etcd_find_secondary_device_script> (1)
        systemd:
          units:
            - name: find-secondary-device.service
              enabled: true
              contents: |
                [Unit]
                Description=Find secondary device
                DefaultDependencies=false
                After=systemd-udev-settle.service
                Before=local-fs-pre.target
                ConditionPathExists=!/etc/var-lib-etcd-mount
    
                [Service]
                RemainAfterExit=yes
                ExecStart=/etc/find-secondary-device
    
                RestartForceExitStatus=77
    
                [Install]
                WantedBy=multi-user.target
            - name: var-lib-etcd.mount
              enabled: true
              contents: |
                [Unit]
                Before=local-fs.target
    
                [Mount]
                What=/dev/disk/by-label/var-lib-etcd
                Where=/var/lib/etcd
                Type=xfs
                TimeoutSec=120s
    
                [Install]
                RequiredBy=local-fs.target
            - name: sync-var-lib-etcd-to-etcd.service
              enabled: true
              contents: |
                [Unit]
                Description=Sync etcd data if new mount is empty
                DefaultDependencies=no
                After=var-lib-etcd.mount var.mount
                Before=crio.service
    
                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member
                ExecStart=/usr/sbin/setsebool -P rsync_full_access 1
                ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/
                ExecStart=/usr/sbin/semanage fcontext -a -t container_var_lib_t '/var/lib/etcd(/.*)?'
                ExecStart=/usr/sbin/setsebool -P rsync_full_access 0
                TimeoutSec=0
    
                [Install]
                WantedBy=multi-user.target graphical.target
            - name: restorecon-var-lib-etcd.service
              enabled: true
              contents: |
                [Unit]
                Description=Restore recursive SELinux security contexts
                DefaultDependencies=no
                After=var-lib-etcd.mount
                Before=crio.service
    
                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecStart=/sbin/restorecon -R /var/lib/etcd/
                TimeoutSec=0
    
                [Install]
                WantedBy=multi-user.target graphical.target
    1 Replace <encoded_etcd_find_secondary_device_script> with the encoded script contents that you noted.
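
  5. Apply the machine configuration and wait for the Machine Config Operator to roll it out to the control plane nodes. The following commands are a minimal sketch; the rollout reboots each control plane node in turn and can take several minutes:

    $ oc create -f etcd-mc.yml
    $ oc get machineconfigpool master -w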
Verification steps
  • Run the grep /var/lib/etcd /proc/mounts command in a debug shell for the node to ensure that the disk is mounted:

    $ oc debug node/<node_name>
    # grep -w "/var/lib/etcd" /proc/mounts
    Example output
    /dev/sdb /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.

Monitor these key metrics:

  • etcd_server_quota_backend_bytes, which is the current quota limit

  • etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction

  • etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
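
For example, the following PromQL expression is a sketch that combines the first and third metrics in this list to report how much of the quota is currently consumed, as a percentage:

  (etcd_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes) * 100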

Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.

History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

Defragmentation occurs automatically, but you can also trigger it manually.

Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user.

The etcd Operator automatically defragments disks. No manual intervention is needed.

Verify that the defragmentation process is successful by viewing one of these logs:

  • etcd logs

  • cluster-etcd-operator pod

  • operator status error log

Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the next running instance or the component resumes work again after the restart.

Example log output for successful defragmentation
etcd member has been defragmented: <member_name>, memberID: <member_id>
Example log output for unsuccessful defragmentation
failed defrag on member: <member_name>, memberID: <member_id>: <error_message>
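
To locate these messages, one option is to search the operator logs. This is a hedged sketch that assumes the cluster-etcd-operator runs as the etcd-operator Deployment in the openshift-etcd-operator namespace:

  $ oc logs -n openshift-etcd-operator deployment/etcd-operator | grep -i defrag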

A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:

  • When etcd uses more than 50% of its available space for more than 10 minutes

  • When etcd is actively using less than 50% of its total database size for more than 10 minutes

You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024

Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

Follow this procedure to defragment etcd data on each etcd member.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

Procedure
  1. Determine which etcd member is the leader, because the leader should be defragmented last.

    1. Get the list of etcd pods:

      $ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
      Example output
      etcd-ip-10-0-159-225.example.redhat.com                3/3     Running     0          175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>           <none>
      etcd-ip-10-0-191-37.example.redhat.com                 3/3     Running     0          173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>           <none>
      etcd-ip-10-0-199-170.example.redhat.com                3/3     Running     0          176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>           <none>
    2. Choose a pod and run the following command to determine which etcd member is the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table
      Example output
      Defaulting container name to etcdctl.
      Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.5.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.

  2. Defragment an etcd member.

    1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
    2. Unset the ETCDCTL_ENDPOINTS environment variable:

      sh-4.4# unset ETCDCTL_ENDPOINTS
    3. Defragment the etcd member:

      sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag
      Example output
      Finished defragmenting etcd member[https://localhost:2379]

      If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

    4. Verify that the database size was reduced:

      sh-4.4# etcdctl endpoint status -w table --cluster
      Example output
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.5.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.5.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        | (1)
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.5.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

    5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

      Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

  3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

    1. Check if there are any NOSPACE alarms:

      sh-4.4# etcdctl alarm list
      Example output
      memberID:12345678912345678912 alarm:NOSPACE
    2. Clear the alarms:

      sh-4.4# etcdctl alarm disarm
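
      To confirm that no alarms remain active, you can run the alarm list command from the previous sub-step again; empty output means that all alarms are cleared:

      sh-4.4# etcdctl alarm list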

You can set the control plane hardware speed to "Standard", "Slower", or the default, which is "".

The default setting allows the system to decide which speed to use. This value enables upgrades from versions where this feature does not exist, as the system can select values from previous versions.

By selecting one of the other values, you are overriding the default. If you see many leader elections due to timeouts or missed heartbeats and your system is set to "" or "Standard", set the hardware speed to "Slower" to make the system more tolerant to the increased latency.

Tuning etcd latency tolerances is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To change the hardware speed tolerance for etcd, complete the following steps.

Prerequisites
  • You have edited the cluster instance to enable TechPreviewNoUpgrade features. For more information, see "Understanding feature gates" in the Additional resources.
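
    One way to enable this feature set, shown here only as a hedged sketch (see "Understanding feature gates" for the supported procedure), is to set TechPreviewNoUpgrade on the cluster FeatureGate resource. Be aware that enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates:

    $ oc patch featuregate/cluster --type=merge -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}'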

Procedure
  1. Check to see what the current value is by entering the following command:

    $ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
    Example output
    Control Plane Hardware Speed:  <VALUE>

    If the output is empty, the field has not been set and should be considered as the default ("").

  2. Change the value by entering the following command. Replace <value> with one of the valid values: "", "Standard", or "Slower":

    $ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "<value>"}}'

    The following table indicates the heartbeat interval and leader election timeout for each profile. These values are subject to change.

    Profile      ETCD_HEARTBEAT_INTERVAL       ETCD_LEADER_ELECTION_TIMEOUT
    ""           Varies depending on platform  Varies depending on platform
    Standard     100                           1000
    Slower       500                           2500

  3. Review the output:

    Example output
    etcd.operator.openshift.io/cluster patched

    If you enter any value besides the valid values, error output is displayed. For example, if you entered "Faster" as the value, the output is as follows:

    Example output
    The etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value: "Faster": supported values: "", "Standard", "Slower"
  4. Verify that the value was changed by entering the following command:

    $ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
    Example output
    Control Plane Hardware Speed:  ""
  5. Wait for etcd pods to roll out:

    $ oc get pods -n openshift-etcd -w

    The following output shows the expected entries for master-0. Before you continue, wait until all masters show a status of 4/4 Running.

    Example output
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Pending             0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Pending             0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     ContainerCreating   0          0s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     ContainerCreating   0          1s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           1/1     Running             0          2s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          34s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          36s
    installer-9-ci-ln-qkgs94t-72292-9clnd-master-0           0/1     Completed           0          36s
    etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0            0/1     Running             0          26m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Terminating         0          11m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Terminating         0          11m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Pending             0          0s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Init:1/3            0          1s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     Init:2/3            0          2s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  0/4     PodInitializing     0          3s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  3/4     Running             0          4s
    etcd-guard-ci-ln-qkgs94t-72292-9clnd-master-0            1/1     Running             0          26m
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  3/4     Running             0          20s
    etcd-ci-ln-qkgs94t-72292-9clnd-master-0                  4/4     Running             0          20s
  6. Enter the following command to review the values:

    $ oc describe -n openshift-etcd pod/<ETCD_PODNAME> | grep -e HEARTBEAT_INTERVAL -e ELECTION_TIMEOUT

    These values might not have changed from the default.

Additional resources

Understanding feature gates