Configuring Elasticsearch

OpenShift Container Platform uses Elasticsearch (ES) to store and organize the log data.

Some of the modifications you can make to your log store include:

  • storage for your Elasticsearch cluster;

  • how shards are replicated across data nodes in the cluster, from full replication to no replication;

  • allowing external access to Elasticsearch data.

Scaling down Elasticsearch nodes is not supported. When scaling down, Elasticsearch pods can be accidentally deleted, possibly resulting in shards not being allocated and replica shards being lost.

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits, unless you specify otherwise in the Cluster Logging Custom Resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory.
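
As a rough check of whether your current nodes can satisfy that requirement, you can list the allocatable memory on each node. A quick sketch using standard oc output options:

$ oc get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory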

Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments.

If you set the Elasticsearch Operator (EO) to unmanaged and leave the Cluster Logging Operator (CLO) as managed, the CLO will revert changes you make to the EO, as the EO is managed by the CLO.
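
If you need to confirm the current state, both custom resources expose a managementState field. A minimal sketch, assuming the default resource names (instance for the ClusterLogging CR and elasticsearch for the Elasticsearch CR):

$ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.spec.managementState}'
Managed

$ oc get elasticsearch elasticsearch -n openshift-logging -o jsonpath='{.spec.managementState}'
Managed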

Configuring Elasticsearch CPU and memory limits

Each component specification allows for adjustments to both the CPU and memory limits. You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment.

Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each Pod. Preferably you should allocate as much as possible, up to 64Gi per Pod.

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    ....
    spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            resources: (1)
              limits:
                memory: 16Gi
              requests:
                cpu: 500m
                memory: 16Gi
    1 Specify the CPU and memory limits as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments.

    If you adjust the amount of Elasticsearch CPU and memory, you must change both the request value and the limit value.

    For example:

          resources:
            limits:
              cpu: "8"
              memory: "32Gi"
            requests:
              cpu: "8"
              memory: "32Gi"

    Kubernetes generally adheres to the node CPU configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the CPU and memory you want, assuming the node has the CPU and memory available.
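
To confirm the values that were actually applied, you can inspect the elasticsearch container in one of the Elasticsearch pods. A sketch, assuming the pods carry the component=elasticsearch label set by the Elasticsearch Operator:

$ oc get pods -n openshift-logging -l component=elasticsearch \
    -o jsonpath='{.items[0].spec.containers[?(@.name=="elasticsearch")].resources}'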

Configuring Elasticsearch replication policy

You can define how Elasticsearch shards are replicated across data nodes in the cluster.

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      logStore:
        type: "elasticsearch"
        elasticsearch:
          redundancyPolicy: "SingleRedundancy" (1)
    1 Specify a redundancy policy for the shards. The change is applied upon saving the changes.
    • FullRedundancy. Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.

    • MultipleRedundancy. Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance.

    • SingleRedundancy. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. This provides better performance than MultipleRedundancy when using five or more nodes. You cannot apply this policy to deployments with a single Elasticsearch node.

    • ZeroRedundancy. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy.

The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
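
To see the replica count that the policy produces on new indices, you can reuse the cURL pattern shown later on this page. A sketch with placeholder values for the pod name and cluster IP:

$ oc exec <elasticsearch-pod> -n openshift-logging -- curl --tlsv1.2 --insecure \
    -H "Authorization: Bearer ${token}" "https://<elasticsearch-cluster-ip>:9200/_cat/indices?h=index,rep"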

Configuring Elasticsearch storage

Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance.

Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Edit the Cluster Logging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim.

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
     spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            nodeCount: 3
            storage:
              storageClassName: "gp2"
              size: "200G"

This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage.
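
After the Elasticsearch pods are redeployed, you can confirm that a Persistent Volume Claim was created and bound for each data node:

$ oc get pvc -n openshift-logging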

Configuring Elasticsearch for emptyDir storage

You can use emptyDir with Elasticsearch, which creates an ephemeral deployment in which all of a pod’s data is lost upon restart.

When using emptyDir, if Elasticsearch is restarted or redeployed, you will lose data.

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Edit the Cluster Logging CR to specify emptyDir:

     spec:
        logStore:
          type: "elasticsearch"
          elasticsearch:
            nodeCount: 3
            storage: {}

Exposing Elasticsearch as a route

By default, Elasticsearch deployed with cluster logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to Elasticsearch for those tools that access its data.

Externally, you can access Elasticsearch by creating a reencrypt route and using your OpenShift Container Platform token and the installed Elasticsearch CA certificate. Then, access an Elasticsearch node with a cURL request that contains the Authorization: Bearer ${token} header, the Elasticsearch reencrypt route, and an Elasticsearch API request.

Internally, you can access Elasticsearch using the Elasticsearch cluster IP:

You can get the Elasticsearch cluster IP using either of the following commands:

$ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging

172.30.183.229

$ oc get service elasticsearch -n openshift-logging

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   172.30.183.229   <none>        9200/TCP   22h

$ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    29  100    29    0     0    108      0 --:--:-- --:--:-- --:--:--   108
Prerequisites
  • Cluster logging and Elasticsearch must be installed.

  • You must have access to the project to be able to access the logs.

Procedure

To expose Elasticsearch externally:

  1. Change to the openshift-logging project:

    $ oc project openshift-logging
  2. Extract the CA certificate from Elasticsearch and write it to the admin-ca file:

    $ oc extract secret/elasticsearch --to=. --keys=admin-ca
    
    admin-ca
  3. Create the route for the Elasticsearch service as a YAML file:

    1. Create a YAML file with the following:

      apiVersion: route.openshift.io/v1
      kind: Route
      metadata:
        name: elasticsearch
        namespace: openshift-logging
      spec:
        host:
        to:
          kind: Service
          name: elasticsearch
        tls:
          termination: reencrypt
          destinationCACertificate: | (1)
      1 Add the Elasticsearch CA certificate or use the command in the next step. You do not have to set the spec.tls.key, spec.tls.certificate, and spec.tls.caCertificate parameters required by some reencrypt routes.
    2. Run the following command to add the Elasticsearch CA certificate to the route YAML you created:

      $ cat ./admin-ca | sed -e "s/^/      /" >> <file-name>.yaml
    3. Create the route:

      $ oc create -f <file-name>.yaml
      
      route.route.openshift.io/elasticsearch created
  4. Check that the Elasticsearch service is exposed:

    1. Get the token to be used in the request:

      $ token=$(oc whoami -t)
    2. Set the Elasticsearch route you created as an environment variable.

      $ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`
    3. To verify the route was successfully created, run the following command that accesses elasticsearch through the exposed route:

      $ curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/.operations.*/_search?size=1" | jq

      The response appears similar to the following:

        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100   944  100   944    0     0     62      0  0:00:15  0:00:15 --:--:--   204
      {
        "took": 441,
        "timed_out": false,
        "_shards": {
          "total": 3,
          "successful": 3,
          "skipped": 0,
          "failed": 0
        },
        "hits": {
          "total": 89157,
          "max_score": 1,
          "hits": [
            {
              "_index": ".operations.2019.03.15",
              "_type": "com.example.viaq.common",
              "_id": "ODdiNWIyYzAtMjg5Ni0TAtNWe3MDY1MjMzNTc3",
              "_score": 1,
              "_source": {
                "_SOURCe_MONOTONIC_TIMeSTAMP": "673396",
                "systemd": {
                  "t": {
                    "BOOT_ID": "246c34ee9cdeecb41a608e94",
                    "MACHINe_ID": "e904a0bb5efd3e36badee0c",
                    "TRANSPORT": "kernel"
                  },
                  "u": {
                    "SYSLOG_FACILITY": "0",
                    "SYSLOG_IDeNTIFIeR": "kernel"
                  }
                },
                "level": "info",
                "message": "acpiphp: Slot [30] registered",
                "hostname": "localhost.localdomain",
                "pipeline_metadata": {
                  "collector": {
                    "ipaddr4": "10.128.2.12",
                    "ipaddr6": "fe80::xx:xxxx:fe4c:5b09",
                    "inputname": "fluent-plugin-systemd",
                    "name": "fluentd",
                    "received_at": "2019-03-15T20:25:06.273017+00:00",
                    "version": "1.3.2 1.6.0"
                  }
                },
                "@timestamp": "2019-03-15T20:00:13.808226+00:00",
                "viaq_msg_id": "ODdiNWIyYzAtMYTAtNWe3MDY1MjMzNTc3"
              }
            }
          ]
        }
      }
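
Once the route responds, the same token and route can be used for other Elasticsearch API calls, for example a cluster health check. A sketch reusing the ${token} and ${routeES} variables set above:

$ curl --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/health?v"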

About Elasticsearch alerting rules

You can view these alerting rules in Prometheus.

Alert | Description | Severity
ElasticsearchClusterNotHealthy | Cluster health status has been RED for at least 2m. Cluster does not accept writes, shards may be missing or master node hasn't been elected yet. | critical
ElasticsearchClusterNotHealthy | Cluster health status has been YELLOW for at least 20m. Some shard replicas are not allocated. | warning
ElasticsearchBulkRequestsRejectionJumps | High Bulk Rejection Ratio at node in cluster. This node may not be keeping up with the indexing speed. | warning
ElasticsearchNodeDiskWatermarkReached | Disk Low Watermark Reached at node in cluster. Shards can not be allocated to this node anymore. You should consider adding more disk space to the node. | alert
ElasticsearchNodeDiskWatermarkReached | Disk High Watermark Reached at node in cluster. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. | high
ElasticsearchJVMHeapUseHigh | JVM Heap usage on the node in cluster is <value> | alert
AggregatedLoggingSystemCPUHigh | System CPU usage on the node in cluster is <value> | alert
ElasticsearchProcessCPUHigh | ES process CPU usage on the node in cluster is <value> | alert
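
If you want to inspect the exact expressions behind these alerts, they are typically delivered as PrometheusRule objects in the openshift-logging project (an assumption about your deployment; the object name can vary between releases):

$ oc get prometheusrules -n openshift-logging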