Configuring Kibana

OpenShift Container Platform uses Kibana to display the log data collected by Fluentd and indexed by Elasticsearch.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

You must set cluster logging to the Unmanaged state before performing these configurations, unless otherwise noted. For more information, see Changing the cluster logging management state.

Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

For more information, see Support policy for unmanaged Operators.
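
For example, a minimal way to switch the Operator to the Unmanaged state is to patch the ClusterLogging Custom Resource. This is a sketch that assumes the default instance name in the openshift-logging project:

    $ oc patch ClusterLogging instance -n openshift-logging \
        --type merge -p '{"spec":{"managementState":"Unmanaged"}}'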

Configure Kibana CPU and memory limits

Each component specification allows for adjustments to both the CPU and memory limits.

Procedure
  1. Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas:
          resources:  (1)
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:  (2)
            resources:
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
    1 Specify the CPU and memory limits to allocate for each node.
    2 Specify the CPU and memory limits to allocate to the Kibana proxy.
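
After you save the CR, you can optionally confirm that the new limits were rolled out to the Kibana Deployment. This is a sketch that assumes the default kibana Deployment and container names created by the Cluster Logging Operator:

    $ oc get deployment kibana -n openshift-logging \
        -o jsonpath='{.spec.template.spec.containers[?(@.name=="kibana")].resources}'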

Scaling Kibana for redundancy

You can scale the Kibana deployment for redundancy.

Procedure
  1. Edit the Cluster Logging Custom Resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas: 1 (1)
    1 Specify the number of Kibana nodes.
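
After the change is saved, a quick way to verify the scale-out is to list the Kibana Pods. This sketch assumes the component=kibana label that the Operator applies to the Pods it creates:

    $ oc get pods -l component=kibana -n openshift-logging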

Using tolerations to control the Kibana Pod placement

You can control which nodes the Kibana Pods run on and prevent other workloads from using those nodes by using tolerations on the Pods.

You apply tolerations to the Kibana Pods through the Cluster Logging Custom Resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all Pods that do not tolerate the taint. Using a specific key:value pair that is not on other Pods ensures only the Kibana Pod can run on that node.
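
For reference, a taint applied with the command in the procedure below is recorded in the node object roughly as in this sketch, which reuses the example key kibana, value node, and effect NoExecute:

    spec:
      taints:
      - key: kibana
        value: node
        effect: NoExecute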

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Use the following command to add a taint to a node where you want to schedule the Kibana Pod:

    $ oc adm taint nodes <node-name> <key>=<value>:<effect>

    For example:

    $ oc adm taint nodes node1 kibana=node:NoExecute

    This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only Pods that match the taint and removes existing Pods that do not match.

  2. Edit the visualization section of the Cluster Logging Custom Resource (CR) to configure a toleration for the Kibana Pod:

      visualization:
        type: "kibana"
        kibana:
          tolerations:
          - key: "kibana"  (1)
            operator: "Exists"  (2)
            effect: "NoExecute"  (3)
            tolerationSeconds: 6000 (4)
    1 Specify the key that you added to the node.
    2 Specify the Exists operator to require the key/value/effect parameters to match.
    3 Specify the NoExecute effect.
    4 Optionally, specify the tolerationSeconds parameter to set how long a Pod can remain bound to a node before being evicted.

This toleration matches the taint created by the oc adm taint command. A Pod with this toleration would be able to schedule onto node1.
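
If you later want to allow other workloads back onto the node, you can remove the taint by appending a hyphen to the same taint specification. This sketch reuses the example node, key, value, and effect from the procedure:

    $ oc adm taint nodes node1 kibana=node:NoExecute-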

Installing the Kibana Visualize tool

Kibana’s Visualize tab enables you to create visualizations and dashboards for monitoring container logs, allowing administrator users (cluster-admin or cluster-reader) to view logs by deployment, namespace, pod, and container.

Procedure

To load dashboards and other Kibana UI objects:

  1. If necessary, get the Kibana route, which is created by default upon installation of the Cluster Logging Operator:

    $ oc get routes -n openshift-logging
    
    NAMESPACE                  NAME                       HOST/PORT                                                            PATH     SERVICES                   PORT    TERMINATION          WILDCARD
    openshift-logging          kibana                     kibana-openshift-logging.apps.openshift.com                                   kibana                     <all>   reencrypt/Redirect   None
  2. Get the name of your Elasticsearch pods.

    $ oc get pods -l component=elasticsearch
    
    NAME                                            READY   STATUS    RESTARTS   AGE
    elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k    2/2     Running   0          22h
    elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7    2/2     Running   0          22h
    elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr   2/2     Running   0          22h
  3. Create the necessary per-user configuration that this procedure requires:

    1. Log in to the Kibana dashboard as the user you want to add the dashboards to.

      https://kibana-openshift-logging.apps.openshift.com (1)
      1 Where the URL is the Kibana route.
    2. If the Authorize Access page appears, select all permissions and click Allow selected permissions.

    3. Log out of the Kibana dashboard.

  4. Run the following command from the project where the pod is located, using the name of any of your Elasticsearch pods:

    $ oc exec <es-pod> -- es_load_kibana_ui_objects <user-name>

    For example:

    $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k -- es_load_kibana_ui_objects <user-name>

The metadata of the Kibana objects, such as visualizations and dashboards, is stored in Elasticsearch with the .kibana.{user_hash} index format. You can obtain the user_hash using the userhash=$(echo -n $username | sha1sum | awk '{print $1}') command. By default, the Kibana shared_ops index mode enables all users with cluster admin roles to share the index, and this Kibana object metadata is saved to the .kibana index.

Any custom dashboard can be imported for a particular user either by using the import/export feature or by inserting the metadata into the Elasticsearch index by using the curl command.
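
As a sketch of the user_hash calculation described above, the following commands compute the per-user index name and query it for the stored objects. This assumes the es_util helper script is available in the Elasticsearch container image, reuses the example pod name from the earlier output, and uses <user-name> as a placeholder:

    $ username=<user-name>
    $ userhash=$(echo -n $username | sha1sum | awk '{print $1}')
    $ oc exec -c elasticsearch elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k -- \
        es_util --query=".kibana.$userhash/_count?pretty"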