Configuring the log visualizer

OpenShift Container Platform uses Kibana to display the log data collected by cluster logging.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

Configuring CPU and memory limits

The cluster logging components allow for adjustments to both the CPU and memory limits.

Procedure
  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance -n openshift-logging

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        elasticsearch:
          nodeCount: 2
          resources: (1)
            limits:
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 2Gi
          storage:
            storageClassName: "gp2"
            size: "200G"
          redundancyPolicy: "SingleRedundancy"
      visualization:
        type: "kibana"
        kibana:
          resources: (2)
            limits:
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          proxy:
            resources: (2)
              limits:
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 100Mi
          replicas: 2
      curation:
        type: "curator"
        curator:
          resources: (3)
            limits:
              memory: 200Mi
            requests:
              cpu: 200m
              memory: 200Mi
          schedule: "*/10 * * * *"
      collection:
        logs:
          type: "fluentd"
          fluentd:
            resources: (4)
              limits:
                memory: 736Mi
              requests:
                cpu: 200m
                memory: 736Mi
    1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
    2 Specify the CPU and memory limits and requests for the log visualizer as needed.
    3 Specify the CPU and memory limits and requests for the log curator as needed.
    4 Specify the CPU and memory limits and requests for the log collector as needed.
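
After you save the CR, the Cluster Logging Operator applies the new values to the managed deployments. As a quick check, you can inspect the resource settings on the running Kibana pod. This sketch assumes the Kibana pod carries the component=kibana label; adjust the selector if your pods are labeled differently:

    $ oc -n openshift-logging get pods -l component=kibana \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources}{"\n"}{end}'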

Scaling redundancy for the log visualizer nodes

You can scale the pod that hosts the log visualizer for redundancy.

Procedure
  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
    
    ....
    
    spec:
      visualization:
        type: "kibana"
        kibana:
          replicas: 1 (1)
    1 Specify the number of Kibana nodes.
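
After the change is applied, you can confirm the number of running Kibana replicas. This sketch assumes the Operator names the visualizer deployment kibana and labels its pods with component=kibana, which you might need to adjust for your environment:

    $ oc -n openshift-logging get deployment kibana
    $ oc -n openshift-logging get pods -l component=kibana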

Using tolerations to control the log visualizer pod placement

You can control which node the log visualizer pod runs on, and prevent other workloads from using those nodes, by applying tolerations to the pods.

You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the Kibana pod can run on that node.
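
Before choosing a key:value pair, it can help to list the taints that already exist in the cluster so that you pick one that no other pods tolerate. A minimal sketch using a jsonpath query over the node objects:

    $ oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'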

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

Procedure
  1. Use the following command to add a taint to a node where you want to schedule the log visualizer pod:

    $ oc adm taint nodes <node-name> <key>=<value>:<effect>

    For example:

    $ oc adm taint nodes node1 kibana=node:NoExecute

    This example places a taint on node1 that has key kibana, value node, and taint effect NoExecute. You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match.

  2. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod:

      visualization:
        type: "kibana"
        kibana:
          tolerations:
          - key: "kibana"  (1)
            operator: "Exists"  (2)
            effect: "NoExecute"  (3)
            tolerationSeconds: 6000 (4)
    1 Specify the key that you added to the node.
    2 Specify the Exists operator so that the toleration matches any taint with this key and effect, regardless of the taint's value.
    3 Specify the NoExecute effect.
    4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted.

This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1.
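
To verify the placement, you can check the taint on the node and see where the Kibana pod was scheduled. This sketch reuses node1 from the example above and again assumes the component=kibana pod label:

    $ oc describe node node1 | grep Taints
    $ oc -n openshift-logging get pods -l component=kibana -o wide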