OKD uses Kibana to display the log data collected by OpenShift logging.
You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.
The OpenShift logging components allow you to adjust both the CPU and memory limits.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

$ oc -n openshift-logging edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
...
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      resources: (1)
        limits:
          memory: 16Gi
        requests:
          cpu: 200m
          memory: 16Gi
      storage:
        storageClassName: "gp2"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      resources: (2)
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      proxy:
        resources: (2)
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
      replicas: 2
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources: (3)
          limits:
            memory: 736Mi
          requests:
            cpu: 200m
            memory: 736Mi
(1) Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value.
(2) Specify the CPU and memory limits and requests for the log visualizer as needed.
(3) Specify the CPU and memory limits and requests for the log collector as needed.
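As an alternative to editing the CR interactively, you can apply the same resource settings with a merge patch. The following is a minimal sketch; the values mirror the Kibana settings shown above and are illustrative, not recommendations:

$ oc -n openshift-logging patch ClusterLogging instance --type merge \
    -p '{"spec":{"visualization":{"kibana":{"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}}}}}}'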
You can scale the pod that hosts the log visualizer for redundancy.
Edit the ClusterLogging custom resource (CR) in the openshift-logging project:
$ oc -n openshift-logging edit ClusterLogging instance
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
...
spec:
  visualization:
    type: "kibana"
    kibana:
      replicas: 1 (1)
(1) Specify the number of Kibana nodes.
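After saving the CR, you can verify that the number of running Kibana pods matches the replicas value. This check is a sketch that assumes the component=kibana label that OpenShift logging applies to Kibana pods:

$ oc -n openshift-logging get pods -l component=kibana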