OpenShift Container Platform cluster logging is designed to be used with the default configuration, which is tuned for small to medium-sized OpenShift Container Platform clusters.
The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance and configure your cluster logging deployment.
If you want to use the default cluster logging install, you can use the sample CR directly.
If you want to customize your deployment, make changes to the sample CR as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
Configuring and Tuning Cluster Logging
You can configure your cluster logging environment by modifying the Cluster Logging Custom Resource deployed in the openshift-logging project.
You can modify any of the following components during or after installation:
- Memory and CPU
  You can adjust both the CPU and memory limits for each component by modifying the resources block with valid memory and CPU values:
spec:
  logStore:
    elasticsearch:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu: 1
          memory: 16Gi
    type: "elasticsearch"
  collection:
    logs:
      fluentd:
        resources:
          limits:
            cpu:
            memory:
          requests:
            cpu:
            memory:
      type: "fluentd"
  visualization:
    kibana:
      resources:
        limits:
          cpu:
          memory:
        requests:
          cpu:
          memory:
    type: kibana
  curation:
    curator:
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
    type: "curator"
- Elasticsearch storage
  You can configure a persistent storage class and size for the Elasticsearch cluster using the storageClassName and size parameters. The Cluster Logging Operator creates a PersistentVolumeClaim for each data node in the Elasticsearch cluster based on these parameters.
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      storage:
        storageClassName: "gp2"
        size: "200G"
This example specifies that each data node in the cluster will be bound to a PersistentVolumeClaim that requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.
Note: Omitting the storage block results in a deployment that includes ephemeral storage only.

spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      storage: {}
- Elasticsearch replication policy
  You can set the policy that defines how Elasticsearch shards are replicated across data nodes in the cluster:
  - FullRedundancy. The shards for each index are fully replicated to every data node.
  - MultipleRedundancy. The shards for each index are spread over half of the data nodes.
  - SingleRedundancy. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
  - ZeroRedundancy. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
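For example, the following fragment selects the SingleRedundancy policy for the log store; the redundancyPolicy field name matches the one used in the sample Custom Resource later in this topic:

```yaml
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy"
```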
- Curator schedule
  You can specify the schedule for Curator in cron format. For example, "30 3 * * *" runs Curator daily at 3:30 a.m.:

spec:
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
Sample modified Cluster Logging Custom Resource
The following is an example of a Cluster Logging Custom Resource modified using the options previously described.
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 2
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      replicas: 1
  curation:
    type: "curator"
    curator:
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
      schedule: "*/5 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 1Gi
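Assuming the modified Custom Resource above is saved to a file (the filename clusterlogging.yaml here is illustrative), you can create the cluster logging instance with the OpenShift CLI:

```console
$ oc create -f clusterlogging.yaml
```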