Deploying cluster logging | Logging | OpenShift Container Platform 4.2

You can install cluster logging by deploying the Elasticsearch and Cluster Logging Operators. The Elasticsearch Operator creates and manages the Elasticsearch cluster used by cluster logging. The Cluster Logging Operator creates and manages the components of the logging stack.

The process for deploying cluster logging to OpenShift Container Platform involves installing the Elasticsearch Operator using the CLI and then installing the Cluster Logging Operator using the web console or the CLI, as described in the following sections.

Install the Elasticsearch Operator using the CLI

You must install the Elasticsearch Operator using the CLI, following the directions below.

Prerequisites

Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production deployments.
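
For reference, you can check the allocatable memory on your nodes and the storage classes available for Elasticsearch before you install (an illustrative sketch; node names and values differ in every cluster):

  $ oc get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory
  $ oc get storageclass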

Procedure

To install the Elasticsearch Operator using the CLI:

  1. Create a Namespace for the Elasticsearch Operator.

    1. Create a Namespace object YAML file (for example, eo-namespace.yaml) for the Elasticsearch Operator:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-operators-redhat (1)
        annotations:
          openshift.io/node-selector: ""
        labels:
          openshift.io/cluster-monitoring: "true" (2)
      1 You must specify the openshift-operators-redhat Namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat Namespace and not the openshift-operators Namespace. The openshift-operators Namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts.
      2 You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat Namespace.
    2. Create the Namespace:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f eo-namespace.yaml
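
      Optionally, confirm that the Namespace exists (an illustrative check):

      $ oc get namespace openshift-operators-redhat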
  2. Install the Elasticsearch Operator by creating the following objects:

    1. Create an OperatorGroup object YAML file (for example, eo-og.yaml) for the Elasticsearch Operator:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: openshift-operators-redhat
        namespace: openshift-operators-redhat (1)
      spec: {}
      1 You must specify the openshift-operators-redhat Namespace.
    2. Create the OperatorGroup object:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f eo-og.yaml
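
      An OperatorGroup with an empty spec selects all Namespaces, which makes the Operator available cluster-wide. Optionally, confirm that the OperatorGroup exists (an illustrative check):

      $ oc get operatorgroup -n openshift-operators-redhat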
    3. Create a Subscription object YAML file (for example, eo-sub.yaml) to subscribe a Namespace to an Operator.

      Example Subscription
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: "elasticsearch-operator"
        namespace: "openshift-operators-redhat" (1)
      spec:
        channel: "4.2" (2)
        installPlanApproval: "Automatic"
        source: "redhat-operators" (3)
        sourceNamespace: "openshift-marketplace"
        name: "elasticsearch-operator"
      1 You must specify the openshift-operators-redhat Namespace.
      2 Specify 4.2 as the channel. You can list the channels the catalog provides, as shown in the sketch after this list.
      3 Specify redhat-operators. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
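
      Before you create the Subscription, you can list the channels the catalog offers for the elasticsearch-operator package (an illustrative sketch; the output depends on your cluster and catalog):

      $ oc get packagemanifest elasticsearch-operator -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'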
    4. Create the Subscription object:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f eo-sub.yaml
    5. Change to the openshift-operators-redhat project:

      $ oc project openshift-operators-redhat
      
      Now using project "openshift-operators-redhat"
    6. Create a Role-based Access Control (RBAC) object file (for example, eo-rbac.yaml) to grant Prometheus permission to access the openshift-operators-redhat Namespace:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: prometheus-k8s
        namespace: openshift-operators-redhat
      rules:
      - apiGroups:
        - ""
        resources:
        - services
        - endpoints
        - pods
        verbs:
        - get
        - list
        - watch
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: prometheus-k8s
        namespace: openshift-operators-redhat
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: prometheus-k8s
      subjects:
      - kind: ServiceAccount
        name: prometheus-k8s
        namespace: openshift-operators-redhat
    7. Create the RBAC object:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f eo-rbac.yaml

      The Elasticsearch Operator is installed to the openshift-operators-redhat Namespace and copied to each project in the cluster.
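
      Optionally, confirm that the Role and RoleBinding were created (an illustrative check):

      $ oc get role,rolebinding -n openshift-operators-redhat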

  3. Verify the Operator installation:

    $ oc get csv --all-namespaces
    
    NAMESPACE                                               NAME                                         DISPLAY                  VERSION               REPLACES   PHASE
    default                                                 elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    kube-node-lease                                         elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    kube-public                                             elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    kube-system                                             elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    openshift-apiserver-operator                            elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    openshift-apiserver                                     elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    openshift-authentication-operator                       elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    openshift-authentication                                elasticsearch-operator.4.2.1-202002032140    Elasticsearch Operator   4.2.1-202002032140               Succeeded
    ...

    There should be an Elasticsearch Operator in each Namespace. The version number might differ from the one shown here.

Next step

Install the Cluster Logging Operator using the web console or the CLI, following the steps in the sections below.

Install the Cluster Logging Operator using the web console

You can use the OpenShift Container Platform web console to install the Cluster Logging Operator.

You cannot create a Project that starts with openshift- by using the web console or the oc new-project command. You must create the Namespace from a YAML object file and run the oc create -f <file-name>.yaml command, as shown.

Procedure

To install the Cluster Logging Operator using the OpenShift Container Platform web console:

  1. Create a Namespace for the Cluster Logging Operator. You must use the CLI to create the Namespace.

    1. Create a Namespace object YAML file (for example, clo-namespace.yaml) for the Cluster Logging Operator:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-logging (1)
        annotations:
          openshift.io/node-selector: "" (1)
        labels:
          openshift.io/cluster-logging: "true"
          openshift.io/cluster-monitoring: "true"
      1 Specify these values as shown.
    2. Create the Namespace:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f clo-namespace.yaml
  2. Install the Cluster Logging Operator:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

    2. Choose Cluster Logging from the list of available Operators, and click Install.

    3. On the Create Operator Subscription page, under A specific namespace on the cluster, select openshift-logging. Then, click Subscribe.

  3. Verify that the Cluster Logging Operator is installed:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that Cluster Logging is listed in the openshift-logging project with a Status of InstallSucceeded.

      During installation, an Operator might display a Failed status. If the Operator then installs with an InstallSucceeded message, you can safely ignore the Failed message.

      If the Operator does not appear as installed, to troubleshoot further:

      • Switch to the Operators → Installed Operators page and inspect the Status column for any errors or failures.

      • Switch to the Workloads → Pods page and check the logs in any Pods in the openshift-logging and openshift-operators-redhat projects that are reporting issues, as shown in the sketch that follows this list.
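
      From the CLI, the equivalent checks look like the following (an illustrative sketch; <pod-name> is a placeholder for a Pod that is reporting issues):

      $ oc get pods -n openshift-logging
      $ oc logs <pod-name> -n openshift-logging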

  4. Create a cluster logging instance:

    1. Switch to the Administration → Custom Resource Definitions page.

    2. On the Custom Resource Definitions page, click ClusterLogging.

    3. On the Custom Resource Definition Overview page, select View Instances from the Actions menu.

    4. On the ClusterLoggings page, click Create ClusterLogging.

      You might have to refresh the page to load the data.

    5. In the YAML field, replace the code with the following:

      This default cluster logging configuration should support a wide array of environments. Review the topics on tuning and configuring the cluster logging components for information on modifications you can make to your cluster logging cluster.

      apiVersion: "logging.openshift.io/v1"
      kind: "Clusterlogging"
      metadata:
        name: "instance" (1)
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"  (2)
        logStore:
          type: "elasticsearch"  (3)
          elasticsearch:
            nodeCount: 3 (4)
            storage:
              storageClassName: gp2 (5)
              size: 200G
            redundancyPolicy: "SingleRedundancy"
        visualization:
          type: "kibana"  (6)
          kibana:
            replicas: 1
        curation:
          type: "curator"  (7)
          curator:
            schedule: "30 3 * * *"
        collection:
          logs:
            type: "fluentd"  (8)
            fluentd: {}
      1 The name must be instance.
      2 The cluster logging management state. In most cases, if you change the cluster logging defaults, you must set this to Unmanaged. However, an unmanaged deployment does not receive updates until cluster logging is placed back into the Managed state. For more information, see Changing cluster logging management state.
      3 Settings for configuring Elasticsearch. Using the CR, you can configure shard replication policy and persistent storage. For more information, see Configuring Elasticsearch.
      4 Specify the number of Elasticsearch nodes. See the note that follows this list.
      5 Specify that each Elasticsearch node in the cluster is bound to a Persistent Volume Claim.
      6 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see Configuring Kibana.
      7 Settings for configuring Curator. Using the CR, you can set the Curator schedule. For more information, see Configuring Curator.
      8 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see Configuring Fluentd.

      The maximum number of Elasticsearch master nodes is three. If you specify a nodeCount greater than 3, OpenShift Container Platform creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.

      For example, if nodeCount=4, the following nodes are created:

      $ oc get deployment
      
      NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
      cluster-logging-operator       1/1     1            1           18h
      elasticsearch-cd-x6kdekli-1    0/1     1            0           6m54s
      elasticsearch-cdm-x6kdekli-1   1/1     1            1           18h
      elasticsearch-cdm-x6kdekli-2   0/1     1            0           6m49s
      elasticsearch-cdm-x6kdekli-3   0/1     1            0           6m44s

      The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. For example, with nodeCount set to 4, all four nodes carry the data role, so the index templates use four primary shards.

    6. Click Create. This creates the ClusterLogging custom resource and an Elasticsearch custom resource, which you can edit to make changes to your cluster logging cluster.

  5. Verify the install:

    1. Switch to the WorkloadsPods page.

    2. Select the openshift-logging project.

      You should see several Pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:

      • cluster-logging-operator-cb795f8dc-xkckc

      • elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz

      • elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv

      • elasticsearch-cdm-b3nqzchd-3-588c65-clg7g

      • fluentd-2c7dg

      • fluentd-9z7kk

      • fluentd-br7r2

      • fluentd-fn2sb

      • fluentd-pb2f8

      • fluentd-zqgqx

      • kibana-7fb4fd4cc9-bvt4p

Install the Cluster Logging Operator using the CLI

You can use the OpenShift Container Platform CLI to install the Cluster Logging Operator. The Cluster Logging Operator creates and manages the components of the logging stack.

Procedure

To install the Cluster Logging Operator using the CLI:

  1. Create a Namespace for the Cluster Logging Operator:

    1. Create a Namespace object YAML file (for example, clo-namespace.yaml) for the Cluster Logging Operator:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-logging
        annotations:
          openshift.io/node-selector: ""
        labels:
          openshift.io/cluster-logging: "true"
          openshift.io/cluster-monitoring: "true"
    2. Create the Namespace:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f clo-namespace.yaml
  2. Install the Cluster Logging Operator by creating the following objects:

    1. Create an OperatorGroup object YAML file (for example, clo-og.yaml) for the Cluster Logging Operator:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: cluster-logging
        namespace: openshift-logging (1)
      spec:
        targetNamespaces:
        - openshift-logging (1)
      1 You must specify the openshift-logging Namespace.
    2. Create the OperatorGroup object:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f clo-og.yaml
    3. Create a Subscription object YAML file (for example, clo-sub.yaml) to subscribe a Namespace to an Operator.

      Example Subscription
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: cluster-logging
        namespace: openshift-logging (1)
      spec:
        channel: "4.2" (2)
        name: cluster-logging
        source: redhat-operators (3)
        sourceNamespace: openshift-marketplace
      1 You must specify the openshift-logging Namespace.
      2 Specify 4.2 as the channel. You can list the channels the catalog provides for the cluster-logging package, as shown in the sketch after this list.
      3 Specify redhat-operators. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
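
      Before you create the Subscription, you can list the channels the catalog offers for the cluster-logging package (an illustrative sketch; the output depends on your cluster and catalog):

      $ oc get packagemanifest cluster-logging -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'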
    4. Create the Subscription object:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f clo-sub.yaml

      The Cluster Logging Operator is installed to the openshift-logging Namespace.

  3. Verify the Operator installation.

    There should be a Cluster Logging Operator in the openshift-logging Namespace. The version number might differ from the one shown here.

    $ oc get csv --all-namespaces
    
    NAMESPACE                                               NAME                                         DISPLAY                  VERSION               REPLACES   PHASE
    ...
    openshift-logging                                       clusterlogging.4.2.1-202002032140            Cluster Logging          4.2.1-202002032140               Succeeded
    ...
  4. Create a cluster logging instance:

    1. Create an instance object YAML file (for example, clo-instance.yaml) for the Cluster Logging Operator:

      This default cluster logging configuration should support a wide array of environments. Review the topics on tuning and configuring the cluster logging components for information about the modifications you can make to your cluster logging cluster.

      apiVersion: "logging.openshift.io/v1"
      kind: "Clusterlogging"
      metadata:
        name: "instance" (1)
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"  (2)
        logStore:
          type: "elasticsearch"  (3)
          elasticsearch:
            nodeCount: 3 (4)
            storage:
              storageClassName: gp2 (5)
              size: 200G
            redundancyPolicy: "SingleRedundancy"
        visualization:
          type: "kibana"  (6)
          kibana:
            replicas: 1
        curation:
          type: "curator"  (7)
          curator:
            schedule: "30 3 * * *"
        collection:
          logs:
            type: "fluentd"  (8)
            fluentd: {}
      1 The name must be instance.
      2 The cluster logging management state. In most cases, if you change the cluster logging defaults, you must set this to Unmanaged. However, an unmanaged deployment does not receive updates until cluster logging is placed back into the Managed state. For more information, see Changing cluster logging management state. A CLI sketch of this change follows this list.
      3 Settings for configuring Elasticsearch. Using the Custom Resource (CR), you can configure shard replication policy and persistent storage. For more information, see Configuring Elasticsearch.
      4 Specify the number of Elasticsearch nodes. See the note that follows this list.
      5 Specify that each Elasticsearch node in the cluster is bound to a Persistent Volume Claim.
      6 Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see Configuring Kibana.
      7 Settings for configuring Curator. Using the CR, you can set the Curator schedule. For more information, see Configuring Curator.
      8 Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see Configuring Fluentd.
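
      For example, to move an existing deployment to the Unmanaged state from the CLI, you could patch the instance (an illustrative sketch, assuming the instance shown above already exists):

      $ oc patch clusterlogging instance -n openshift-logging --type merge -p '{"spec":{"managementState":"Unmanaged"}}'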

      The maximum number of Elasticsearch master nodes is three. If you specify a nodeCount greater than 3, OpenShift Container Platform creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.

      For example, if nodeCount=4, the following nodes are created:

      $ oc get deployment
      
      NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
      cluster-logging-operator       1/1     1            1           18h
      elasticsearch-cd-x6kdekli-1    0/1     1            0           6m54s
      elasticsearch-cdm-x6kdekli-1   1/1     1            1           18h
      elasticsearch-cdm-x6kdekli-2   0/1     1            0           6m49s
      elasticsearch-cdm-x6kdekli-3   0/1     1            0           6m44s

      The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. For example, with nodeCount set to 4, all four nodes carry the data role, so the index templates use four primary shards.

    2. Create the instance:

      $ oc create -f <file-name>.yaml

      For example:

      $ oc create -f clo-instance.yaml
  5. Verify the install by listing the Pods in the openshift-logging project.

    You should see several Pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:

    $ oc get pods -n openshift-logging
    
    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-66f77ffccb-ppzbg       1/1     Running   0          7m
    elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp    2/2     Running   0          2m40s
    elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc   2/2     Running   0          2m36s
    elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2   2/2     Running   0          2m4s
    fluentd-587vb                                   1/1     Running   0          2m26s
    fluentd-7mpb9                                   1/1     Running   0          2m30s
    fluentd-flm6j                                   1/1     Running   0          2m33s
    fluentd-gn4rn                                   1/1     Running   0          2m26s
    fluentd-nlgb6                                   1/1     Running   0          2m30s
    fluentd-snpkt                                   1/1     Running   0          2m28s
    kibana-d6d5668c5-rppqm                          2/2     Running   0          2m39s
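
    You can also inspect the ClusterLogging instance itself (an illustrative check; the contents of the status stanza vary by cluster):

    $ oc get clusterlogging instance -n openshift-logging -o yaml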

Additional resources

For more information on installing Operators, see Installing Operators from the OperatorHub.