Updating cluster logging | Logging | OpenShift Container Platform 4.6

After updating the OpenShift Container Platform cluster from 4.4 to 4.5, you can then update the OpenShift Elasticsearch Operator and the Cluster Logging Operator from 4.4 to 4.5.

Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.

Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.

Due to the nature of these changes, you are not required to update your cluster logging to 4.5. However, when you update to OpenShift Container Platform 4.6, you must update cluster logging to 4.6 at that time.

Updating cluster logging

After updating the OpenShift Container Platform cluster, you can update cluster logging from 4.5 to 4.6 by changing the subscription for the OpenShift Elasticsearch Operator and the Cluster Logging Operator.

When you update:

  • You must update the OpenShift Elasticsearch Operator before updating the Cluster Logging Operator.

  • You must update both the OpenShift Elasticsearch Operator and the Cluster Logging Operator.

    Kibana is unusable when the OpenShift Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.

    If you update the Cluster Logging Operator before the OpenShift Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
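    For example, you can delete the pod from the CLI. This is a minimal sketch that assumes the Cluster Logging Operator pod runs in the openshift-logging project and carries the name=cluster-logging-operator label; adjust the selector if your labels differ:

    $ oc delete pod -n openshift-logging -l name=cluster-logging-operator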

If your cluster logging version is prior to 4.5, you must upgrade cluster logging to 4.5 before updating to 4.6.

Prerequisites
  • Update the OpenShift Container Platform cluster from 4.5 to 4.6.

  • Make sure the cluster logging status is healthy:

    • All pods are ready.

    • The Elasticsearch cluster is healthy.

  • Back up your Elasticsearch and Kibana data.
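For example, you can spot-check the cluster logging health from the CLI. This is a minimal sketch that assumes the default openshift-logging project; the es_cluster_health utility is the same one used later in this procedure:

  $ oc get pods -n openshift-logging
  $ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- es_cluster_health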

Procedure
  1. Update the OpenShift Elasticsearch Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-operators-redhat project.

    3. Click the OpenShift Elasticsearch Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select 4.6 and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

      The OpenShift Elasticsearch Operator is shown as 4.6. For example:

      OpenShift Elasticsearch Operator
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.
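      You can also watch the update from the CLI. The following is a minimal sketch that assumes the Operator is installed in the openshift-operators-redhat project; the PHASE column of the ClusterServiceVersion reports Succeeded when the update completes:

      $ oc get csv -n openshift-operators-redhat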

  2. Update the Cluster Logging Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-logging project.

    3. Click the Cluster Logging Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select 4.6 and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

      The Cluster Logging Operator is shown as 4.6. For example:

      Cluster Logging
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.

  3. Check the logging components:

    1. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch
      Example output
      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
    2. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health
      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
        ...
      }
    3. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging
      $ oc get cronjob
      NAME                      SCHEDULE              SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      curator                   30 3,9,15,21 * * *    False     0        <none>          20s
      elasticsearch-im-app      */15 * * * *          False     0        <none>          56s
      elasticsearch-im-audit    */15 * * * *          False     0        <none>          56s
      elasticsearch-im-infra    */15 * * * *          False     0        <none>          56s
    4. Verify that the log store is updated to 4.6 and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

      Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.

      Sample output with indices in a green status
      Tue Jun 30 14:30:54 UTC 2020
      health status index                                                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      green  open   infra-000008                                                          bnBvUFEXTWi92z3zWAzieQ   3 1       222195            0        289            144
      green  open   infra-000004                                                          rtDSzoqsSl6saisSK7Au1Q   3 1       226717            0        297            148
      green  open   infra-000012                                                          RSf_kUwDSR2xEuKRZMPqZQ   3 1       227623            0        295            147
      green  open   .kibana_7                                                             1SJdCqlZTPWlIAaOUd78yg   1 1            4            0          0              0
      green  open   infra-000010                                                          iXwL3bnqTuGEABbUDa6OVw   3 1       248368            0        317            158
      green  open   infra-000009                                                          YN9EsULWSNaxWeeNvOs0RA   3 1       258799            0        337            168
      green  open   infra-000014                                                          YP0U6R7FQ_GVQVQZ6Yh9Ig   3 1       223788            0        292            146
      green  open   infra-000015                                                          JRBbAbEmSMqK5X40df9HbQ   3 1       224371            0        291            145
      green  open   .orphaned.2020.06.30                                                  n_xQC2dWQzConkvQqei3YA   3 1            9            0          0              0
      green  open   infra-000007                                                          llkkAVSzSOmosWTSAJM_hg   3 1       228584            0        296            148
      green  open   infra-000005                                                          d9BoGQdiQASsS3BBFm2iRA   3 1       227987            0        297            148
      green  open   infra-000003                                                          1-goREK1QUKlQPAIVkWVaQ   3 1       226719            0        295            147
      green  open   .security                                                             zeT65uOuRTKZMjg_bbUc1g   1 1            5            0          0              0
      green  open   .kibana-377444158_kubeadmin                                           wvMhDwJkR-mRZQO84K0gUQ   3 1            1            0          0              0
      green  open   infra-000006                                                          5H-KBSXGQKiO7hdapDE23g   3 1       226676            0        295            147
      green  open   infra-000001                                                          eH53BQ-bSxSWR5xYZB6lVg   3 1       341800            0        443            220
      green  open   .kibana-6                                                             RVp7TemSSemGJcsSUmuf3A   1 1            4            0          0              0
      green  open   infra-000011                                                          J7XWBauWSTe0jnzX02fU6A   3 1       226100            0        293            146
      green  open   app-000001                                                            axSAFfONQDmKwatkjPXdtw   3 1       103186            0        126             57
      green  open   infra-000016                                                          m9c1iRLtStWSF1GopaRyCg   3 1        13685            0         19              9
      green  open   infra-000002                                                          Hz6WvINtTvKcQzw-ewmbYg   3 1       228994            0        296            148
      green  open   infra-000013                                                          KR9mMFUpQl-jraYtanyIGw   3 1       228166            0        298            148
      green  open   audit-000001                                                          eERqLdLmQOiQDFES1LBATQ   3 1            0            0          0              0
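      To spot any indices that are not in a green status, you can filter the same output, for example:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices | grep -v green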
    5. Verify that the log collector is updated to 4.6:

      $ oc get ds fluentd -o json | grep fluentd-init

      Verify that the output includes a fluentd-init container:

      "containerName": "fluentd-init"
    6. Verify that the log visualizer is updated to 4.6 using the Kibana CRD:

      $ oc get kibana kibana -o json

      Verify that the output includes a Kibana pod with the ready status:

      Sample output with a ready Kibana pod
      [
        {
          "clusterCondition": {
            "kibana-5fdd766ffd-nb2jj": [
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              },
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              }
            ]
          },
          "deployment": "kibana",
          "pods": {
            "failed": [],
            "notReady": [],
            "ready": []
          },
          "replicaSets": [
            "kibana-5fdd766ffd"
          ],
          "replicas": 1
        }
      ]
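      You can also confirm that the Kibana pod itself is ready. The following check is a minimal sketch that assumes the Kibana pods carry the component=kibana label, mirroring the component=elasticsearch selector used earlier in this procedure:

      $ oc get pod -n openshift-logging --selector component=kibana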
    7. Verify that the Curator is updated to 4.6:

      $ oc get cronjob -o name
      cronjob.batch/curator
      cronjob.batch/elasticsearch-im-app
      cronjob.batch/elasticsearch-im-audit
      cronjob.batch/elasticsearch-im-infra

      Verify that the output includes the elasticsearch-im-* cron jobs.

Post-update tasks

If you use the Log Forwarding API to forward logs, after the OpenShift Elasticsearch Operator and the Cluster Logging Operator are fully updated to 4.6, you must replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.

Updating log forwarding custom resources

The OpenShift Container Platform Log Forwarding API has been promoted from Technology Preview to Generally Available in OpenShift Container Platform 4.6. The GA release contains improvements and enhancements that require you to change your ClusterLogging custom resource (CR) and to replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.

Sample ClusterLogForwarder instance in OpenShift Container Platform 4.6
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
....
spec:
  outputs:
  - url: http://remote.elasticsearch.com:9200
    name: elasticsearch
    type: elasticsearch
  - url: tls://fluentdserver.example.com:24224
    name: fluentd
    type: fluentdForward
    secret:
      name: fluentdserver
  pipelines:
  - inputRefs:
      - infrastructure
      - application
    name: mylogs
    outputRefs:
      - elasticsearch
  - inputRefs:
      - audit
    name: auditlogs
    outputRefs:
      - fluentd
      - default
...
Sample LogForwarding CR in OpenShift Container Platform 4.5
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
   - name: elasticsearch
     type: elasticsearch
     endpoint: remote.elasticsearch.com:9200
   - name: fluentd
     type: forward
     endpoint: fluentdserver.example.com:24224
     secret:
       name: fluentdserver
  pipelines:
   - inputSource: logs.infra
     name: infra-logs
     outputRefs:
     - elasticsearch
   - inputSource: logs.app
     name: app-logs
     outputRefs:
     - elasticsearch
   - inputSource: logs.audit
     name: audit-logs
     outputRefs:
     - fluentd

The following procedure shows each parameter you must change.

Procedure

To update the LogForwarding CR in 4.5 to the ClusterLogForwarder CR for 4.6, make the following modifications:

  1. Edit the ClusterLogging custom resource (CR) to remove the logforwardingtechpreview annotation:

    Sample ClusterLogging CR
    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      annotations:
        clusterlogging.openshift.io/logforwardingtechpreview: enabled (1)
      name: "instance"
      namespace: "openshift-logging"
    ....
    1 Remove the logforwardingtechpreview annotation.
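    Alternatively, you can remove the annotation from the CLI. This sketch assumes the ClusterLogging CR is named instance in the openshift-logging project; the trailing hyphen tells oc annotate to remove the annotation:

    $ oc annotate clusterlogging instance -n openshift-logging clusterlogging.openshift.io/logforwardingtechpreview-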
  2. Export the LogForwarding CR to create a YAML file for the ClusterLogForwarder instance:

    $ oc get LogForwarding instance -n openshift-logging -o yaml | tee ClusterLogForwarder.yaml
  3. Edit the YAML file to make the following modifications:

    Sample ClusterLogForwarder instance in OpenShift Container Platform 4.6
    apiVersion: logging.openshift.io/v1 (1)
    kind: ClusterLogForwarder (2)
    metadata:
      name: instance
      namespace: openshift-logging
    ....
    spec: (3)
      outputs:
      - url: http://remote.elasticsearch.com:9200 (4)
        name: elasticsearch
        type: elasticsearch
      - url: tls://fluentdserver.example.com:24224
        name: fluentd
        type: fluentdForward (5)
        secret:
          name: fluentdserver
      pipelines:
      - inputRefs: (6)
          - infrastructure
          - application
        name: mylogs
        outputRefs:
          - elasticsearch
      - inputRefs:
          - audit
        name: auditlogs
        outputRefs:
          - fluentd
          - default (7)
    ...
    1 Change the apiVersion from "logging.openshift.io/v1alpha1" to "logging.openshift.io/v1".
    2 Change the object kind from kind: "LogForwarding" to kind: "ClusterLogForwarder".
    3 Remove the disableDefaultForwarding: true parameter.
    4 Change the output parameter from spec.outputs.endpoint to spec.outputs.url. Add a prefix to the URL, such as https://, tcp://, and so forth, if a prefix is not present.
    5 For Fluentd outputs, change the type from forward to fluentdForward.
    6 Change the pipelines:
    • Change spec.pipelines.inputSource to spec.pipelines.inputRefs

    • Change logs.infra to infrastructure

    • Change logs.app to application

    • Change logs.audit to audit

    7 Optional: Add the default output to a pipeline to send logs to the internal Elasticsearch instance. You are not required to configure a default output.

    If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, do not configure the Log Forwarding API.

  4. Create the CR object:

    $ oc create -f ClusterLogForwarder.yaml
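    To confirm that the new CR was created, you can read it back from the cluster; the resource name instance matches the metadata in the sample above:

    $ oc get clusterlogforwarder instance -n openshift-logging -o yaml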

For information on the new capabilities of the Log Forwarding API, see Forwarding logs to third party systems.