Updating cluster logging

Prerequisites

  • Update the OpenShift Container Platform cluster from 4.4 to 4.6.

  • Ensure the cluster logging status is healthy (a quick CLI check follows this list):

    • All pods are ready.

    • The Elasticsearch cluster is healthy.

  • Back up your Elasticsearch and Kibana data.
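
A minimal CLI check for these prerequisites, reusing the commands from the verification steps later in this topic (replace <any_es_pod_in_the_cluster> with the name of one of your Elasticsearch pods):

  $ oc get pod -n openshift-logging
  $ oc exec -n openshift-logging -c elasticsearch <any_es_pod_in_the_cluster> -- es_cluster_health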

Updating cluster logging from 4.4

When you update the OpenShift Container Platform cluster, you can update cluster logging from 4.4 to 4.6 by changing the subscriptions for the Cluster Logging Operator and the Elasticsearch Operator.

When your cluster logging version is 4.4, you must update cluster logging as follows because of changes in the way that Operator Lifecycle Manager (OLM) provides Operator packages.

Additionally, update the Cluster Logging Operator before the Elasticsearch Operator to avoid having to restart each Elasticsearch pod manually, one by one.

Perform each update, version by version, in the following order (a CLI sketch for changing the subscription channel follows this list):

  1. Update the OpenShift Container Platform cluster to 4.5.

  2. Update the Cluster Logging Operator to the latest version of 4.4, then update to 4.5.

  3. Update the Elasticsearch Operator to the latest version of 4.4, then update to 4.5.

  4. Update the OpenShift Container Platform cluster to 4.6.

  5. Update the Cluster Logging Operator to the latest version of 4.5, then update to 4.6.

  6. Update the Elasticsearch Operator to the latest version of 4.5, then update to 4.6.
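
You can perform each Operator update from the web console, as in the procedure below, or by switching the Subscription channel from the CLI. A minimal sketch for moving the Cluster Logging Operator subscription from 4.4 to 4.5, assuming the default Subscription name cluster-logging (verify yours with the first command):

  $ oc get subscription -n openshift-logging -o name
  $ oc patch subscription cluster-logging -n openshift-logging --type merge -p '{"spec":{"channel":"4.5"}}'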

If you have already updated to OpenShift 4.6 while cluster logging is still at version 4.4, be aware that the redhat-operator:v4.6 index image does not contain any cluster logging 4.5 metadata, so it is not possible to update logging one version at a time. Instead, you must update the Cluster Logging Operator and the Elasticsearch Operator from 4.4 to 4.6 directly, using one of the following options. Depending on which option you choose, you can run into the following known issues.

  • Update the Cluster Logging Operator from 4.4 to 4.6 first, then update the Elasticsearch Operator from 4.4 to 4.6.

    • Known issue: Fluentd pods remain in the Init:CrashLoopBackOff state until the Elasticsearch Operator is updated to 4.6.

    • Solution: After the Elasticsearch Operator is updated to 4.6, the issue resolves itself.

  • Update the Elasticsearch Operator from 4.4 to 4.6 first, then update the Cluster Logging Operator from 4.4 to 4.6.

    • Known issue: Fluentd can fail to connect to Elasticsearch due to a certificate issue.

    • Solution: Restart the Elasticsearch pod, as shown below.
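
A minimal sketch of that restart, using the component=elasticsearch pod label from the verification steps below; when you delete a pod, its Elasticsearch deployment recreates it:

  $ oc get pod -n openshift-logging --selector component=elasticsearch
  $ oc delete pod -n openshift-logging <elasticsearch_pod_name>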

Procedure
  1. Update the Cluster Logging Operator:

    1. From the OpenShift web console, click Operators → Installed Operators.

    2. Select the openshift-logging project.

    3. Click Cluster Logging.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select the version that is one version newer than your current version.

    6. On the YAML tab, set the retention policy in the ClusterLogging custom resource (CR) as follows. You must specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time unit, for example, 7d for seven days or 1h for one hour. The application maxAge can be less than or equal to 7d. The infra maxAge and the audit maxAge must be 1h, and you must set these values explicitly to override the default. Logs older than the maxAge are deleted. You must specify a retention policy for each log source, or the Elasticsearch indices for that source are not created.

      kind: "Clusterlogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"
        logStore:
          type: "elasticsearch"
          retentionPolicy:
            application:
              maxAge: 7d
            infra:
              maxAge: 1h
            audit:
              maxAge: 1h
      ...
    7. Click Save.
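
      If you prefer the CLI, you can apply the same retention policy with a merge patch; a minimal sketch, assuming the CR is named instance as in the example above:

      $ oc patch clusterlogging instance -n openshift-logging --type merge \
          -p '{"spec":{"logStore":{"retentionPolicy":{"application":{"maxAge":"7d"},"infra":{"maxAge":"1h"},"audit":{"maxAge":"1h"}}}}}'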

    8. Wait for a few seconds, then click Operators → Installed Operators.

      The Cluster Logging Operator is shown as 4.6. For example:

      Cluster Logging
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.
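
      You can also confirm the Operator status from the CLI; the PHASE column of the ClusterServiceVersion list should report Succeeded:

      $ oc get csv -n openshift-logging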

  2. Update the Elasticsearch Operator:

    1. From the OpenShift web console, click Operators → Installed Operators.

    2. Select the openshift-operators-redhat project.

    3. If the Elasticsearch Operator is installed in the openshift-logging namespace, uninstall the Elasticsearch Operator.

      This uninstall action does not delete the existing data, volumes, and so on.

    4. Click Operators → OperatorHub. Search for and select OpenShift Elasticsearch Operator.

    5. Click Install. The Install Operator page opens.

    6. For Update Channel, select the version that is one version newer than your current version.

    7. For Installed Namespace, select the openshift-operators-redhat namespace.

    8. Click Install.

    9. Wait for a few seconds, then click Operators → Installed Operators.

      The Elasticsearch Operator is shown as 4.6. For example:

      Elasticsearch Operator
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.

  3. Check the logging components:

    1. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch
      Example output
      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
    2. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health
      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
      }
      ....
      
    3. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging
      $ oc get cronjob
      Example output
      NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      curator                  */10 * * * *   False     0        <none>          109s
      elasticsearch-im-app     */15 * * * *   False     1        19s             107s
      elasticsearch-im-audit   */15 * * * *   False     1        19s             107s
      elasticsearch-im-infra   */15 * * * *   False     1        19s             107s
    4. Verify that the log store is updated to 4.6 and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

      You should see the app-0000x, infra-0000x, audit-0000x, and .security indices.

      It takes approximately 15 minutes for the indices to be updated.

      Example output with indices in a green status
      Tue Jun 30 14:30:54 UTC 2020
      health status index                                                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      green  open   infra-000008                                                          bnBvUFEXTWi92z3zWAzieQ   3 1       222195            0        289            144
      green  open   infra-000004                                                          rtDSzoqsSl6saisSK7Au1Q   3 1       226717            0        297            148
      green  open   infra-000012                                                          RSf_kUwDSR2xEuKRZMPqZQ   3 1       227623            0        295            147
      green  open   .kibana_7                                                             1SJdCqlZTPWlIAaOUd78yg   1 1            4            0          0              0
      green  open   .operations.2020.06.30                                                aOHMYOa3S_69NJFh2t3yrQ   3 1      4206118            0       8998           4499
      green  open   project.local-storage.d5c8a3d6-30a3-4512-96df-67c537540072.2020.06.30 O_Uldg2wS5K_L6FyqWxOZg   3 1        91052            0        135             67
      green  open   infra-000010                                                          iXwL3bnqTuGEABbUDa6OVw   3 1       248368            0        317            158
      green  open   .searchguard                                                          rQhAbWuLQ9iuTsZeHi_2ew   1 1            5           64          0              0
      green  open   infra-000009                                                          YN9EsULWSNaxWeeNvOs0RA   3 1       258799            0        337            168
      green  open   infra-000014                                                          YP0U6R7FQ_GVQVQZ6Yh9Ig   3 1       223788            0        292            146
      green  open   infra-000015                                                          JRBbAbEmSMqK5X40df9HbQ   3 1       224371            0        291            145
      green  open   .orphaned.2020.06.30                                                  n_xQC2dWQzConkvQqei3YA   3 1            9            0          0              0
      green  open   infra-000007                                                          llkkAVSzSOmosWTSAJM_hg   3 1       228584            0        296            148
      green  open   infra-000005                                                          d9BoGQdiQASsS3BBFm2iRA   3 1       227987            0        297            148
      green  open   .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac                      l911Z8dSI23py6GDtyJrA    1 1            5            4          0              0
      green  open   project.ui.29cb9680-864d-43b2-a6cf-134c837d6f0c.2020.06.30            5A_YdRlAT3m1Z-vbqBuGWA   3 1           24            0          0              0
      green  open   infra-000003                                                          1-goREK1QUKlQPAIVkWVaQ   3 1       226719            0        295            147
      green  open   .security                                                             zeT65uOuRTKZMjg_bbUc1g   1 1            5            0          0              0
      green  open   .kibana-377444158_kubeadmin                                           wvMhDwJkR-mRZQO84K0gUQ   3 1            1            0          0              0
      green  open   infra-000006                                                          5H-KBSXGQKiO7hdapDE23g   3 1       226676            0        295            147
      green  open   project.nw.6233ad57-aff0-4d5a-976f-370636f47b11.2020.06.30            dtc6J-nLSCC59EygeV41RQ   3 1           10            0          0              0
      green  open   infra-000001                                                          eH53BQ-bSxSWR5xYZB6lVg   3 1       341800            0        443            220
      green  open   .kibana-6                                                             RVp7TemSSemGJcsSUmuf3A   1 1            4            0          0              0
      green  open   infra-000011                                                          J7XWBauWSTe0jnzX02fU6A   3 1       226100            0        293            146
      green  open   app-000001                                                            axSAFfONQDmKwatkjPXdtw   3 1       103186            0        126             57
      green  open   infra-000016                                                          m9c1iRLtStWSF1GopaRyCg   3 1        13685            0         19              9
      green  open   infra-000002                                                          Hz6WvINtTvKcQzw-ewmbYg   3 1       228994            0        296            148
      green  open   project.qt.2c05acbd-bc12-4275-91ab-84d180b53505.2020.06.30            MUm3eFJjSPKQOJWoHskKqw   3 1        12262            0         14              7
      green  open   infra-000013                                                          KR9mMFUpQl-jraYtanyIGw   3 1       228166            0        298            148
      green  open   audit-000001                                                          eERqLdLmQOiQDFES1LBATQ   3 1            0            0          0              0
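
      To quickly spot any index that is not yet green, you can filter the same output, for example:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices | grep -v green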
    5. Verify that the log collector is updated to 4.6:

      $ oc get ds fluentd -o jsonpath='{.spec.template.spec.initContainers[*].name}'

      You should see a fluentd-init container.

      Example output
      fluentd-init
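
      To confirm that the collector pods themselves are running after the update, a minimal sketch, assuming the Fluentd pods carry the component=fluentd label:

      $ oc get pod -n openshift-logging --selector component=fluentd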
    6. Verify that the log visualizer is updated to 4.6 using the Kibana CRD:

      $ oc get kibana kibana -o json

      You should see a Kibana pod with the ready status:

      Example output with a ready Kibana pod
      [
        {
          "clusterCondition": {
            "kibana-5fdd766ffd-nb2jj": [
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              },
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              }
            ]
          },
          "deployment": "kibana",
          "pods": {
            "failed": [],
            "notReady": [],
            "ready": []
          },
          "replicaSets": [
            "kibana-5fdd766ffd"
          ],
          "replicas": 1
        }
      ]
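
      To pull just the pod status out of that document, a short sketch, assuming jq is installed locally and that the array shown above is the CR's status field:

      $ oc get kibana kibana -o json | jq '.status[0].pods'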
    7. Verify that the Curator is updated to 4.6:

      $ oc get cronjob -o name
      Example output
      cronjob.batch/curator
      cronjob.batch/elasticsearch-im-app
      cronjob.batch/elasticsearch-im-audit
      cronjob.batch/elasticsearch-im-infra

      You should see the elasticsearch-delete-* and elasticsearch-rollover-* cron jobs approximately 30 minutes after an installation or update.
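
      Once those cron jobs appear, you can filter for them directly, for example:

      $ oc get cronjob -o name | grep -E 'elasticsearch-(delete|rollover)'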

Post-update tasks

If you use the Log Forwarding API to forward logs, after the Elasticsearch Operator and the Cluster Logging Operator are fully updated to 4.6, you must replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR.
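
A minimal sketch of a replacement ClusterLogForwarder CR that sends all three log types to the default log store; the field names follow the logging.openshift.io/v1 API, but adapt the pipelines to match your existing LogForwarding configuration:

  apiVersion: "logging.openshift.io/v1"
  kind: "ClusterLogForwarder"
  metadata:
    name: "instance"
    namespace: "openshift-logging"
  spec:
    pipelines:
      - name: forward-all-to-default
        inputRefs:
          - application
          - infrastructure
          - audit
        outputRefs:
          - default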

Updating cluster logging from 4.5

After updating the OpenShift Container Platform cluster, you can update cluster logging from 4.5 to 4.6 by changing the subscriptions for the Elasticsearch Operator and the Cluster Logging Operator.

Updating from 4.5:

  • You must update the Elasticsearch Operator before you update the Cluster Logging Operator.

  • You must update both the Elasticsearch Operator and the Cluster Logging Operator.

    Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.

    If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod, as shown in the sketch after this list. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
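
A minimal sketch of that workaround, assuming the Operator pod carries the name=cluster-logging-operator label used by the default deployment:

    $ oc delete pod -n openshift-logging --selector name=cluster-logging-operator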

Procedure
  1. Update the Elasticsearch Operator:

    1. From the OpenShift web console, click Operators → Installed Operators.

    2. Select the openshift-operators-redhat project.

    3. If the Elasticsearch Operator is installed in the openshift-logging namespace, uninstall the Elasticsearch Operator.

      This uninstall action does not delete the existing data, volumes, and so on.

    4. Click Operators → OperatorHub. Search for and select OpenShift Elasticsearch Operator.

    5. Click Install. The Install Operator page opens.

    6. For Update Channel, make sure 4.6 is selected.

    7. For Installed Namespace, select the openshift-operators-redhat namespace.

    8. Click Install.

    9. Wait for a few seconds, then click Operators → Installed Operators.

      The Elasticsearch Operator is shown as 4.6. For example:

      Elasticsearch Operator
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.

  2. Update the Cluster Logging Operator:

    1. From the OpenShift web console, click Operators → Installed Operators.

    2. Select the openshift-logging project.

    3. Click Cluster Logging.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select 4.6.

    6. On the YAML tab, set the retention policy in the ClusterLogging custom resource (CR) as follows. You must specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time unit, for example, 7d for seven days or 1h for one hour. The application maxAge can be less than or equal to 7d. The infra maxAge and the audit maxAge must be 1h, and you must set these values explicitly to override the default. Logs older than the maxAge are deleted. You must specify a retention policy for each log source, or the Elasticsearch indices for that source are not created.

      kind: "Clusterlogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"
        logStore:
          type: "elasticsearch"
          retentionPolicy:
            application:
              maxAge: 7d
            infra:
              maxAge: 1h
            audit:
              maxAge: 1h
      ...
    7. Click Save.

    8. Wait for a few seconds, then click Operators → Installed Operators.

      The Cluster Logging Operator is shown as 4.6. For example:

      Cluster Logging
      4.6.0-202007012112.p0 provided by Red Hat, Inc

      Wait for the Status field to report Succeeded.

  3. Check the logging components:

    1. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch
      Example output
      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
    2. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health
      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
      }
      ....
      
    3. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging
      $ oc get cronjob
      Example output
      NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      curator                  */10 * * * *   False     0        <none>          109s
      elasticsearch-im-app     */15 * * * *   False     1        19s             107s
      elasticsearch-im-audit   */15 * * * *   False     1        19s             107s
      elasticsearch-im-infra   */15 * * * *   False     1        19s             107s
    4. Verify that the log store is updated to 4.6 and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

      You should see the app-0000x, infra-0000x, audit-0000x, and .security indices.

      It takes approximately 15 minutes for the indices to be updated.

      Example output with indices in a green status
      Tue Jun 30 14:30:54 UTC 2020
      health status index                                                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      green  open   infra-000008                                                          bnBvUFEXTWi92z3zWAzieQ   3 1       222195            0        289            144
      green  open   infra-000004                                                          rtDSzoqsSl6saisSK7Au1Q   3 1       226717            0        297            148
      green  open   infra-000012                                                          RSf_kUwDSR2xEuKRZMPqZQ   3 1       227623            0        295            147
      green  open   .kibana_7                                                             1SJdCqlZTPWlIAaOUd78yg   1 1            4            0          0              0
      green  open   .operations.2020.06.30                                                aOHMYOa3S_69NJFh2t3yrQ   3 1      4206118            0       8998           4499
      green  open   project.local-storage.d5c8a3d6-30a3-4512-96df-67c537540072.2020.06.30 O_Uldg2wS5K_L6FyqWxOZg   3 1        91052            0        135             67
      green  open   infra-000010                                                          iXwL3bnqTuGEABbUDa6OVw   3 1       248368            0        317            158
      green  open   .searchguard                                                          rQhAbWuLQ9iuTsZeHi_2ew   1 1            5           64          0              0
      green  open   infra-000009                                                          YN9EsULWSNaxWeeNvOs0RA   3 1       258799            0        337            168
      green  open   infra-000014                                                          YP0U6R7FQ_GVQVQZ6Yh9Ig   3 1       223788            0        292            146
      green  open   infra-000015                                                          JRBbAbEmSMqK5X40df9HbQ   3 1       224371            0        291            145
      green  open   .orphaned.2020.06.30                                                  n_xQC2dWQzConkvQqei3YA   3 1            9            0          0              0
      green  open   infra-000007                                                          llkkAVSzSOmosWTSAJM_hg   3 1       228584            0        296            148
      green  open   infra-000005                                                          d9BoGQdiQASsS3BBFm2iRA   3 1       227987            0        297            148
      green  open   .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac                      l911Z8dSI23py6GDtyJrA    1 1            5            4          0              0
      green  open   project.ui.29cb9680-864d-43b2-a6cf-134c837d6f0c.2020.06.30            5A_YdRlAT3m1Z-vbqBuGWA   3 1           24            0          0              0
      green  open   infra-000003                                                          1-goREK1QUKlQPAIVkWVaQ   3 1       226719            0        295            147
      green  open   .security                                                             zeT65uOuRTKZMjg_bbUc1g   1 1            5            0          0              0
      green  open   .kibana-377444158_kubeadmin                                           wvMhDwJkR-mRZQO84K0gUQ   3 1            1            0          0              0
      green  open   infra-000006                                                          5H-KBSXGQKiO7hdapDE23g   3 1       226676            0        295            147
      green  open   project.nw.6233ad57-aff0-4d5a-976f-370636f47b11.2020.06.30            dtc6J-nLSCC59EygeV41RQ   3 1           10            0          0              0
      green  open   infra-000001                                                          eH53BQ-bSxSWR5xYZB6lVg   3 1       341800            0        443            220
      green  open   .kibana-6                                                             RVp7TemSSemGJcsSUmuf3A   1 1            4            0          0              0
      green  open   infra-000011                                                          J7XWBauWSTe0jnzX02fU6A   3 1       226100            0        293            146
      green  open   app-000001                                                            axSAFfONQDmKwatkjPXdtw   3 1       103186            0        126             57
      green  open   infra-000016                                                          m9c1iRLtStWSF1GopaRyCg   3 1        13685            0         19              9
      green  open   infra-000002                                                          Hz6WvINtTvKcQzw-ewmbYg   3 1       228994            0        296            148
      green  open   project.qt.2c05acbd-bc12-4275-91ab-84d180b53505.2020.06.30            MUm3eFJjSPKQOJWoHskKqw   3 1        12262            0         14              7
      green  open   infra-000013                                                          KR9mMFUpQl-jraYtanyIGw   3 1       228166            0        298            148
      green  open   audit-000001                                                          eERqLdLmQOiQDFES1LBATQ   3 1            0            0          0              0
    5. Verify that the log collector is updated to 4.6:

      $ oc get ds fluentd -o jsonpath='{.spec.template.spec.initContainers[*].name}'

      You should see a fluentd-init container.

      Example output
      fluentd-init
    6. Verify that the log visualizer is updated to 4.6 using the Kibana CRD:

      $ oc get kibana kibana -o json

      You should see a Kibana pod with the ready status:

      Example output with a ready Kibana pod
      [
        {
          "clusterCondition": {
            "kibana-5fdd766ffd-nb2jj": [
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              },
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              }
            ]
          },
          "deployment": "kibana",
          "pods": {
            "failed": [],
            "notReady": [],
            "ready": []
          },
          "replicaSets": [
            "kibana-5fdd766ffd"
          ],
          "replicas": 1
        }
      ]
    7. Verify that the Curator is updated to 4.6:

      $ oc get cronjob -o name
      Example output
      cronjob.batch/curator
      cronjob.batch/elasticsearch-im-app
      cronjob.batch/elasticsearch-im-audit
      cronjob.batch/elasticsearch-im-infra

      You should see the elasticsearch-delete-* and elasticsearch-rollover-* cron jobs approximately 30 minutes after an installation or update.

Post-update tasks

If you use the Log Forwarding API to forward logs, after the Elasticsearch Operator and the Cluster Logging Operator are fully updated to 4.6, you must replace your LogForwarding custom resource (CR) with a ClusterLogForwarder CR, as described in the post-update tasks above.