Updating logging

To upgrade from cluster logging in OKD version 4.6 and earlier to OpenShift logging 5.x, you update the OKD cluster to version 4.7 or 4.8. Then, you update the following operators:

  • From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x

  • From Cluster Logging Operator 4.x to Red Hat OpenShift logging Operator 5.x

To upgrade from a previous version of OpenShift logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift logging Operator to their current versions.
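
Because the upgrade path depends on the OKD version and the logging version you are currently running, it can help to check both from the CLI first. The following is a minimal sketch, assuming the logging Operator is installed in the usual openshift-logging project; the ClusterServiceVersion (CSV) names include the installed version and differ between releases.

  $ oc get clusterversion
  $ oc get csv -n openshift-logging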

Updating logging to the current version

To update logging to the current version, you change the subscriptions for the OpenShift Elasticsearch Operator and Red Hat OpenShift logging Operator.

You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift logging Operator. You must also update both Operators to the same version.

If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift logging Operator pod. When the Red Hat OpenShift logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.
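If you prefer the CLI to the web console procedure that follows, you can make the same channel changes by patching the Subscription objects. The sketch below assumes the default Subscription names created by an OperatorHub install (elasticsearch-operator and cluster-logging); list the Subscriptions first and substitute the names used in your cluster. The final command sketches the workaround described above, assuming the Red Hat OpenShift logging Operator pod carries the name=cluster-logging-operator label.

  $ oc get subscriptions -n openshift-operators-redhat
  $ oc patch subscription elasticsearch-operator -n openshift-operators-redhat \
      --type merge -p '{"spec":{"channel":"stable-5.x"}}'

  $ oc get subscriptions -n openshift-logging
  $ oc patch subscription cluster-logging -n openshift-logging \
      --type merge -p '{"spec":{"channel":"stable-5.x"}}'

  $ oc delete pod -n openshift-logging -l name=cluster-logging-operator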

Prerequisites
Procedure
  1. Update the OpenShift Elasticsearch Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-operators-redhat project.

    3. Click the OpenShift Elasticsearch Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select stable-5.x and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

    7. Verify that the OpenShift Elasticsearch Operator version is 5.x.x.

    8. Wait for the Status field to report Succeeded.

  2. Update the Red Hat OpenShift logging Operator:

    1. From the web console, click Operators → Installed Operators.

    2. Select the openshift-logging project.

    3. Click the Red Hat OpenShift logging Operator.

    4. Click Subscription → Channel.

    5. In the Change Subscription Update Channel window, select stable-5.x and click Save.

    6. Wait for a few seconds, then click Operators → Installed Operators.

    7. Verify that the Red Hat OpenShift logging Operator version is 5.y.z.

    8. Wait for the Status field to report Succeeded. A CLI alternative for checking both Operator versions is sketched after this procedure.

  3. Check the logging components:

    1. Ensure that all Elasticsearch pods are in the Ready status:

      $ oc get pod -n openshift-logging --selector component=elasticsearch
      Example output
      NAME                                            READY   STATUS    RESTARTS   AGE
      elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
      elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
      elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
    2. Ensure that the Elasticsearch cluster is healthy:

      $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
      {
        "cluster_name" : "elasticsearch",
        "status" : "green",
      }
    3. Ensure that the Elasticsearch cron jobs are created:

      $ oc project openshift-logging
      $ oc get cronjob
      NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
      elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
      elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
      elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
    4. Verify that the log store is updated to 5.x and the indices are green:

      $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
    5. Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.

      Sample output with indices in a green status
      Tue Jun 30 14:30:54 UTC 2020
      health status index                                                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
      green  open   infra-000008                                                          bnBvUFEXTWi92z3zWAzieQ   3 1       222195            0        289            144
      green  open   infra-000004                                                          rtDSzoqsSl6saisSK7Au1Q   3 1       226717            0        297            148
      green  open   infra-000012                                                          RSf_kUwDSR2xEuKRZMPqZQ   3 1       227623            0        295            147
      green  open   .kibana_7                                                             1SJdCqlZTPWlIAaOUd78yg   1 1            4            0          0              0
      green  open   infra-000010                                                          iXwL3bnqTuGEABbUDa6OVw   3 1       248368            0        317            158
      green  open   infra-000009                                                          YN9EsULWSNaxWeeNvOs0RA   3 1       258799            0        337            168
      green  open   infra-000014                                                          YP0U6R7FQ_GVQVQZ6Yh9Ig   3 1       223788            0        292            146
      green  open   infra-000015                                                          JRBbAbEmSMqK5X40df9HbQ   3 1       224371            0        291            145
      green  open   .orphaned.2020.06.30                                                  n_xQC2dWQzConkvQqei3YA   3 1            9            0          0              0
      green  open   infra-000007                                                          llkkAVSzSOmosWTSAJM_hg   3 1       228584            0        296            148
      green  open   infra-000005                                                          d9BoGQdiQASsS3BBFm2iRA   3 1       227987            0        297            148
      green  open   infra-000003                                                          1-goREK1QUKlQPAIVkWVaQ   3 1       226719            0        295            147
      green  open   .security                                                             zeT65uOuRTKZMjg_bbUc1g   1 1            5            0          0              0
      green  open   .kibana-377444158_kubeadmin                                           wvMhDwJkR-mRZQO84K0gUQ   3 1            1            0          0              0
      green  open   infra-000006                                                          5H-KBSXGQKiO7hdapDE23g   3 1       226676            0        295            147
      green  open   infra-000001                                                          eH53BQ-bSxSWR5xYZB6lVg   3 1       341800            0        443            220
      green  open   .kibana-6                                                             RVp7TemSSemGJcsSUmuf3A   1 1            4            0          0              0
      green  open   infra-000011                                                          J7XWBauWSTe0jnzX02fU6A   3 1       226100            0        293            146
      green  open   app-000001                                                            axSAFfONQDmKwatkjPXdtw   3 1       103186            0        126             57
      green  open   infra-000016                                                          m9c1iRLtStWSF1GopaRyCg   3 1        13685            0         19              9
      green  open   infra-000002                                                          Hz6WvINtTvKcQzw-ewmbYg   3 1       228994            0        296            148
      green  open   infra-000013                                                          KR9mMFUpQl-jraYtanyIGw   3 1       228166            0        298            148
      green  open   audit-000001                                                          eERqLdLmQOiQDFES1LBATQ   3 1            0            0          0              0
    6. Verify that the log collector is updated:

      $ oc get ds collector -o json | grep collector
    7. Verify that the output includes a collector container:

      "containerName": "collector"
    8. Verify that the log visualizer is updated to 5.x using the Kibana CRD:

      $ oc get kibana kibana -o json
    9. Verify that the output includes a Kibana pod with the ready status:

      Sample output with a ready Kibana pod
      [
        {
          "clusterCondition": {
            "kibana-5fdd766ffd-nb2jj": [
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              },
              {
                "lastTransitionTime": "2020-06-30T14:11:07Z",
                "reason": "ContainerCreating",
                "status": "True",
                "type": ""
              }
            ]
          },
          "deployment": "kibana",
          "pods": {
            "failed": [],
            "notReady": [],
            "ready": []
          },
          "replicaSets": [
            "kibana-5fdd766ffd"
          ],
          "replicas": 1
        }
      ]
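
As a CLI alternative to checking the Operator versions and the Succeeded status in the web console, you can read the same information from the ClusterServiceVersion objects. This is a minimal sketch; the CSV names include the installed version and vary between releases, and the PHASE column reports Succeeded when an update completes.

  $ oc get csv -n openshift-operators-redhat
  $ oc get csv -n openshift-logging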