After updating the OpenShift Container Platform cluster from 4.4 to 4.5, you can then update the Elasticsearch Operator and the Cluster Logging Operator from 4.4 to 4.5.
Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.
Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.
Due to the nature of these changes, you are not required to update your cluster logging to 4.5. However, when you update to OpenShift Container Platform 4.6, you must update cluster logging to 4.6 at that time.
After updating the OpenShift Container Platform cluster, you can update cluster logging from 4.4 to 4.5 by changing the subscription for the Elasticsearch Operator and the Cluster Logging Operator.
When you update:
You must update the Elasticsearch Operator before updating the Cluster Logging Operator.
You must update both the Elasticsearch Operator and the Cluster Logging Operator.
Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated.
If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod, for example by using the command shown after this list. When the Cluster Logging Operator pod redeploys, the Kibana CR is created.
If your cluster logging version is prior to 4.4, you must upgrade cluster logging to 4.4 before updating to 4.5.
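If you need to apply the workaround described above, you can delete the Cluster Logging Operator pod with a command similar to the following. This is only a sketch; the name=cluster-logging-operator label selector is an assumption based on the default deployment, so confirm the pod labels before deleting anything:
$ oc get pods -n openshift-logging --show-labels
# The label selector below is assumed; adjust it to match your Cluster Logging Operator pod.
$ oc delete pod -n openshift-logging -l name=cluster-logging-operator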
Update the OpenShift Container Platform cluster from 4.4 to 4.5.
Make sure the cluster logging status is healthy:
All pods are ready.
The Elasticsearch cluster is healthy.
Back up your Elasticsearch and Kibana data.
If your internal Elasticsearch instance uses persistent volume claims (PVCs), the PVCs must contain a logging-cluster:elasticsearch label. Without the label, the garbage collection process removes those PVCs during the upgrade and the Elasticsearch Operator creates new PVCs.
If you are updating from an OpenShift Container Platform version prior to version 4.4.30, you must manually add the label to the Elasticsearch PVCs.
For example, you can use the following command to add a label to all the Elasticsearch PVCs:
$ oc label pvc --all -n openshift-logging logging-cluster=elasticsearch
In OpenShift Container Platform 4.4.30 and later, the Elasticsearch Operator automatically adds the label to the PVCs.
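Before you update, you can confirm that the label is present. A minimal check, assuming the default openshift-logging namespace:
$ oc get pvc -n openshift-logging --show-labels
# Or list only the PVCs that already carry the label; any Elasticsearch PVC missing
# from this output still needs to be labeled.
$ oc get pvc -n openshift-logging -l logging-cluster=elasticsearch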
Update the Elasticsearch Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-operators-redhat project.
Click the Elasticsearch Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 4.5 and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
The Elasticsearch Operator is shown as 4.5. For example:
Elasticsearch Operator
4.5.0-202007012112.p0 provided by Red Hat, Inc
Wait for the Status field to report Succeeded.
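If you prefer the CLI, you can change the update channel by patching the Subscription object instead of using the web console. This is a sketch; the elasticsearch-operator Subscription name is an assumption, so list the Subscriptions first and substitute the actual name:
$ oc get subscription -n openshift-operators-redhat
# The Subscription name below is assumed; replace it with the name returned by the previous command.
$ oc patch subscription elasticsearch-operator -n openshift-operators-redhat --type merge -p '{"spec":{"channel":"4.5"}}'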
Update the Cluster Logging Operator:
From the web console, click Operators → Installed Operators.
Select the openshift-logging project.
Click the Cluster Logging Operator.
Click Subscription → Channel.
In the Change Subscription Update Channel window, select 4.5 and click Save.
Wait for a few seconds, then click Operators → Installed Operators.
The Cluster Logging Operator is shown as 4.5. For example:
Cluster Logging
4.5.0-202007012112.p0 provided by Red Hat, Inc
Wait for the Status field to report Succeeded.
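The same CLI approach works for the Cluster Logging Operator Subscription in the openshift-logging project; the cluster-logging Subscription name is an assumption, so verify it with oc get subscription before patching:
# The Subscription name below is assumed; confirm it with: oc get subscription -n openshift-logging
$ oc patch subscription cluster-logging -n openshift-logging --type merge -p '{"spec":{"channel":"4.5"}}'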
Check the logging components:
Ensure that all Elasticsearch pods are in the Ready status:
$ oc get pod -n openshift-logging --selector component=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m
elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m
elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m
Ensure that the Elasticsearch cluster is healthy:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_cluster_health
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  ...
}
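If you want the full health response, you can query the cluster health API through the es_util helper; this assumes the es_util script is available in the Elasticsearch container, which is the case for the default internal log store image:
$ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- es_util --query=_cluster/health?pretty=true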
Ensure that the Elasticsearch cron jobs are created:
$ oc project openshift-logging
$ oc get cronjob
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
elasticsearch-im-app */15 * * * * False 0 <none> 56s
elasticsearch-im-audit */15 * * * * False 0 <none> 56s
elasticsearch-im-infra */15 * * * * False 0 <none> 56s
Verify that the log store is updated to 4.5 and the indices are green:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices
Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices.
Tue Jun 30 14:30:54 UTC 2020
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144
green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148
green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147
green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0
green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158
green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168
green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146
green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145
green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0
green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148
green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148
green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147
green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0
green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0
green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147
green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220
green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0
green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146
green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57
green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9
green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148
green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148
green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0
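To spot problem indices quickly, you can filter the same output. This is only a convenience sketch; apart from the date and header lines, anything this grep leaves behind is an index that is not green:
$ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices | grep -v green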
Verify that the log collector is updated to 4.5:
$ oc get ds fluentd -o json | grep fluentd-init
Verify that the output includes a fluentd-init container:
"containerName": "fluentd-init"
Verify that the log visualizer is updated to 4.5 by checking the Kibana custom resource (CR):
$ oc get kibana kibana -o json
Verify that the output includes a Kibana pod with the ready status:
[
{
"clusterCondition": {
"kibana-5fdd766ffd-nb2jj": [
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
},
{
"lastTransitionTime": "2020-06-30T14:11:07Z",
"reason": "ContainerCreating",
"status": "True",
"type": ""
}
]
},
"deployment": "kibana",
"pods": {
"failed": [],
"notReady": []
"ready": []
},
"replicaSets": [
"kibana-5fdd766ffd"
],
"replicas": 1
}
]
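As a quicker spot check, you can also list the Kibana pod itself and confirm that it reports Running and ready. The component=kibana label is an assumption based on the default Kibana deployment:
$ oc get pods -n openshift-logging -l component=kibana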
Verify the Curator is updated to 4.5:
$ oc get cronjob -o name
cronjob.batch/curator
cronjob.batch/elasticsearch-delete-app
cronjob.batch/elasticsearch-delete-audit
cronjob.batch/elasticsearch-delete-infra
cronjob.batch/elasticsearch-rollover-app
cronjob.batch/elasticsearch-rollover-audit
cronjob.batch/elasticsearch-rollover-infra
Verify that the output includes the elasticsearch-delete-* and elasticsearch-rollover-* cron jobs.
If you use Kibana, after the Elasticsearch Operator and Cluster Logging Operator are fully updated to 4.5, you must re-create your Kibana index patterns and visualizations. Because of changes in the security plug-in, the cluster logging upgrade does not automatically create index patterns.
An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern.
A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices.
If you can view the pods and logs in the default, kube-, and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:
$ oc auth can-i get pods/log -n <project>
yes
The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs.
Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster.
To define index patterns and create visualizations in Kibana:
In the OpenShift Container Platform console, click the Application Launcher and select logging.
Create your Kibana index patterns by clicking Management → Index Patterns → Create index pattern:
Each user must manually create index patterns when logging into Kibana the first time in order to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs.
Each admin user must create index patterns when logged into Kibana the first time for the app, infra, and audit indices using the @timestamp time field.
Create Kibana Visualizations from the new index patterns.