Logging 5.8

Logging is provided as an installable component, with a distinct release cycle from core OKD. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.
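
A channel change of this kind is made on the Operator's Subscription object. The following is a minimal sketch; the subscription name, namespace, and catalog source shown are typical defaults for the Red Hat OpenShift Logging Operator and might differ in your cluster:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging            # typical default; verify with: oc get subscriptions -n openshift-logging
      namespace: openshift-logging
    spec:
      channel: stable-5.8              # pin to the major.minor version you have installed, for example stable-5.7
      name: cluster-logging
      source: redhat-operators         # catalog source name can vary
      sourceNamespace: openshift-marketplace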

Logging 5.8.4

Bug fixes

  • Before this update, the developer console’s logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. (LOG-4905)

  • Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles are deployed based on the setting of the default log storage, just as all the roles were before. The read roles are deployed based on whether the logging console plugin is active. (LOG-4987)

  • Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name>. (LOG-5009)

  • Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output. An example output configuration appears after this list. (LOG-5021)

  • Before this update, the log_forwarder_input_info included application, infrastructure, and audit input metric points. With this update, http is also added as a metric point. (LOG-5043)
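
For the cloudwatch fix (LOG-5021), the following is a minimal sketch of an output that supplies the required secret. The output, pipeline, and secret names are hypothetical, and the secret is assumed to hold AWS credentials:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
      - name: cw                       # hypothetical output name
        type: cloudwatch
        cloudwatch:
          groupBy: logType
          region: us-east-1            # illustrative region
        secret:
          name: cw-secret              # hypothetical secret holding aws_access_key_id and aws_secret_access_key
      pipelines:
      - name: to-cloudwatch
        inputRefs:
        - application
        outputRefs:
        - cw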

Logging 5.8.3

Bug fixes

  • Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator did not automatically update the configuration when the name of the ConfigMap or its contents changed. With this update, the Loki Operator watches for changes to the ConfigMap and automatically updates the generated configuration. A sketch of the relevant LokiStack settings appears after this list. (LOG-4969)

  • Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. (LOG-4822)

  • Before this update, the Cluster Logging Operator generated collector configuration fields that caused outputs without a specified secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. (LOG-4962)

  • Before this update, the tls.insecureSkipVerify field of an output could not be set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. (LOG-4963)

  • Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. (LOG-4893)
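
For the custom S3 Certificate Authority fix (LOG-4969), the following is a minimal sketch of how a LokiStack can reference a CA ConfigMap. The object names, ConfigMap key, size, and storage class are hypothetical or illustrative:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      size: 1x.small                   # illustrative size
      storageClassName: gp3-csi        # hypothetical storage class
      storage:
        secret:
          name: logging-loki-s3        # hypothetical secret with the S3 credentials
          type: s3
        tls:
          caName: custom-ca-bundle     # ConfigMap holding the custom CA; changes are now picked up automatically
          caKey: ca-bundle.crt         # key inside the ConfigMap (hypothetical)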

Logging 5.8.2

Bug fixes

  • Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. (LOG-4890)

  • Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. (LOG-4947)

  • Before this update, the logging view plugin of the OKD web console did not allow for custom node placement and tolerations. With this update, you can define custom node placements and tolerations for the logging view plugin of the OKD web console. (LOG-4912)

Logging 5.8.1

Enhancements

Log Collection

  • With this update, while configuring Vector as a collector, you can configure the Red Hat OpenShift Logging Operator to use a token specified in the output secret in place of the token associated with the service account, as shown in the sketch after this list. (LOG-4780)

  • With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. (LOG-4828)
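
For the token enhancement (LOG-4780), the following is a minimal sketch of an output that references a secret. The assumption is that the secret contains a token key, which the collector then uses in place of the service account token; all names and the URL are hypothetical:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
      - name: loki-external            # hypothetical output name
        type: loki
        url: https://loki.example.com:3100
        secret:
          name: loki-bearer-token      # hypothetical secret assumed to contain a "token" key
      pipelines:
      - name: app-to-loki
        inputRefs:
        - application
        outputRefs:
        - loki-external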

Bug fixes

  • Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. (LOG-4452)

  • Before this update, Vector set the default log level incorrectly. With this update, the correct log level is set by improving the regular expression (regexp) used for log level detection. (LOG-4480)

  • Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. (LOG-4683)

  • Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. (LOG-4706)

  • Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder. With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. (LOG-4741)

  • Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. (LOG-4768)

  • Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4791)

  • Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt. (LOG-4852)

  • Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. (LOG-4811)

  • Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. (LOG-4815)

  • Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. (LOG-4836)

  • Before this update, when removing the inputs.receiver section in the ClusterLogForwarder, the HTTP input services and their associated secrets were not deleted. With this update, the HTTP input resources are deleted when they are not needed. (LOG-4612)

  • Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. (LOG-4821)

  • Before this update, changing a LogQL query by using controls such as time range or severity changed the label matcher operator that defined it as a regular expression. With this update, regular expression operators remain unchanged when updating the query. (LOG-4841)

Logging 5.8.0

Deprecation notice

In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OKD. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward.

Enhancements

Log Collection

  • With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OKD web console dashboard for Produced Logs. A minimal LogFileMetricExporter sketch appears after this list. (LOG-3819)

  • With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. (LOG-1343)

    To support log forwarders in namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. A minimal multi-namespace forwarder sketch appears after this list.

  • With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly performing containers from overloading the logging stack, and the output limits put a ceiling on the rate of logs shipped to a given data store. A minimal rate-limiting sketch appears after this list. (LOG-884)

  • With this update, you can configure the log collector to listen for HTTP connections and receive logs as an HTTP server, also known as a webhook, as shown in the sketch after this list. (LOG-4562)

  • With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. (LOG-3982)
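
For the LogFileMetricExporter change (LOG-3819), the following is a minimal sketch of the custom resource; the resource requests, limits, and tolerations are illustrative values only:

    apiVersion: logging.openshift.io/v1alpha1
    kind: LogFileMetricExporter
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      resources:                       # illustrative values
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      tolerations: []                  # optional; add tolerations if collector nodes are tainted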
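
For multiple ClusterLogForwarder instances (LOG-1343), the following is a minimal sketch of a forwarder in its own namespace. The namespace, service account, output name, and URL are hypothetical, and the referenced service account is assumed to be bound to the appropriate collect-* cluster roles:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: team-a-forwarder           # hypothetical name
      namespace: team-a-logging        # hypothetical namespace outside openshift-logging
    spec:
      serviceAccountName: team-a-collector   # hypothetical service account bound to the collect-application-logs cluster role
      outputs:
      - name: team-a-loki
        type: loki
        url: https://loki.team-a.example.com:3100
      pipelines:
      - name: app-logs
        inputRefs:
        - application
        outputRefs:
        - team-a-loki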
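
For flow control (LOG-884), the following is a minimal sketch that combines a per-container input limit with an output rate limit; the names, URL, and maxRecordsPerSecond values are illustrative:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      inputs:
      - name: limited-apps             # hypothetical input name
        application:
          containerLimit:
            maxRecordsPerSecond: 10    # cap per container (illustrative)
      outputs:
      - name: external-loki            # hypothetical output name
        type: loki
        url: https://loki.example.com:3100
        limit:
          maxRecordsPerSecond: 10000   # ceiling for this output (illustrative)
      pipelines:
      - name: rate-limited
        inputRefs:
        - limited-apps
        outputRefs:
        - external-loki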
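
For the HTTP receiver (LOG-4562), the following is a minimal sketch of an input that accepts Kubernetes API audit events over HTTP; the input and pipeline names and the port are illustrative:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      inputs:
      - name: http-receiver            # hypothetical input name
        receiver:
          type: http
          http:
            format: kubeAPIAudit       # payload format the receiver expects
            port: 8443                 # illustrative port
      pipelines:
      - name: http-to-default
        inputRefs:
        - http-receiver
        outputRefs:
        - default                      # forwards to the default log store defined by the ClusterLogging CR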

Log Storage

  • With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. An RBAC sketch appears after this list. (LOG-3841)

  • With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OKD cluster restarts by keeping ingestion and the query path available. (LOG-3839)

  • With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. (LOG-3840)

  • With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. (LOG-3266)

  • With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OKD clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day). A LokiStack sketch that combines this size with zone-aware replication appears after this list. (LOG-4329)

  • With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. (LOG-4327)
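
For namespace-scoped log access (LOG-3841), the following is a minimal RBAC sketch: a ClusterRole that permits reading application logs through the LokiStack gateway, bound to a user in a single namespace. The role, namespace, and user names are hypothetical:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: read-application-logs      # hypothetical role name
    rules:
    - apiGroups:
      - loki.grafana.com
      resources:
      - application
      resourceNames:
      - logs
      verbs:
      - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: allow-read-application-logs
      namespace: team-a                # hypothetical namespace; scopes access to this namespace's logs
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: read-application-logs
    subjects:
    - kind: User
      apiGroup: rbac.authorization.k8s.io
      name: developer                  # hypothetical user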
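
For the 1x.extra-small size (LOG-4329) and zone-aware replication (LOG-3266), the following is a minimal LokiStack sketch; the storage secret name, storage class, and replication values are hypothetical or illustrative:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      size: 1x.extra-small             # suitable for smaller ingestion volumes, up to about 100GB/day
      replication:
        factor: 2                      # illustrative replication factor
        zones:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
      storage:
        secret:
          name: logging-loki-s3        # hypothetical object storage secret
          type: s3
      storageClassName: gp3-csi        # hypothetical storage class
      tenants:
        mode: openshift-logging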

Log Console

  • With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. (LOG-3856)

  • With this update, OKD application owners can receive notifications for application log-based alerts on the OKD web console Developer perspective for OKD version 4.14 and later. (LOG-3548)

Known Issues

  • Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension.

    As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end.

  • Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplexed stream and immediately send an RST_STREAM frame to cancel it. This creates extra work for the server to set up and tear down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. (LOG-4609)

  • Currently, when using FluentD as the collector, the collector pod cannot start on an IPv6-enabled OKD cluster. The collector pod logs produce the following error: [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known". There is currently no workaround for this issue. (LOG-4706)

  • Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. (LOG-4709)

  • Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator. There is currently no workaround for this issue. (LOG-4403)

  • Currently, when deploying logging version 5.8 on a FIPS-enabled cluster with FluentD as the collector, the collector pods cannot start and are stuck in CrashLoopBackOff status. There is currently no workaround for this issue. (LOG-3933)