Support - Logging | Observability | Red Hat OpenShift Service on AWS

Only the configuration options described in this documentation are supported for logging.

Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across Red Hat OpenShift Service on AWS releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.

If you must perform configurations not described in the Red Hat OpenShift Service on AWS documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed.

Logging is provided as an installable component, with a distinct release cycle from the core Red Hat OpenShift Service on AWS. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries.

For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short-term storage, for a maximum of 30 days.
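
Within that supported window, retention is typically tuned on the LokiStack custom resource. The following is a minimal sketch, assuming a LokiStack named logging-loki in the openshift-logging namespace and the spec.limits.global.retention.days field; adjust the resource name and value to match your deployment:

    $ oc -n openshift-logging patch lokistack/logging-loki --type=merge \
        -p '{"spec":{"limits":{"global":{"retention":{"days":7}}}}}'  # logging-loki is an assumed name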

Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

Logging is not:

  • A high-scale log collection system

  • Security Information and Event Management (SIEM) compliant

  • Historical or long-term log retention or storage

  • A guaranteed log sink

  • Secure storage - audit logs are not stored by default

Supported API custom resource definitions

LokiStack development is ongoing. Not all APIs are currently supported.

Table 1. Loki API support states

CustomResourceDefinition (CRD) | ApiVersion                    | Support state
LokiStack                      | lokistack.loki.grafana.com/v1 | Supported in 5.5
RulerConfig                    | rulerconfig.loki.grafana/v1   | Supported in 5.7
AlertingRule                   | alertingrule.loki.grafana/v1  | Supported in 5.7
RecordingRule                  | recordingrule.loki.grafana/v1 | Supported in 5.7

Unsupported configurations

You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components:

  • The Elasticsearch custom resource (CR)

  • The Kibana deployment

  • The fluent.conf file

  • The Fluentd daemon set

You must set the OpenShift Elasticsearch Operator to the Unmanaged state to modify the Elasticsearch deployment files.
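
For the Red Hat OpenShift Logging Operator, a minimal sketch of this change, assuming the default ClusterLogging custom resource named instance in the openshift-logging namespace, is a merge patch of the managementState field; revert it by patching the value back to Managed:

    $ oc -n openshift-logging patch clusterlogging/instance --type=merge \
        -p '{"spec":{"managementState":"Unmanaged"}}'  # "instance" is the conventional CR name; adjust if yours differs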

Explicitly unsupported cases include:

  • Configuring default log rotation. You cannot modify the default log rotation configuration.

  • Configuring the collected log location. You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log.

  • Throttling log collection. You cannot throttle down the rate at which the logs are read in by the log collector.

  • Configuring the logging collector using environment variables. You cannot use environment variables to modify the log collector.

  • Configuring how the log collector normalizes logs. You cannot modify default log normalization.

Support policy for unmanaged Operators

The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates.

While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.

An Operator can be set to an unmanaged state using the following methods:

  • Individual Operator configuration

    Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource.

    Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and takes no action related to its component. Some Operators might not support this management state, because it might damage the cluster and require manual recovery.

    Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed.
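
    For example, the cluster-wide configuration pattern mentioned above can be sketched with the Cluster Samples Operator, which reads its managementState from the configs.samples.operator.openshift.io resource named cluster. This is illustrative only, because the resource that carries managementState differs for each Operator:

    $ oc patch configs.samples.operator.openshift.io/cluster --type=merge \
        -p '{"spec":{"managementState":"Unmanaged"}}'  # cluster-wide config resource; not how the Logging Operator is handled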

  • Cluster Version Operator (CVO) overrides

    The spec.overrides parameter can be added to the CVO’s configuration to allow administrators to provide a list of overrides to the CVO’s behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set:

    Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.

    Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
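
    As a sketch only, an override entry identifies the CVO-managed manifest by kind, group, name, and namespace; the component below is a hypothetical placeholder, so substitute the manifest you intend to unmanage:

    $ oc patch clusterversion/version --type=merge \
        -p '{"spec":{"overrides":[{"kind":"Deployment","group":"apps","name":"example-operator","namespace":"openshift-example-operator","unmanaged":true}]}}'  # example-operator is hypothetical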

Support exception for the Logging UI Plugin

Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on Red Hat OpenShift Service on AWS 4.14 or later. This support exception is temporary, because the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.

Collecting logging data for Red Hat Support

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components.

For prompt support, supply diagnostic information for both Red Hat OpenShift Service on AWS and logging.
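
For general cluster data, the default must-gather image can be run with no additional arguments; the logging-specific invocation is shown in the procedure that follows:

    $ oc adm must-gather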

Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data.

About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues.

For your logging, must-gather collects the following information:

  • Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level

  • Cluster-level resources, including nodes, roles, and role bindings at the cluster level

  • OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer

When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.

Collecting logging data

You can use the oc adm must-gather CLI command to collect information about logging.

Procedure

To collect logging information with must-gather:

  1. Navigate to the directory where you want to store the must-gather information.

  2. Run the oc adm must-gather command against the logging image:

    $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

    The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408.

  3. Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408

  4. Attach the compressed file to your support case on the Red Hat Customer Portal.