Getting started with logging

This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended.

As of logging version 5.5, you can choose between the Fluentd and Vector collector implementations, and between Elasticsearch and LokiStack as the log store. The logging documentation is in the process of being updated to reflect these underlying component changes.

The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.

Prerequisites
  • Log store preference: Elasticsearch or LokiStack

  • Collector implementation preference: Fluentd or Vector

  • Credentials for your log forwarding outputs

Procedure

As of logging version 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

  1. Install the Operator for the log store that you want to use (see the Subscription sketch after this procedure).

    • For Elasticsearch, install the OpenShift Elasticsearch Operator.

    • For LokiStack, install the Loki Operator.

      • Create a LokiStack custom resource (CR) instance (see the LokiStack example after this procedure).

  2. Install the Red Hat OpenShift Logging Operator.

  3. Create a ClusterLogging custom resource (CR) instance (see the ClusterLogging example after this procedure).

    1. Select your collector implementation.

      As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

  4. Create a ClusterLogForwarder custom resource (CR) instance (see the ClusterLogForwarder example after this procedure).

  5. Create a secret for the selected output pipeline (see the Secret example after this procedure).
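
The sketches below illustrate steps 1 through 5. They are minimal examples rather than definitive configurations: all names, namespaces, channels, URLs, and storage settings are placeholders or assumptions, and exact fields can vary between logging releases. For step 1, installing an Operator from the CLI uses an OLM Subscription; a sketch for the Loki Operator, with assumed channel and namespace values, might look like this:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: loki-operator
    namespace: openshift-operators-redhat  # assumed namespace; an OperatorGroup must already exist there
  spec:
    channel: stable                        # assumed channel; pick the channel that matches your logging release
    name: loki-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace

You can also install Operators from the OperatorHub page in the web console instead of applying a Subscription manually.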
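
For the LokiStack CR in step 1, a minimal sketch might look like the following. The object storage secret name, the size, and the storage class are assumptions for your environment, and the API version can differ between Loki Operator releases:

  apiVersion: loki.grafana.com/v1
  kind: LokiStack
  metadata:
    name: logging-loki              # hypothetical name
    namespace: openshift-logging
  spec:
    size: 1x.small                  # sizing is an assumption; choose a size that fits your cluster
    storage:
      secret:
        name: logging-loki-s3       # hypothetical secret holding object storage credentials
        type: s3
    storageClassName: gp2           # assumed storage class; use one available in your cluster
    tenants:
      mode: openshift-logging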
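
For step 3, a ClusterLogging CR that selects LokiStack as the log store and Vector as the collector might be sketched as follows. The collector fields changed across 5.x releases (newer releases accept spec.collection.type directly), so treat this layout as an assumption and check the reference for your release:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    logStore:
      type: lokistack
      lokistack:
        name: logging-loki          # must match the LokiStack CR name
    collection:
      logs:
        type: vector                # or fluentd
        vector: {}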
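
For step 4, a ClusterLogForwarder CR describes where logs are sent. The output name, URL, and secret name below are hypothetical; a sketch that forwards application logs to an external Elasticsearch instance might look like this:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    outputs:
      - name: remote-es             # hypothetical output name
        type: elasticsearch
        url: https://elasticsearch.example.com:9200  # hypothetical endpoint
        secret:
          name: remote-es-credentials  # created in step 5
    pipelines:
      - name: application-logs
        inputRefs:
          - application             # built-in input; audit and infrastructure are also available
        outputRefs:
          - remote-es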
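
For step 5, the secret referenced by the output lives in the openshift-logging namespace. The key names shown (username and password, or tls.crt, tls.key, and ca-bundle.crt for TLS) are the ones the forwarder expects; the values here are placeholders:

  apiVersion: v1
  kind: Secret
  metadata:
    name: remote-es-credentials     # must match the secret name in the ClusterLogForwarder output
    namespace: openshift-logging
  type: Opaque
  stringData:
    username: <username>            # placeholder
    password: <password>            # placeholder
    # For TLS client authentication, supply tls.crt, tls.key, and ca-bundle.crt keys instead.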