About logging 6.1

The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding.

Inputs and outputs

Inputs specify the sources of logs to be forwarded. The logging system provides built-in input types: application, receiver, infrastructure, and audit, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.
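For example, a custom input can narrow application log collection to particular namespaces or pod labels. The following is a minimal sketch of the inputs stanza; the input name, namespace, and label are hypothetical:

    spec:
      inputs:
      - name: my-app-logs # hypothetical custom input name
        type: application
        application:
          includes:
          - namespace: my-app # collect only from this namespace (assumed)
          selector:
            matchLabels:
              app: my-app # and only from pods carrying this label (assumed)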

Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
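For example, each output pairs a name with a type and a type-specific configuration block. A minimal sketch, assuming a hypothetical external Loki endpoint:

    spec:
      outputs:
      - name: external-loki # hypothetical output name
        type: loki
        loki:
          url: https://loki.example.com:3100 # assumed destination URL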

Receiver input type

The receiver input type enables the logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog.

The ReceiverSpec defines the configuration for a receiver input.
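For example, a receiver input that accepts logs over HTTP might look like the following sketch; the input name, port, and format are illustrative:

    spec:
      inputs:
      - name: http-receiver # hypothetical receiver input name
        type: receiver
        receiver:
          type: http
          port: 8443 # assumed listening port
          http:
            format: kubeAPIAudit # format of the incoming payload (assumed)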

Pipelines and filters

Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.
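For example, a pipeline can route application logs through a drop filter before forwarding them; the filter and pipeline names are hypothetical, and an output named default-lokistack is assumed to exist:

    spec:
      filters:
      - name: drop-debug-logs # hypothetical filter that drops debug-level records
        type: drop
        drop:
        - test:
          - field: .level
            matches: debug
      pipelines:
      - name: app-pipeline # hypothetical pipeline name
        inputRefs:
        - application
        filterRefs:
        - drop-debug-logs # applied before records reach the output
        outputRefs:
        - default-lokistack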

Operator behavior

The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource (see the sketch after this list):

  • When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec.

  • When set to Unmanaged, the operator does not take any action, allowing you to manually manage the logging components.
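For example, you could pause reconciliation while debugging by setting the field directly; a minimal sketch:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: collector
      namespace: openshift-logging
    spec:
      managementState: Unmanaged # operator stops reconciling; Managed is the default
      serviceAccount:
        name: collector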

Validation

The logging system includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.

Quick start

OpenShift Logging supports two data models:

  • ViaQ (General Availability)

  • OpenTelemetry (Technology Preview)

You can select either of these data models based on your requirements by configuring the lokiStack.dataModel field in the ClusterLogForwarder CR. ViaQ is the default data model when forwarding logs to LokiStack.

In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.

Quick start with ViaQ

To use the default ViaQ data model, follow these steps:

Prerequisites
  • Cluster administrator permissions

Procedure
  1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.

  2. Create a LokiStack custom resource (CR) in the openshift-logging namespace:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      managementState: Managed
      size: 1x.extra-small
      storage:
        schemas:
        - effectiveDate: '2024-10-01'
          version: v13
        secret:
          name: logging-loki-s3
          type: s3
      storageClassName: gp3-csi
      tenants:
        mode: openshift-logging

    Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
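    For example, for AWS S3 the secret typically carries the bucket name, endpoint, region, and credentials. The following sketch uses placeholder values and the key names the Loki Operator expects for S3 storage:

    $ oc create secret generic logging-loki-s3 \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<aws_bucket_endpoint>" \
      --from-literal=region="<aws_region_of_your_bucket>" \
      --from-literal=access_key_id="<aws_access_key_id>" \
      --from-literal=access_key_secret="<aws_access_key_secret>" \
      -n openshift-logging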

  3. Create a service account for the collector:

    $ oc create sa collector -n openshift-logging
  4. Allow the collector’s service account to write data to the LokiStack CR:

    $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

    The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.

  5. Allow the collector’s service account to collect logs:

    $ oc project openshift-logging
    $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
    $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector

    The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.
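    For example, to also collect audit logs, include audit in the pipeline's input references when you create the ClusterLogForwarder CR in a later step; only the relevant stanza is shown:

    spec:
      pipelines:
      - name: default-logstore
        inputRefs:
        - application
        - infrastructure
        - audit # requires the collect-audit-logs role binding above
        outputRefs:
        - default-lokistack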

  6. Create a UIPlugin CR to enable the Log section in the Observe tab:

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: logging
    spec:
      type: logging
      logging:
        lokiStack:
          name: logging-loki
  7. Create a ClusterLogForwarder CR to configure log forwarding:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: collector
      namespace: openshift-logging
    spec:
      serviceAccount:
        name: collector
      outputs:
      - name: default-lokistack
        type: lokiStack
        lokiStack:
          authentication:
            token:
              from: serviceAccount
          target:
            name: logging-loki
            namespace: openshift-logging
        tls:
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
      pipelines:
      - name: default-logstore
        inputRefs:
        - application
        - infrastructure
        outputRefs:
        - default-lokistack

    The dataModel field is optional and left unset (dataModel: "") by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes.
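    For example, to pin the current behavior explicitly, add the field to the lokiStack output in the CR above; only the changed stanza is shown:

    outputs:
    - name: default-lokistack
      type: lokiStack
      lokiStack:
        dataModel: ViaQ # explicit, so behavior is unchanged if the default flips to OpenTelemetry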

Verification
  • Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console.
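  • As an additional check (an assumption, not part of the documented procedure), confirm that the collector pods are running:

    $ oc get pods -n openshift-logging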

Quick start with OpenTelemetry

The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:

Prerequisites
  • Cluster administrator permissions

Procedure
  1. Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub.

  2. Create a LokiStack custom resource (CR) in the openshift-logging namespace:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      managementState: Managed
      size: 1x.extra-small
      storage:
        schemas:
        - effectiveDate: '2024-10-01'
          version: v13
        secret:
          name: logging-loki-s3
          type: s3
      storageClassName: gp3-csi
      tenants:
        mode: openshift-logging

    Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".

  3. Create a service account for the collector:

    $ oc create sa collector -n openshift-logging
  4. Allow the collector’s service account to write data to the LokiStack CR:

    $ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging

    The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.

  5. Allow the collector’s service account to collect logs:

    $ oc project openshift-logging
    $ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
    $ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
    $ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector

    The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment.

  6. Create a UIPlugin CR to enable the Log section in the Observe tab:

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: logging
    spec:
      type: logging
      logging:
        lokiStack:
          name: logging-loki
  7. Create a ClusterLogForwarder CR to configure log forwarding:

    apiVersion: observability.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: collector
      namespace: openshift-logging
      annotations:
        observability.openshift.io/tech-preview-otlp-output: "enabled" (1)
    spec:
      serviceAccount:
        name: collector
      outputs:
      - name: loki-otlp
        type: lokiStack (2)
        lokiStack:
          target:
            name: logging-loki
            namespace: openshift-logging
          dataModel: Otel (3)
          authentication:
            token:
              from: serviceAccount
        tls:
          ca:
            key: service-ca.crt
            configMapName: openshift-service-ca.crt
      pipelines:
      - name: my-pipeline
        inputRefs:
        - application
        - infrastructure
        outputRefs:
        - loki-otlp
    1 Use the annotation to enable the Otel data model, which is a Technology Preview feature.
    2 Define the output type as lokiStack.
    3 Specify the OpenTelemetry data model.

    You cannot use lokiStack.labelKeys when dataModel is Otel. To achieve similar functionality when dataModel is Otel, refer to "Configuring LokiStack for OTLP data ingestion".

Verification
  • Verify that OTLP is functioning correctly by going to Observe → OpenShift Logging → LokiStack → Writes in the OpenShift web console, and checking Distributor - Structured Metadata.