Forward logs to third party systems

By default, OpenShift Container Platform cluster logging sends logs to the default internal Elasticsearch logstore, defined in the ClusterLogging custom resource.

You can configure cluster logging to send logs to destinations outside of your OpenShift Container Platform cluster instead of the default Elasticsearch logstore using the following methods:

  • Sending logs using the Fluentd forward protocol. You can create a configmap to use the Fluentd forward protocol to securely send logs to an external logging aggregator that accepts the Fluentd forward protocol.

  • Sending logs using syslog. You can create a configmap to use the syslog protocol to send logs to an external syslog (RFC 3164) server.

Alternatively, you can use the Log Forwarding API, currently in Technology Preview. The Log Forwarding API, which is easier to configure than the Fluentd protocol and syslog, exposes configuration for sending logs to the internal Elasticsearch logstore and to external Fluentd log aggregation solutions.

The Log Forwarding API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

The methods for forwarding logs using a configmap are deprecated and will be replaced by the Log Forwarding API in a future release.

Forwarding logs using the Fluentd forward protocol

You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator, instead of the default Elasticsearch logstore. On the OpenShift Container Platform cluster, you use the Fluentd forward protocol to send logs to a server configured to accept the protocol. You are responsible for configuring the external log aggregator to receive the logs from OpenShift Container Platform.

This method for forwarding logs is deprecated in OpenShift Container Platform and will be replaced by the Log Forwarding API in a future release.

To configure OpenShift Container Platform to send logs using the Fluentd forward protocol, create a configmap called secure-forward in the openshift-logging namespace that points to an external log aggregator.

Starting with OpenShift Container Platform 4.3, the process for using the Fluentd forward protocol has changed. You now need to create a configmap, as described below.

Additionally, you can add any certificates required by your configuration to a secret named secure-forward that will be mounted to the Fluentd Pods.
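For example, assuming the receiver's CA certificate is saved locally as ca-for-fluentd-receiver/ca.crt and the receiver's shared key is fluent-receiver (both hypothetical values), the following sketch creates that secret. The key name ca-bundle.crt matches the /etc/ocp-forward/ca-bundle.crt path that the sample configuration below references:

$ oc create secret generic secure-forward -n openshift-logging \
    --from-file=ca-bundle.crt=ca-for-fluentd-receiver/ca.crt \
    --from-literal=shared_key=fluent-receiver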

Sample secure-forward.conf
<store>
  @type forward
  <security>
    self_hostname ${hostname} # ${hostname} is a placeholder.
    shared_key "fluent-receiver"
  </security>
  transport tls
  tls_verify_hostname false           # Set false to ignore server cert hostname.

  tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
  <buffer>
    @type file
    path '/var/lib/fluentd/secureforwardlegacy'
    queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
    chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
    flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
    flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
    flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
    retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
    retry_forever true
    # the systemd journald 0.0.8 input plugin will just throw away records if the buffer
    # queue limit is hit - 'block' will halt further reads and keep retrying to flush the
    # buffer to the remote - default is 'exception' because in_tail handles that case
    overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'exception'}"
  </buffer>
  <server>
    host fluent-receiver.openshift-logging.svc  # or IP
    port 24224
  </server>
</store>
Sample secure-forward configmap based on the configuration
apiVersion: v1
data:
 secure-forward.conf: "<store>
     \ @type forward
     \ <security>
     \   self_hostname ${hostname} # ${hostname} is a placeholder.
     \   shared_key \"fluent-receiver\"
     \ </security>
     \ transport tls
     \ tls_verify_hostname false           # Set false to ignore server cert hostname.
     \ tls_cert_path '/etc/ocp-forward/ca-bundle.crt'
     \ <buffer>
     \   @type file
     \   path '/var/lib/fluentd/secureforwardlegacy'
     \   queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }\"
     \   chunk_limit_size \"#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }\"
     \   flush_interval \"#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}\"
     \   flush_at_shutdown \"#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}\"
     \   flush_thread_count \"#{ENV['FLUSH_THREAD_COUNT'] || 2}\"
     \   retry_max_interval \"#{ENV['FORWARD_RETRY_WAIT'] || '300'}\"
     \   retry_forever true
     \   # the systemd journald 0.0.8 input plugin will just throw away records if the buffer
     \   # queue limit is hit - 'block' will halt further reads and keep retrying to flush the
     \   # buffer to the remote - default is 'exception' because in_tail handles that case
     \   overflow_action \"#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'exception'}\"
     \ </buffer>
     \ <server>
     \   host fluent-receiver.openshift-logging.svc  # or IP
     \   port 24224
     \ </server>
     </store>"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-01-15T18:56:04Z"
  name: secure-forward
  namespace: openshift-logging
  resourceVersion: "19148"
  selfLink: /api/v1/namespaces/openshift-logging/configmaps/secure-forward
  uid: 6fd83202-93ab-d851b1d0f3e8
Procedure

To configure OpenShift Container Platform to forward logs using the Fluentd forward protocol:

  1. Create a configuration file named secure-forward.conf for the forward parameters:

    1. Configure the secrets and TLS information:

       <store>
        @type forward
        <security>
          self_hostname ${hostname} (1)
          shared_key <SECRET_STRING> (2)
        </security>

        transport tls (3)

        tls_verify_hostname true (4)
        tls_cert_path <path_to_file> (5)
      1 Specify the default value of the auto-generated certificate common name (CN).
      2 Specify the shared key between nodes.
      3 Specify tls to enable TLS validation.
      4 Set to true to verify the server certificate hostname. Set to false to ignore the server certificate hostname.
      5 Specify the path to the private CA certificate file, for example /etc/ocp-forward/ca_cert.pem.

      To use mutual TLS (mTLS), see the Fluentd documentation for information about client certificate and key parameters and other settings.
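      For example, a minimal sketch of the TLS parameters extended for mutual TLS, assuming the Fluentd out_forward plugin and hypothetical client certificate and key files distributed through the secure-forward secret:

        transport tls
        tls_cert_path /etc/ocp-forward/ca-bundle.crt
        # Hypothetical client certificate and key for mTLS:
        tls_client_cert_path /etc/ocp-forward/client.crt
        tls_client_private_key_path /etc/ocp-forward/client.key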

    2. Configure the name, host, and port for your external Fluentd server:

        <server>
          name (1)
          host (2)
          hostlabel (3)
          port (4)
        </server>
        <server> (5)
          name
          host
        </server>
      1 Optionally, enter a name for this server.
      2 Specify the host name or IP of the server.
      3 Specify the host label of the server.
      4 Specify the port of the server.
      5 Optionally, add additional servers. If you specify two or more servers, forward uses these server nodes in a round-robin order.

      For example:

        <server>
          name externalserver1
          host 192.168.1.1
          hostlabel externalserver1.example.com
          port 24224
        </server>
        <server>
          name externalserver2
          host externalserver2.example.com
          port 24224
        </server>
        </store>
  2. Create a configmap named secure-forward in the openshift-logging namespace from the configuration file:

    $ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
  3. Optional: Import any secrets required for the receiver:

    $ oc create secret generic secure-forward --from-file=<arbitrary-name-of-key1>=cert_file_from_fluentd_receiver --from-literal=shared_key=value_from_fluentd_receiver

    For example:

    $ oc create secret generic secure-forward --from-file=ca-bundle.crt=ca-for-fluentd-receiver/ca.crt --from-literal=shared_key=fluentd-receiver
  4. Refresh the fluentd Pods to apply the secure-forward secret and secure-forward configmap:

    $ oc delete pod --selector logging-infra=fluentd
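    Optionally, you can watch the replacement pods start; for example:

    $ oc get pods --selector logging-infra=fluentd -n openshift-logging -w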
  5. Configure the external log aggregator to accept messages securely from OpenShift Container Platform.

Forwarding logs using the syslog protocol

You can use the syslog protocol to send a copy of your logs to an external syslog server, instead of the default Elasticsearch logstore. Note the following about this syslog protocol:

  • It uses the syslog protocol (RFC 3164), not RFC 5424.

  • It does not support TLS and is therefore not secure.

  • It does not provide Kubernetes metadata, systemd data, or other metadata.

This method for forwarding logs is deprecated in OpenShift Container Platform and will be replaced by the Log Forwarding API in a future release.

There are two versions of the syslog protocol:

  • out_syslog: The non-buffered implementation, which communicates through UDP, does not buffer data and writes out results immediately.

  • out_syslog_buffered: The buffered implementation, which communicates through TCP, buffers data into chunks.

To configure log forwarding using the syslog protocol, create a configuration file, called syslog.conf, with the information needed to forward the logs. Then use that file to create a configmap called syslog in the openshift-logging namespace, which OpenShift Container Platform uses when forwarding the logs. You are responsible for configuring your syslog server to receive the logs from OpenShift Container Platform.

Starting with OpenShift Container Platform 4.3, the process for using the syslog protocol has changed. You now need to create a configmap, as described below.

You can forward logs to multiple syslog servers by specifying separate <store> stanzas in the configuration file.
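For example, a sketch of a syslog.conf with two <store> stanzas, one per hypothetical syslog server:

<store>
 @type syslog_buffered
 remote_syslog syslogserver1.example.com
 port 514
 hostname ${hostname}
 facility local0
 severity info
</store>
<store>
 @type syslog_buffered
 remote_syslog syslogserver2.example.com
 port 514
 hostname ${hostname}
 facility local0
 severity info
</store>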

Sample syslog.conf
<store>
@type syslog_buffered (1)
remote_syslog rsyslogserver.openshift-logging.svc.cluster.local (2)
port 514 (3)
hostname ${hostname} (4)
remove_tag_prefix tag (5)
tag_key ident,systemd.u.SYSLOG_IDENTIFIER (6)
facility local0 (7)
severity info (8)
use_record true (9)
payload_key message (10)
</store>
1 The syslog protocol, either: syslog or syslog_buffered.
2 The fully qualified domain name (FQDN) or IP address of the syslog server.
3 The port number to connect on. Defaults to 514.
4 The name of the syslog server.
5 Removes the prefix from the tag. Defaults to '' (empty).
6 The field to set the syslog key.
7 The syslog log facility or source.
8 The syslog log severity.
9 Determines whether to use the severity and facility from the record if available.
10 Optional. The key to set the payload of the syslog message. Defaults to message.

Configuring the payload_key parameter prevents other parameters from being forwarded to the syslog server.

Sample syslog configmap based on the sample syslog.conf
kind: ConfigMap
apiVersion: v1
metadata:
  name: syslog
  namespace: openshift-logging
data:
  syslog.conf: |
    <store>
     @type syslog_buffered
     remote_syslog syslogserver.openshift-logging.svc.cluster.local
     port 514
     hostname ${hostname}
     remove_tag_prefix tag
     tag_key ident,systemd.u.SYSLOG_IDENTIFIER
     facility local0
     severity info
     use_record true
     payload_key message
    </store>
Procedure

To configure OpenShift Container Platform to forward logs using the syslog protocol:

  1. Create a configuration file named syslog.conf that contains the following parameters within the <store> stanza:

    1. Specify the syslog protocol type:

      @type syslog_buffered (1)
      1 Specify the protocol to use, either: syslog or syslog_buffered.
    2. Configure the name, host, and port for your external syslog server:

      remote_syslog <remote> (1)
      port <number> (2)
      hostname <name> (3)
      1 Specify the FQDN or IP address of the syslog server.
      2 Specify the port of the syslog server.
      3 Specify a name for this syslog server.

      For example:

      remote_syslog syslogserver.openshift-logging.svc.cluster.local
      port 514
      hostname fluentd-server
    3. Configure the other syslog variables as needed:

      remove_tag_prefix (1)
      tag_key <key> (2)
      facility <value>  (3)
      severity <value>  (4)
      use_record <value> (5)
      payload_key message (6)
      1 Add this parameter to remove the tag field from the syslog prefix.
      2 Specify the field to set the syslog key.
      3 Specify the syslog log facility or source. For values, see RFC 3164.
      4 Specify the syslog log severity. For values, see RFC 3164.
      5 Specify true to use the severity and facility from the record if available. If true, the container_name, namespace_name, and pod_name are included in the output content.
      6 Specify the key to set the payload of the syslog message. Defaults to message.

      For example:

      facility local0
      severity info

      The configuration file appears similar to the following:

      <store>
      @type syslog_buffered
      remote_syslog syslogserver.openshift-logging.svc.cluster.local
      port 514
      hostname ${hostname}
      tag_key ident,systemd.u.SYSLOG_IDENTIFIER
      facility local0
      severity info
      use_record false
      </store>
  2. Create a configmap named syslog in the openshift-logging namespace from the configuration file:

    $ oc create configmap syslog --from-file=syslog.conf -n openshift-logging

    The Cluster Logging Operator redeploys the Fluentd Pods. If the Pods do not redeploy, you can delete the Fluentd Pods to force them to redeploy.

    $ oc delete pod --selector logging-infra=fluentd

Forwarding logs using the Log Forwarding API

The Log Forwarding API enables you to configure custom pipelines to send container and node logs to specific endpoints within or outside of your cluster. You can send logs by type to the internal OpenShift Container Platform Elasticsearch instance and to remote destinations not managed by OpenShift Container Platform cluster logging, such as an existing logging service, an external Elasticsearch cluster, external log aggregation solutions, or a Security Information and Event Management (SIEM) system.

The Log Forwarding API is currently a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.

Using the Log Forwarding API is optional. If you want to forward logs to only the internal OpenShift Container Platform Elasticsearch instance, do not configure the Log Forwarding API.

Understanding the Log Forwarding API

Forwarding cluster logs using the Log Forwarding API requires a combination of outputs and pipelines to send logs to specific endpoints inside and outside of your OpenShift Container Platform cluster.

If you want to use only the default internal OpenShift Container Platform Elasticsearch logstore, do not configure the Log Forwarding feature.

By default, the Cluster Logging Operator sends logs to the default internal Elasticsearch logstore, as defined in the ClusterLogging custom resource. To use the Log Forwarding feature, you create a custom logforwarding configuration file to send logs to endpoints you specify.

An output is the destination for log data, and a pipeline defines simple routing from one source to one or more outputs.

An output can be either:

  • elasticsearch to forward logs to an external Elasticsearch v5.x cluster and/or the internal OpenShift Container Platform Elasticsearch instance.

  • forward to forward logs to an external log aggregation solution. This option uses the Fluentd forward protocols.

The endpoint must be a server name or FQDN, not an IP address, if the cluster-wide proxy using the CIDR annotation is enabled.

A pipeline associates the source of the data to an output. The source of the data is one of the following:

  • logs.app - Container logs generated by user applications running in the cluster, except infrastructure container applications.

  • logs.infra - Logs generated by infrastructure components running in the cluster and OpenShift Container Platform nodes, such as journal logs. Infrastructure components are pods that run in the openshift*, kube*, or default projects.

  • logs.audit - Logs generated by the node audit system (auditd), which are stored in the /var/log/audit/audit.log file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

Note the following:

  • The internal OpenShift Container Platform Elasticsearch instance does not provide secure storage for audit logs. We recommend that you ensure the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Container Platform cluster logging does not comply with those regulations.

  • An output supports TLS communication using a secret. Secrets must have the keys tls.crt, tls.key, and ca-bundle.crt, which point to the certificates they represent. For secure use of the forward protocol, the secret must also have the key shared_key. See the example after this list.

  • You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port opening, or global proxy configuration.
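For example, the following sketch creates a secret for a TLS-secured forward output in the openshift-logging project, assuming hypothetical local files client.crt, client.key, and ca.crt and a shared key agreed with the receiver. The secret name secureforward matches the secret referenced in the sample below:

$ oc create secret generic secureforward -n openshift-logging \
    --from-file=tls.crt=client.crt \
    --from-file=tls.key=client.key \
    --from-file=ca-bundle.crt=ca.crt \
    --from-literal=shared_key=fluentd-receiver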

The following example creates three outputs:

  • the internal OpenShift Container Platform Elasticsearch instance,

  • an unsecured externally-managed Elasticsearch instance,

  • a secured external log aggregator using the forward protocols.

Three pipelines send:

  • the application logs to the internal OpenShift Container Platform Elasticsearch,

  • the infrastructure logs to an external Elasticsearch instance,

  • the audit logs to the secured device over the forward protocols.

Sample log forwarding outputs and pipelines
apiVersion: "logging.openshift.io/v1alpha1"
kind: "LogForwarding"
metadata:
  name: instance (1)
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true (2)
  outputs: (3)
   - name: elasticsearch (4)
     type: "elasticsearch"  (5)
     endpoint: elasticsearch.openshift-logging.svc:9200 (6)
     secret: (7)
        name: fluentd
   - name: elasticsearch-insecure
     type: "elasticsearch"
     endpoint: elasticsearch-insecure.svc.messaging.cluster.local
     insecure: true (8)
   - name: secureforward-offcluster
     type: "forward"
     endpoint: https://secureforward.offcluster.com:24224
     secret:
        name: secureforward
  pipelines: (9)
   - name: container-logs (10)
     inputSource: logs.app (11)
     outputRefs: (12)
     - elasticsearch
     - secureforward-offcluster
   - name: infra-logs
     inputSource: logs.infra
     outputRefs:
     - elasticsearch-insecure
   - name: audit-logs
     inputSource: logs.audit
     outputRefs:
     - secureforward-offcluster
1 The name of the log forwarding CR must be instance.
2 Parameter to disable the default log forwarding behavior.
3 Configuration for the outputs.
4 A name to describe the output.
5 The type of output, either elasticsearch or forward.
6 Enter the endpoint, either the server name, FQDN, or IP address. If the cluster-wide proxy using the CIDR annotation is enabled, the endpoint must be a server name or FQDN, not an IP address. For the internal OpenShift Container Platform Elasticsearch instance, specify elasticsearch.openshift-logging.svc:9200.
7 Optional name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.
8 Optional setting if the endpoint does not use a secret, resulting in insecure communication.
9 Configuration for the pipelines.
10 A name to describe the pipeline.
11 The data source: logs.app, logs.infra, or logs.audit.
12 The name of one or more outputs configured in the CR.

Fluentd log handling when the external log aggregator is unavailable

If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. OpenShift Container Platform rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.

Enabling the Log Forwarding API

You must enable the Log Forwarding API before you can forward logs using the API.

Procedure

To enable the Log Forwarding API:

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
  2. Add the clusterlogging.openshift.io/logforwardingtechpreview annotation and set it to enabled:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      annotations:
        clusterlogging.openshift.io/logforwardingtechpreview: enabled (1)
      name: "instance"
      namespace: "openshift-logging"
    spec:
    
    ...
    
      collection: (2)
        logs:
          type: "fluentd"
          fluentd: {}
    1 Enables and disables the Log Forwarding API. Set to enabled to use log forwarding. To use only the OpenShift Container Platform Elasticsearch instance, set to disabled or do not add the annotation.
    2 The spec.collection block must be defined to use Fluentd in the ClusterLogging CR.
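    Alternatively to editing the CR, the following sketch applies the same annotation with oc annotate:

    $ oc annotate clusterlogging instance clusterlogging.openshift.io/logforwardingtechpreview=enabled -n openshift-logging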

Configuring log forwarding using the Log Forwarding API

To configure log forwarding, edit the ClusterLogging custom resource (CR) to add the clusterlogging.openshift.io/logforwardingtechpreview: enabled annotation, then create a LogForwarding custom resource that specifies the outputs and pipelines and enables log forwarding.

If you enable log forwarding, you should define a pipeline for all three source types: logs.app, logs.infra, and logs.audit. The logs from any undefined source type are dropped. For example, if you specify a pipeline for the logs.app and logs.audit types, but do not specify a pipeline for the logs.infra type, logs.infra logs are dropped.

Procedure

To configure log forwarding using the API:

  1. Create a Log Forwarding CR YAML file similar to the following:

    apiVersion: "logging.openshift.io/v1alpha1"
    kind: "LogForwarding"
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      disableDefaultForwarding: true (3)
      outputs: (4)
       - name: elasticsearch
         type: "elasticsearch"
         endpoint: elasticsearch.openshift-logging.svc:9200
         secret:
            name: fluentd
       - name: elasticsearch-insecure
         type: "elasticsearch"
         endpoint: elasticsearch-insecure.svc.messaging.cluster.local
         insecure: true
       - name: secureforward-offcluster
         type: "forward"
         endpoint: https://secureforward.offcluster.com:24224
         secret:
            name: secureforward
      pipelines: (5)
       - name: container-logs
         inputSource: logs.app
         outputRefs:
         - elasticsearch
         - secureforward-offcluster
       - name: infra-logs
         inputSource: logs.infra
         outputRefs:
         - elasticsearch-insecure
       - name: audit-logs
         inputSource: logs.audit
         outputRefs:
         - secureforward-offcluster
    1 The name of the log forwarding CR must be instance.
    2 The namespace for the log forwarding CR must be openshift-logging.
    3 Set to true to disable the default log forwarding behavior.
    4 Add one or more endpoints:
    • Specify the type of output, either elasticsearch or forward.

    • Enter a name for the output.

    • Enter the endpoint, either the server name, FQDN, or IP address. If the cluster-wide proxy using the CIDR annotation is enabled, the endpoint must be a server name or FQDN, not an IP address. For the internal OpenShift Container Platform Elasticsearch instance, specify elasticsearch.openshift-logging.svc:9200.

    • Optional: Enter the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project.

    • Specify insecure: true if the endpoint does not use a secret, resulting in insecure communication.

    5 Add one or more pipelines:
    • Enter a name for the pipeline.

    • Specify the source type: logs.app, logs.infra, or logs.audit.

    • Specify the name of one or more outputs configured in the CR.

      If you set disableDefaultForwarding: true, you must configure a pipeline and output for all three types of logs: application, infrastructure, and audit. If you do not specify a pipeline and output for a log type, those logs are not stored and will be lost.

  2. Create the CR object:

    $ oc create -f <file-name>.yaml
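    Optionally, verify that the object was created and inspect it; for example:

    $ oc get logforwarding instance -n openshift-logging -o yaml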

Example log forwarding custom resources

A typical Log Forwarding configuration would be similar to the following examples.

The following Log Forwarding custom resource sends all logs to a secured external Elasticsearch logstore:

Sample custom resource to forward to an Elasticsearch logstore
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: user-created-es
      type: elasticsearch
      endpoint: 'elasticsearch-server.openshift-logging.svc:9200'
      secret:
        name: piplinesecret
  pipelines:
    - name: app-pipeline
      inputSource: logs.app
      outputRefs:
        - user-created-es
    - name: infra-pipeline
      inputSource: logs.infra
      outputRefs:
        - user-created-es
    - name: audit-pipeline
      inputSource: logs.audit
      outputRefs:
        - user-created-es

The following Log Forwarding custom resource sends all logs to a secured Fluentd instance using the Fluentd forward protocol.

Sample custom resource to use the forward protocol
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: fluentd-created-by-user
      type: forward
      endpoint: 'fluentdserver.openshift-logging.svc:24224'
      secret:
        name: fluentdserver
  pipelines:
    - name: app-pipeline
      inputSource: logs.app
      outputRefs:
        - fluentd-created-by-user
    - name: infra-pipeline
      inputSource: logs.infra
      outputRefs:
        - fluentd-created-by-user
    - name: clo-default-audit-pipeline
      inputSource: logs.audit
      outputRefs:
        - fluentd-created-by-user

Disabling the Log Forwarding API

To disable the Log Forwarding API and stop forwarding logs to the specified endpoints, remove the metadata.annotations.clusterlogging.openshift.io/logforwardingtechpreview: enabled annotation from the ClusterLogging CR and delete the Log Forwarding CR. The container and node logs are then forwarded to the internal OpenShift Container Platform Elasticsearch instance.

Setting disableDefaultForwarding=false prevents cluster logging from sending logs to the specified endpoints and to the default internal OpenShift Container Platform Elasticsearch instance.

Procedure

To disable the Log Forwarding API:

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
  2. Remove the clusterlogging.openshift.io/logforwardingtechpreview annotation:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      annotations:
        clusterlogging.openshift.io/logforwardingtechpreview: enabled (1)
      name: "instance"
      namespace: "openshift-logging"
    ...
    1 Remove this annotation.
  3. Delete the Log Forwarding Custom Resource:

    $ oc delete LogForwarding instance -n openshift-logging