Sending logs to external devices

You can send Elasticsearch logs to external devices, such as an externally-hosted Elasticsearch instance or an external syslog server. You can also configure Fluentd to send logs to an external log aggregator.

You must set cluster logging to the Unmanaged state before performing these configurations, unless otherwise noted. For more information, see Changing the cluster logging management state.
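As a sketch of that prerequisite step, assuming the default ClusterLogging custom resource named instance in the openshift-logging project, you can edit the resource and set the management state:

$ oc edit ClusterLogging instance -n openshift-logging

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Unmanaged"  # prevents the operator from reverting manual changes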

Configuring Fluentd to send logs to an external Elasticsearch instance

Fluentd sends logs to the value of the ES_HOST, ES_PORT, OPS_HOST, and OPS_PORT environment variables of the Elasticsearch deployment configuration. The application logs are directed to the ES_HOST destination, and operations logs to OPS_HOST.

Sending logs directly to an AWS Elasticsearch instance is not supported. Use Fluentd Secure Forward to direct logs to an instance of Fluentd that you control and that is configured with the fluent-plugin-aws-elasticsearch-service plug-in.
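For illustration, a minimal sketch of a match section on that intermediary Fluentd instance, assuming the fluent-plugin-aws-elasticsearch-service plug-in is installed; the endpoint URL and region are placeholders:

<match **>
  @type aws-elasticsearch-service
  logstash_format true
  <endpoint>
    url https://search-example-domain.us-east-1.es.amazonaws.com  # placeholder AWS Elasticsearch endpoint
    region us-east-1                                              # placeholder region
    # access_key_id and secret_access_key can be set here if an
    # IAM instance profile is not available
  </endpoint>
</match>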

Prerequisites
  • Cluster logging and Elasticsearch must be installed.

  • Set cluster logging to the Unmanaged state.

Procedure

To direct logs to a specific Elasticsearch instance:

  1. Edit the fluentd DaemonSet in the openshift-logging project:

    $ oc edit ds/fluentd
    
    spec:
      template:
        spec:
          containers:
          - name: fluentd
            env:
            - name: ES_HOST
              value: elasticsearch
            - name: ES_PORT
              value: '9200'
            - name: ES_CLIENT_CERT
              value: /etc/fluent/keys/app-cert
            - name: ES_CLIENT_KEY
              value: /etc/fluent/keys/app-key
            - name: ES_CA
              value: /etc/fluent/keys/app-ca
            - name: OPS_HOST
              value: elasticsearch
            - name: OPS_PORT
              value: '9200'
            - name: OPS_CLIENT_CERT
              value: /etc/fluent/keys/infra-cert
            - name: OPS_CLIENT_KEY
              value: /etc/fluent/keys/infra-key
            - name: OPS_CA
              value: /etc/fluent/keys/infra-ca
  2. To send both application logs and operations logs to the same external Elasticsearch instance, set ES_HOST and OPS_HOST to the same destination, and ensure that ES_PORT and OPS_PORT also have the same value.
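    For example, a sketch with a hypothetical external host elasticsearch.example.com receiving both application and operations logs on port 9200:

      spec:
        template:
          spec:
            containers:
            - name: fluentd
              env:
              - name: ES_HOST
                value: elasticsearch.example.com  # hypothetical external host
              - name: ES_PORT
                value: '9200'
              - name: OPS_HOST
                value: elasticsearch.example.com  # same destination for operations logs
              - name: OPS_PORT
                value: '9200'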

  3. Configure your externally-hosted Elasticsearch instance for TLS. Only externally-hosted Elasticsearch instances that use Mutual TLS are allowed.

If you are not using the provided Kibana and Elasticsearch images, you will not have the same multi-tenant capabilities and your data will not be restricted by user access to a particular project.

Configuring Fluentd to send logs to an external syslog server

Use the fluent-plugin-remote-syslog plug-in on the host to send logs to an external syslog server.

Prerequisite

Set cluster logging to the Unmanaged state.

Procedure
  1. Set environment variables in the fluentd DaemonSet in the openshift-logging project:

    spec:
      template:
        spec:
          containers:
            - name: fluentd
              image: 'registry.redhat.io/openshift4/ose-logging-fluentd:v4.1'
              env:
                - name: REMOTE_SYSLOG_HOST (1)
                  value: host1
                - name: REMOTE_SYSLOG_HOST_BACKUP
                  value: host2
                - name: REMOTE_SYSLOG_PORT_BACKUP
                  value: '5555'
    1 The desired remote syslog host. Required for each host.

    This builds two destinations. The syslog server on host1 receives messages on the default port of 514, while host2 receives the same messages on port 5555.

  2. Alternatively, you can configure your own custom Fluentd DaemonSet in the openshift-logging project by using the following environment variables. A configuration sketch follows the table and note below.

    Fluentd Environment Variables

    USE_REMOTE_SYSLOG
    Defaults to false. Set to true to enable use of the fluent-plugin-remote-syslog gem.

    REMOTE_SYSLOG_HOST
    (Required) Host name or IP address of the remote syslog server.

    REMOTE_SYSLOG_PORT
    Port number to connect on. Defaults to 514.

    REMOTE_SYSLOG_SEVERITY
    Set the syslog severity level. Defaults to debug.

    REMOTE_SYSLOG_FACILITY
    Set the syslog facility. Defaults to local0.

    REMOTE_SYSLOG_USE_RECORD
    Defaults to false. Set to true to use the severity and facility fields of the record to set the severity and facility on the syslog message.

    REMOTE_SYSLOG_REMOVE_TAG_PREFIX
    Removes the specified prefix from the tag. Defaults to '' (empty).

    REMOTE_SYSLOG_TAG_KEY
    If specified, the value of this field in the record is used as the tag on the syslog message.

    REMOTE_SYSLOG_PAYLOAD_KEY
    If specified, the value of this field in the record is used as the payload of the syslog message.

    REMOTE_SYSLOG_TYPE
    Set the transport layer protocol type. Defaults to syslog_buffered, which uses TCP. To switch to UDP, set this to syslog.

    This implementation is insecure and should be used only in environments where you can guarantee that the connection cannot be snooped.
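    For example, a sketch of a custom configuration that takes the severity and facility from each record and sends over UDP; the host value is a placeholder:

      spec:
        template:
          spec:
            containers:
            - name: fluentd
              env:
              - name: USE_REMOTE_SYSLOG
                value: 'true'
              - name: REMOTE_SYSLOG_HOST
                value: syslog.example.com  # placeholder syslog host
              - name: REMOTE_SYSLOG_USE_RECORD
                value: 'true'
              - name: REMOTE_SYSLOG_TYPE
                value: syslog  # UDP instead of the default TCP (syslog_buffered)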

Configuring Fluentd to send logs to an external log aggregator

You can configure Fluentd to send a copy of its logs to an external log aggregator, instead of to the default Elasticsearch store, by using the out_forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them.

The forward plug-in is supported by Fluentd only. The out_forward plug-in implements the client side (sender) and the in_forward plug-in implements the server side (receiver).

To configure OpenShift Container Platform to send logs using out_forward, create a ConfigMap called secure-forward in the openshift-logging namespace that points to a receiver. On the receiver, configure the in_forward plug-in to receive the logs from OpenShift Container Platform. For more information on using the in_forward plug-in, see the Fluentd documentation.
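For illustration, a minimal sketch of a receiver-side in_forward configuration, assuming a Fluentd v1.x receiver; the port, shared key, and certificate paths are placeholders that must agree with the secure-forward.conf settings shown below:

<source>
  @type forward
  port 24284
  <transport tls>
    cert_path /path/to/server_cert.pem         # receiver certificate
    private_key_path /path/to/server_key.pem   # receiver private key
  </transport>
  <security>
    self_hostname server.fqdn.example.com
    shared_key <shared_key_between_forwarder_and_forwardee>
  </security>
</source>

# Write received records to stdout to verify delivery
<match **>
  @type stdout
</match>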

Default secure-forward.conf section
# <store>
#   @type forward
#   <security>
#     self_hostname ${hostname} # ${hostname} is a placeholder.
#     shared_key <shared_key_between_forwarder_and_forwardee>
#   </security>
#   transport tls
#   tls_verify_hostname true           # Set false to ignore server cert hostname.

#   tls_cert_path /path/for/certificate/ca_cert.pem
#   <buffer>
#     @type file
#     path '/var/lib/fluentd/forward'
#     queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }"
#     chunk_limit_size "#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }"
#     flush_interval "#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}"
#     flush_at_shutdown "#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}"
#     flush_thread_count "#{ENV['FLUSH_THREAD_COUNT'] || 2}"
#     retry_max_interval "#{ENV['FORWARD_RETRY_WAIT'] || '300'}"
#     retry_forever true
#     # the systemd journald 0.0.8 input plugin will just throw away records if the buffer
#     # queue limit is hit - 'block' will halt further reads and keep retrying to flush the
#     # buffer to the remote - default is 'exception' because in_tail handles that case
#     overflow_action "#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'exception'}"
#   </buffer>
#   <server>
#     host server.fqdn.example.com  # or IP
#     port 24284
#   </server>
#   <server>
#     host 203.0.113.8 # ip address to connect
#     name server.fqdn.example.com # The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
#   </server>
# </store>
Procedure

To send a copy of Fluentd logs to an external log aggregator:

  1. Edit the secure-forward.conf section of the Fluentd configuration map:

    $ oc edit configmap/fluentd -n openshift-logging
  2. Enter the name, host, and port for your external Fluentd server:

    #   <server>
    #     host server.fqdn.example.com  # or IP
    #     port 24284
    #   </server>
    #   <server>
    #     host 203.0.113.8 # ip address to connect
    #     name server.fqdn.example.com # The name of the server. Used for logging and certificate verification in TLS transport (when host is address).
    #   </server>

    For example:

      <server>
        name externalserver1 (1)
        host 192.168.1.1 (2)
        port 24224 (3)
      </server>
      <server> (4)
        name externalserver2
        host 192.168.1.2
        port 24224
      </server>
    </store>
    1 Optionally, enter a name for this external aggregator.
    2 Specify the host name or IP of the external aggregator.
    3 Specify the port of the external aggregator.
    4 Optionally, add an additional external aggregator.
  3. Add the path to your CA certificate and private key to the secure-forward.conf section:

    #   <security>
    #     self_hostname ${hostname} # ${hostname} is a placeholder. (1)
    #     shared_key <shared_key_between_forwarder_and_forwardee> (2)
    #   </security>
    
    #   tls_cert_path /path/for/certificate/ca_cert.pem (3)
    1 Specify the default value of the auto-generated certificate common name (CN).
    2 Specify a shared key for authentication.
    3 Specify the path to your CA certificate.

    For example:

       <security>
         self_hostname client.fqdn.local
         shared_key cluster_logging_key
       </security>
    
       tls_cert_path /etc/fluent/keys/ca.crt

    To use mTLS, see the Fluentd documentation for information about client certificate and key parameters and other settings.
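    As a sketch, assuming the Fluentd v1 out_forward parameter names, the client certificate and key would be added alongside the settings above; the paths are placeholders:

       <security>
         self_hostname client.fqdn.local
         shared_key cluster_logging_key
       </security>

       tls_cert_path /etc/fluent/keys/ca.crt
       # Client certificate and key for mutual TLS (placeholder paths)
       tls_client_cert_path /path/to/client_cert.pem
       tls_client_private_key_path /path/to/client_key.pem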

  4. Add the certificates to be used in secure-forward.conf to the existing secret that is mounted on the Fluentd pods. The your_ca_cert and your_private_key values must match what is specified in secure-forward.conf in configmap/fluentd:

    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/your_ca_cert','value':'$(base64 -w0 /path/to/your_ca_cert.pem)'}]"
    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/your_private_key','value':'$(base64 -w0 /path/to/your_private_key.pem)'}]"

    Replace your_private_key with a generic name. This name refers to the JSON path used by the patch command, not to a path on your host system.

    For example:

    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/ca.crt','value':'$(base64 -w0 /etc/fluent/keys/ca.crt)'}]"
    $ oc patch secrets/fluentd --type=json \
      --patch "[{'op':'add','path':'/data/ext-agg','value':'$(base64 -w0 /etc/fluent/keys/ext-agg.pem)'}]"
  5. Configure the secure-forward.conf file on the external aggregator so that it accepts messages securely from Fluentd.

For more information about setting up the in_forward and out_forward plug-ins, see the Fluentd documentation.