Processors

Processors process the data between the time it is received and the time it is exported. Processors are optional, and by default none are enabled. Processors must be enabled for every data source, and not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters: each processor in a pipeline receives the output of the previous one.
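For example, the following is a minimal sketch, with illustrative parameter values, of a pipeline that places the Memory Limiter Processor before the Batch Processor so that memory protection is applied before batching:

# ...
  config: |
    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
      batch:
        timeout: 5s
    service:
      pipelines:
        traces:
          processors: [memory_limiter, batch]
# ...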

Batch Processor

The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information.

Example of the OpenTelemetry Collector custom resource when using the Batch Processor
# ...
  config: |
    processors:
      batch:
        timeout: 5s
        send_batch_max_size: 10000
    service:
      pipelines:
        traces:
          processors: [batch]
        metrics:
          processors: [batch]
# ...
Table 1. Parameters used by the Batch Processor

timeout (default: 200ms)
    Sends the batch after a specific time duration, irrespective of the batch size.

send_batch_size (default: 8192)
    Sends the batch of telemetry data after the specified number of spans or metrics.

send_batch_max_size (default: 0)
    The maximum allowable size of the batch. Must be equal to or greater than send_batch_size. A value of 0 means that there is no upper limit.

metadata_keys (default: [])
    When set, a separate batcher instance is created for each distinct combination of values found in client.Metadata for the listed keys.

metadata_cardinality_limit (default: 1000)
    When metadata_keys is populated, limits the number of distinct metadata key-value combinations that are processed over the lifetime of the process.
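For example, the following sketch batches telemetry separately per tenant. It assumes that clients send a hypothetical X-Tenant header and that the receiver is configured with include_metadata: true so that the header is available in the client metadata:

# ...
  config: |
    processors:
      batch:
        metadata_keys: [X-Tenant]
        metadata_cardinality_limit: 100
# ...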

Memory Limiter Processor

The Memory Limiter Processor periodically checks the Collector’s memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. The preceding component, which is typically a receiver, is expected to retry sending the same data and might apply backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run.

Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor
# ...
  config: |
    processors:
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
        spike_limit_mib: 800
    service:
      pipelines:
        traces:
          processors: [memory_limiter]
        metrics:
          processors: [memory_limiter]
# ...
Table 2. Parameters used by the Memory Limiter Processor

check_interval (default: 0s)
    Time between memory usage measurements. The optimal value is 1s. For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib.

limit_mib (default: 0)
    The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value.

spike_limit_mib (default: 20% of limit_mib)
    Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib. To calculate the soft limit, subtract the spike_limit_mib from the limit_mib. For example, with limit_mib: 4000 and spike_limit_mib: 800, the soft limit is 3200 MiB.

limit_percentage (default: 0)
    Same as limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting.

spike_limit_percentage (default: 0)
    Same as spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting.
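When the total available memory is known only at run time, for example in a container with a memory limit, you can express the limits as percentages instead of fixed MiB values. The following is a minimal sketch with illustrative values:

# ...
  config: |
    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 80
        spike_limit_percentage: 25
# ...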

Resource Detection Processor

The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry’s resource semantic conventions. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors, such as the Docker metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector.

The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenShift Container Platform permissions required for the Resource Detection Processor
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
# ...
OpenTelemetry Collector using the Resource Detection Processor
# ...
  config: |
    processors:
      resourcedetection:
        detectors: [openshift]
        override: true
    service:
      pipelines:
        traces:
          processors: [resourcedetection]
        metrics:
          processors: [resourcedetection]
# ...
OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector
# ...
  config: |
    processors:
      resourcedetection/env:
        detectors: [env] (1)
        timeout: 2s
        override: false
# ...
1 Specifies which detector to use. In this example, the environment detector is specified.
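The detectors field also accepts multiple detectors in a single processor instance. The following is a minimal sketch that combines the openshift detector with the environment variable detector:

# ...
  config: |
    processors:
      resourcedetection:
        detectors: [openshift, env]
        timeout: 2s
        override: false
# ...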

Attributes Processor

The Attributes Processor can modify attributes of a span, log, or metric. You can configure this processor to filter and match input data and include or exclude such data for specific actions.

The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported:

Insert

Inserts a new attribute into the input data when the specified key does not already exist.

Update

Updates an attribute in the input data if the key already exists.

Upsert

Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists.

Delete

Removes an attribute from the input data.

Hash

Hashes an existing attribute value as SHA1.

Extract

Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor’s to_attributes setting with the existing attribute as the source.

Convert

Converts an existing attribute to a specified type.

OpenTelemetry Collector using the Attributes Processor
# ...
  config: |
    processors:
      attributes/example:
        actions:
          - key: db.table
            action: delete
          - key: redacted_span
            value: true
            action: upsert
          - key: copy_key
            from_attribute: key_original
            action: update
          - key: account_id
            value: 2245
            action: insert
          - key: account_password
            action: delete
          - key: account_email
            action: hash
          - key: http.status_code
            action: convert
            converted_type: int
# ...
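The include and exclude filtering mentioned earlier restricts the actions to matching input data. The following is a minimal sketch, with the service name auth-service chosen purely for illustration, that applies an action only to spans from that service:

# ...
  config: |
    processors:
      attributes/staging_only:
        include:
          match_type: strict
          services: ["auth-service"]
        actions:
          - key: environment
            value: staging
            action: insert
# ...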

Resource Processor

The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs.

The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector using the Resource Processor
# ...
  config: |
    processors:
      resource:
        attributes:
        - key: cloud.availability_zone
          value: "zone-1"
          action: upsert
        - key: k8s.cluster.name
          from_attribute: k8s-cluster
          action: insert
        - key: redundant-attribute
          action: delete
# ...

The attributes block defines the actions that are applied to the resource attributes, such as deleting, inserting, or upserting an attribute.

Span Processor

The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces.

Span renaming requires specifying attributes for the new name by using the from_attributes configuration.

The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector using the Span Processor for renaming a span
# ...
  config: |
    processors:
      span:
        name:
          from_attributes: [<key1>, <key2>, ...] (1)
          separator: <value> (2)
# ...
1 Defines the keys to form the new span name.
2 An optional separator.

You can use this processor to extract attributes from the span name.

OpenTelemetry Collector using the Span Processor for extracting attributes from a span name
# ...
  config: |
    processors:
      span/to_attributes:
        name:
          to_attributes:
            rules:
              - ^\/api\/v1\/document\/(?P<documentId>.*)\/update$ (1)
# ...
1 This rule defines how the extraction is executed. You can define more rules. In this example, if the regular expression matches the span name, a documentId attribute is created: if the input span name is /api/v1/document/12345678/update, the output span name is /api/v1/document/{documentId}/update, and a new "documentId"="12345678" attribute is added to the span.

You can also use this processor to modify the span status.

OpenTelemetry Collector using the Span Processor for status change
# ...
  config: |
    processors:
      span/set_status:
        status:
          code: Error
          description: "<error_description>"
# ...

Kubernetes Attributes Processor

The Kubernetes Attributes Processor enables automatic configuration of resource attributes for spans, metrics, and logs by using Kubernetes metadata. This processor supports traces, metrics, and logs. It automatically identifies Kubernetes resources, extracts their metadata, and incorporates the extracted metadata as resource attributes into relevant spans, metrics, and logs. It uses the Kubernetes API to discover all pods operating within a cluster and maintains records of their IP addresses, pod UIDs, and other relevant metadata.

The Kubernetes Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: ['']
    resources: ['pods', 'namespaces']
    verbs: ['get', 'watch', 'list']
# ...
OpenTelemetry Collector using the Kubernetes Attributes Processor
# ...
  config: |
    processors:
      k8sattributes:
        filter:
          node_from_env_var: KUBE_NODE_NAME
# ...
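You can also list explicitly which metadata fields the processor extracts. The following is a minimal sketch that adds the pod, namespace, and node names as resource attributes:

# ...
  config: |
    processors:
      k8sattributes:
        filter:
          node_from_env_var: KUBE_NODE_NAME
        extract:
          metadata:
            - k8s.pod.name
            - k8s.namespace.name
            - k8s.node.name
# ...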

Filter Processor

The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs.

The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Filter Processor
# ...
config: |
  processors:
    filter/ottl:
      error_mode: ignore (1)
      traces:
        span:
          - 'attributes["container.name"] == "app_container_1"' (2)
          - 'resource.attributes["host.name"] == "localhost"' (3)
# ...
1 Defines the error mode. When set to ignore, errors returned by conditions are ignored. When set to propagate, errors are returned up the pipeline, and an error causes the payload to be dropped from the Collector.
2 Filters the spans that have the container.name == app_container_1 attribute.
3 Filters the spans that have the host.name == localhost resource attribute.
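Because this processor also supports metrics and logs, equivalent conditions can be written for those signals. The following is a minimal sketch that drops log records below the warning severity:

# ...
config: |
  processors:
    filter/logs:
      error_mode: ignore
      logs:
        log_record:
          - 'severity_number < SEVERITY_NUMBER_WARN'
# ...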

Routing Processor

The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request, or read a resource attribute, and then direct the telemetry data to relevant exporters according to the read value.

The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenTelemetry Collector custom resource with an enabled Routing Processor
# ...
config: |
  processors:
    routing:
      from_attribute: X-Tenant (1)
      default_exporters: (2)
      - jaeger
      table: (3)
      - value: acme
        exporters: [jaeger/acme]
  exporters:
    jaeger:
      endpoint: localhost:14250
    jaeger/acme:
      endpoint: localhost:24250
# ...
1 The HTTP header name for the lookup value when performing the route.
2 The default exporters, used when the attribute value is not present in the routing table.
3 The table that defines which values are to be routed to which exporters.

Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes.
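For example, the following sketch sets attribute_source: context so that the processor looks up the X-Tenant value in the request context, which includes the HTTP headers:

# ...
config: |
  processors:
    routing:
      attribute_source: context
      from_attribute: X-Tenant
      default_exporters:
      - jaeger
      table:
      - value: acme
        exporters: [jaeger/acme]
# ...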

Cumulative-to-Delta Processor

The Cumulative-to-Delta Processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics.

You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching.

This processor does not convert non-monotonic sums and exponential histograms.

The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor
# ...
config: |
  processors:
    cumulativetodelta:
      include: (1)
        match_type: strict (2)
        metrics: (3)
        - <metric_1_name>
        - <metric_2_name>
      exclude: (4)
        match_type: regexp
        metrics:
        - "<regular_expression_for_metric_names>"
# ...
1 Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics.
2 Defines whether the values provided in the metrics field are matched as exact names (strict) or as regular expressions (regexp).
3 Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence.
4 Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics.

Group-by-Attributes Processor

The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes.

The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example:

# ...
processors:
  groupbyattrs:
    keys: (1)
      - <key1> (2)
      - <key2>
# ...
1 Specifies attribute keys to group by.
2 If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, it remains associated with its current Resource. Multiple instances of the same Resource are consolidated.

Transform Processor

The Transform Processor enables modification of telemetry data according to specified rules written in the OpenTelemetry Transformation Language (OTTL). For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate whether a function is to be executed.

All statements are written in the OTTL. You can configure multiple context statements for the different signals: traces, metrics, and logs. The value of the context type specifies which OTTL context the processor must use when interpreting the associated statements.

The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Configuration summary
# ...
config: |
  processors:
    transform:
      error_mode: ignore (1)
      <trace|metric|log>_statements: (2)
        - context: <string> (3)
          conditions:  (4)
            - <string>
            - <string>
          statements: (5)
            - <string>
            - <string>
            - <string>
        - context: <string>
          statements:
            - <string>
            - <string>
            - <string>
# ...
1 Optional: See the following table "Values for the optional error_mode field".
2 Indicates a signal to be transformed.
3 See the following table "Values for the context field".
4 Optional: Conditions for performing a transformation.
5 Statements, written in the OTTL, that transform the telemetry data.
Configuration example
# ...
config: |
  transform:
    error_mode: ignore
    trace_statements: (1)
      - context: resource
        statements:
          - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) (2)
          - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") (3)
          - limit(attributes, 100, [])
          - truncate_all(attributes, 4096)
      - context: span (4)
        statements:
          - set(status.code, 1) where attributes["http.path"] == "/health"
          - set(name, attributes["http.route"])
          - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}")
          - limit(attributes, 100, [])
          - truncate_all(attributes, 4096)
# ...
1 Transforms a trace signal.
2 Keeps keys on the resources.
3 Replaces attributes and replaces string characters in password fields with asterisks.
4 Performs transformations at the span level.
Table 3. Values for the context field

Signal statement    | Valid contexts
--------------------|------------------------------------
trace_statements    | resource, scope, span, spanevent
metric_statements   | resource, scope, metric, datapoint
log_statements      | resource, scope, log

Table 4. Values for the optional error_mode field

ignore
    Ignores and logs errors returned by statements, and then continues to the next statement.

silent
    Ignores errors returned by statements without logging them, and then continues to the next statement.

propagate
    Returns errors up the pipeline and drops the payload. Implicit default.
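Logs and metrics are transformed in the same way by using log_statements or metric_statements. The following is a minimal sketch of log_statements that deletes a sensitive attribute, with the attribute key chosen purely for illustration:

# ...
config: |
  processors:
    transform/logs:
      error_mode: ignore
      log_statements:
        - context: log
          statements:
            - delete_key(attributes, "http.request.header.authorization")
# ...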