Sending traces and metrics to the Collector

You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance.

Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.

Sending traces and metrics to the OpenTelemetry Collector with sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection.

The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.

Prerequisites
  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.

  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.

    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure
  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-sidecar
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-sidecar
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  4. Deploy the OpenTelemetry Collector as a sidecar.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      serviceAccount: otel-collector-sidecar
      mode: sidecar
      config: |
        receivers:
          otlp:
            protocols:
              grpc: {}
              http: {}
        processors:
          batch: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
            timeout: 2s
        exporters:
          otlp:
            endpoint: "tempo-<example>-gateway:8090" (1)
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [otlp]
              processors: [memory_limiter, resourcedetection, batch]
              exporters: [otlp]
    1 This points to the Gateway service of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  5. Create your deployment using the otel-collector-sidecar service account.

  6. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance.
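The following example combines the last two steps. It is a minimal sketch of a Deployment that runs in the observability namespace, uses the otel-collector-sidecar service account, and carries the injection annotation. The workload name, labels, image, and port (my-app, ghcr.io/example/my-app:latest, 8080) are placeholders for your own instrumented application.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                                    # placeholder name for your workload
      namespace: observability
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            sidecar.opentelemetry.io/inject: "true"   # triggers sidecar injection by the Operator
        spec:
          serviceAccountName: otel-collector-sidecar  # service account created earlier in this procedure
          containers:
          - name: my-app
            image: ghcr.io/example/my-app:latest      # placeholder image for your instrumented application
            ports:
            - containerPort: 8080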

Sending traces and metrics to the OpenTelemetry Collector without sidecar injection

You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables.

Prerequisites
  • The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed.

  • You have access to the cluster through the web console or the OpenShift CLI (oc):

    • You are logged in to the web console as a cluster administrator with the cluster-admin role.

    • An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

    • For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Procedure
  1. Create a project for an OpenTelemetry Collector instance.

    apiVersion: project.openshift.io/v1
    kind: Project
    metadata:
      name: observability
  2. Create a service account.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: otel-collector-deployment
      namespace: observability
  3. Grant the permissions to the service account for the k8sattributes and resourcedetection processors.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-collector
    rules:
    - apiGroups: ["", "config.openshift.io"]
      resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-collector
    subjects:
    - kind: ServiceAccount
      name: otel-collector-deployment
      namespace: observability
    roleRef:
      kind: ClusterRole
      name: otel-collector
      apiGroup: rbac.authorization.k8s.io
  4. Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource.

    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    metadata:
      name: otel
      namespace: observability
    spec:
      mode: deployment
      serviceAccount: otel-collector-deployment
      config: |
        receivers:
          jaeger:
            protocols:
              grpc: {}
              thrift_binary: {}
              thrift_compact: {}
              thrift_http: {}
          opencensus: {}
          otlp:
            protocols:
              grpc: {}
              http: {}
          zipkin: {}
        processors:
          batch: {}
          k8sattributes: {}
          memory_limiter:
            check_interval: 1s
            limit_percentage: 50
            spike_limit_percentage: 30
          resourcedetection:
            detectors: [openshift]
        exporters:
          otlp:
            endpoint: "tempo-<example>-distributor:4317" (1)
            tls:
              insecure: true
        service:
          pipelines:
            traces:
              receivers: [jaeger, opencensus, otlp, zipkin]
              processors: [memory_limiter, k8sattributes, resourcedetection, batch]
              exporters: [otlp]
    1 This points to the Distributor service of the TempoStack instance named <example>, which is deployed by using the Tempo Operator.
  5. Set the environment variables in the container that runs your instrumented application.

    Name: OTEL_SERVICE_NAME
    Description: Sets the value of the service.name resource attribute.
    Default value: ""

    Name: OTEL_EXPORTER_OTLP_ENDPOINT
    Description: Base endpoint URL for any signal type, with an optionally specified port number.
    Default value: https://localhost:4317

    Name: OTEL_EXPORTER_OTLP_CERTIFICATE
    Description: Path to the certificate file for the TLS credentials of the gRPC client.
    Default value: No default value.

    Name: OTEL_TRACES_SAMPLER
    Description: Sampler to be used for traces.
    Default value: parentbased_always_on

    Name: OTEL_EXPORTER_OTLP_PROTOCOL
    Description: Transport protocol for the OTLP exporter.
    Default value: grpc

    Name: OTEL_EXPORTER_OTLP_TIMEOUT
    Description: Maximum time interval for the OTLP exporter to wait for each batch export.
    Default value: 10s

    Name: OTEL_EXPORTER_OTLP_INSECURE
    Description: Disables client transport security for gRPC requests. An HTTPS scheme overrides it.
    Default value: False
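
For reference, the following is a minimal sketch of how these variables might be set on an instrumented workload. The Deployment name, labels, and image are placeholders, and the endpoint assumes that the Service created by the Operator for the otel Collector above is reachable inside the cluster as otel-collector.observability.svc.cluster.local on the OTLP gRPC port 4317; adjust the values to match your environment.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-instrumented-app                       # placeholder name for your workload
      namespace: observability
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-instrumented-app
      template:
        metadata:
          labels:
            app: my-instrumented-app
        spec:
          containers:
          - name: my-instrumented-app
            image: ghcr.io/example/my-instrumented-app:latest   # placeholder image
            env:
            - name: OTEL_SERVICE_NAME
              value: my-instrumented-app                        # sets the service.name resource attribute
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://otel-collector.observability.svc.cluster.local:4317   # assumed Collector Service address
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: grpc
            - name: OTEL_EXPORTER_OTLP_INSECURE
              value: "true"                                     # plain-text gRPC to the in-cluster Collector
            - name: OTEL_TRACES_SAMPLER
              value: parentbased_always_on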