Configuring metrics for the monitoring stack - Red Hat build of OpenTelemetry | Observability | OpenShift Container Platform 4.14

As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks:

  • Create a Prometheus ServiceMonitor CR for scraping the Collector’s pipeline metrics and the enabled Prometheus exporters.

  • Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.

Configuration for sending metrics to the monitoring stack

One of the following two custom resources (CRs) configures the sending of metrics to the monitoring stack:

  • OpenTelemetry Collector CR

  • Prometheus PodMonitor CR

A configured OpenTelemetry Collector CR can create a Prometheus ServiceMonitor CR for scraping the Collector’s pipeline metrics and the enabled Prometheus exporters.

Example of the OpenTelemetry Collector CR with the Prometheus exporter
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true (1)
  config: |
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true # by default resource attributes are dropped
    service:
      telemetry:
        metrics:
          address: ":8888"
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]
1 Configures the Operator to create the Prometheus ServiceMonitor CR to scrape the Collector’s internal metrics endpoint and Prometheus exporter metric endpoints. The metrics will be stored in the OpenShift monitoring stack.
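The ServiceMonitor CR that the Operator generates is managed automatically and does not need to be created by hand. Conceptually, it resembles the following sketch; the name and selector shown here are illustrative, not the exact objects the Operator creates:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel-collector        # illustrative name; the Operator chooses the actual name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: <cr_name>-collector
  endpoints:
  - port: metrics             # the Collector's internal telemetry port (8888)
  - port: promexporter        # the Prometheus exporter port (8889), if enabled
```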

Alternatively, a manually created Prometheus PodMonitor CR can provide finer control, for example, removing duplicate labels added during Prometheus scraping.

Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: <cr_name>-collector (1)
  podMetricsEndpoints:
  - port: metrics (2)
  - port: promexporter (3)
    relabelings:
    - action: labeldrop
      regex: pod
    - action: labeldrop
      regex: container
    - action: labeldrop
      regex: endpoint
    metricRelabelings:
    - action: labeldrop
      regex: instance
    - action: labeldrop
      regex: job
1 The name of the OpenTelemetry Collector CR.
2 The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics.
3 The name of the Prometheus exporter port for the OpenTelemetry Collector.
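With the rules above, the relabelings drop the pod, container, and endpoint labels before the scrape, and the metricRelabelings drop the instance and job labels from the ingested samples, so only the labels carried by the metric itself remain. For a hypothetical series, the effect looks like this (the metric and label values are illustrative):

```yaml
# Hypothetical sample before relabeling:
#   example_metric{pod="otel-collector-0", container="otc-container",
#                  endpoint="promexporter", instance="10.128.0.5:8889",
#                  job="otel-collector", app="frontend"}
# After the labeldrop rules above:
#   example_metric{app="frontend"}
```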

Configuration for receiving metrics from the monitoring stack

A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack.

Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view (1)
subjects:
  - kind: ServiceAccount
    name: otel-collector
    namespace: observability
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: cabundle
  namespace: observability
  annotations:
    service.beta.openshift.io/inject-cabundle: "true" (2)
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  volumeMounts:
    - name: cabundle-volume
      mountPath: /etc/pki/ca-trust/source/service-ca
      readOnly: true
  volumes:
    - name: cabundle-volume
      configMap:
        name: cabundle
  mode: deployment
  config: |
    receivers:
      prometheus: (3)
        config:
          scrape_configs:
            - job_name: 'federate'
              scrape_interval: 15s
              scheme: https
              tls_config:
                ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
              bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
              honor_labels: false
              params:
                'match[]':
                  - '{__name__="<metric_name>"}' (4)
              metrics_path: '/federate'
              static_configs:
                - targets:
                  - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"
    exporters:
      debug: (5)
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: []
          exporters: [debug]
1 Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data.
2 Injects the OpenShift service CA for configuring the TLS in the Prometheus receiver.
3 Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack.
4 Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details on the federate endpoint and its limitations.
5 Configures the debug exporter to print the metrics to the standard output.
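The debug exporter only prints the received metrics to standard output. In a production pipeline, you would typically replace it with an exporter that forwards the federated metrics to a backend. The following fragment is an illustrative sketch; the endpoint placeholder is an assumption and must point at your own OTLP-compatible backend:

```yaml
    exporters:
      otlp:
        endpoint: <observability_backend>:4317  # illustrative placeholder
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [otlp]
```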