You can use the OpenTelemetry Collector to forward your telemetry data.
To forward traces to a TempoStack instance, deploy and configure the OpenTelemetry Collector. This example deploys the Collector in the deployment mode with the specified receivers, processors, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources.
The Red Hat build of OpenTelemetry Operator is installed.
The Tempo Operator is installed.
A TempoStack instance is deployed on the cluster.
Create a service account for the OpenTelemetry Collector.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: otel-collector-example
Create a cluster role for the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] (1) (2)
  verbs: ["get", "watch", "list"]
(1) The k8sattributes processor requires get, watch, and list permissions for the pods and namespaces resources.
(2) The resourcedetection processor requires the same permissions for the infrastructures and infrastructures/status resources.
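The RBAC above exists so that the k8sattributes processor can look up pod and namespace objects through the Kubernetes API and copy their names onto telemetry resources. A rough sketch of that enrichment (illustrative Python, not Collector code; the attribute names follow the standard k8s.* semantic conventions):

```python
def enrich(resource_attrs: dict, pod: dict) -> dict:
    """Illustrative sketch: copy pod and namespace names from pod metadata
    (fetched via the get/watch/list permissions above) onto the resource."""
    enriched = dict(resource_attrs)
    enriched.setdefault("k8s.pod.name", pod["name"])
    enriched.setdefault("k8s.namespace.name", pod["namespace"])
    return enriched

# Hypothetical pod metadata, for illustration only:
attrs = enrich({"service.name": "frontend"},
               {"name": "frontend-6d9c5bf9b4-x2x7k", "namespace": "demo"})
# attrs now carries k8s.pod.name and k8s.namespace.name alongside service.name
```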
Bind the cluster role to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: otel-collector-example
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
Create the YAML file to define the OpenTelemetryCollector custom resource (CR).
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc: {}
          thrift_binary: {}
          thrift_compact: {}
          thrift_http: {}
      opencensus: {}
      otlp:
        protocols:
          grpc: {}
          http: {}
      zipkin: {}
    processors:
      batch: {}
      k8sattributes: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-simplest-distributor:4317" (1)
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger, opencensus, otlp, zipkin] (2)
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]
(1) The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created.
(2) The Collector is configured with receivers for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
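The memory_limiter percentages in the CR translate into absolute limits at startup: limit_percentage sets the hard limit as a share of total memory, spike_limit_percentage reserves headroom, and the soft limit is the difference. A quick sketch of that arithmetic (illustrative only; the 1024 MiB total is an assumed example container memory limit):

```python
def memory_limits(total_mib: int, limit_percentage: int, spike_limit_percentage: int):
    """Sketch of how memory_limiter derives absolute limits from percentages:
    hard limit = limit_percentage of total memory, spike allowance =
    spike_limit_percentage of total, soft limit = hard minus spike."""
    hard_mib = total_mib * limit_percentage // 100
    spike_mib = total_mib * spike_limit_percentage // 100
    soft_mib = hard_mib - spike_mib
    return hard_mib, spike_mib, soft_mib

# With an assumed 1024 MiB container and the values above (50 / 30):
print(memory_limits(1024, 50, 30))  # (512, 307, 205)
```

The Collector starts refusing data near the soft limit and forces garbage collection at the hard limit, which is why the processor is placed first in the pipeline.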
You can deploy the OpenTelemetry Collector with the relevant Collector components to forward logs to a LokiStack instance.
This use of the Loki Exporter is a temporary Technology Preview feature that is planned to be replaced with the publication of an improved solution in which the Loki Exporter is replaced with the OTLP HTTP Exporter.
The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The Red Hat build of OpenTelemetry Operator is installed.
The Loki Operator is installed.
A supported LokiStack instance is deployed on the cluster.
Create a service account for the OpenTelemetry Collector.
Example ServiceAccount object
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: openshift-logging
Create a cluster role that grants the Collector’s service account the permissions to push logs to the LokiStack application tenant.
Example ClusterRole object
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-logs-writer
rules:
- apiGroups: ["loki.grafana.com"]
  resourceNames: ["logs"]
  resources: ["application"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods", "namespaces", "nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
Bind the cluster role to the service account.
Example ClusterRoleBinding object
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-logs-writer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-logs-writer
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: openshift-logging
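The CR in the next step enables the bearertokenauth extension, which reads the pod's service account token and attaches it as a Bearer header on every request to the LokiStack gateway. A minimal sketch of that behavior (illustrative helper, not Collector code; the temporary file below stands in for the in-pod token path):

```python
import tempfile
from pathlib import Path

def bearer_headers(token_path: str) -> dict:
    """Sketch of the bearertokenauth extension: read the service account
    token from a file and build the Authorization header sent to Loki."""
    token = Path(token_path).read_text().strip()
    return {"Authorization": f"Bearer {token}"}

# Stand-in for /var/run/secrets/kubernetes.io/serviceaccount/token:
with tempfile.NamedTemporaryFile("w", suffix=".token", delete=False) as f:
    f.write("example-token\n")
print(bearer_headers(f.name))  # {'Authorization': 'Bearer example-token'}
```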
Create an OpenTelemetryCollector custom resource (CR) object.
Example OpenTelemetryCollector CR object
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: openshift-logging
spec:
  serviceAccount: otel-collector-deployment
  config:
    extensions:
      bearertokenauth:
        filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    processors:
      k8sattributes:
        auth_type: "serviceAccount"
        passthrough: false
        extract:
          metadata:
            - k8s.pod.name
            - k8s.container.name
            - k8s.namespace.name
          labels:
            - tag_name: app.label.component
              key: app.kubernetes.io/component
              from: pod
        pod_association:
          - sources:
              - from: resource_attribute
                name: k8s.pod.name
              - from: resource_attribute
                name: k8s.container.name
              - from: resource_attribute
                name: k8s.namespace.name
          - sources:
              - from: connection
      resource:
        attributes: (1)
          - key: loki.format (2)
            action: insert
            value: json
          - key: kubernetes_namespace_name
            from_attribute: k8s.namespace.name
            action: upsert
          - key: kubernetes_pod_name
            from_attribute: k8s.pod.name
            action: upsert
          - key: kubernetes_container_name
            from_attribute: k8s.container.name
            action: upsert
          - key: log_type
            value: application
            action: upsert
          - key: loki.resource.labels (3)
            value: log_type, kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name
            action: insert
      transform:
        log_statements:
          - context: log
            statements:
              - set(attributes["level"], ConvertCase(severity_text, "lower"))
    exporters:
      loki:
        endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/loki/api/v1/push (4)
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
        auth:
          authenticator: bearertokenauth
      debug:
        verbosity: detailed
    service:
      extensions: [bearertokenauth] (5)
      pipelines:
        logs:
          receivers: [otlp]
          processors: [k8sattributes, transform, resource]
          exporters: [loki] (6)
        logs/test:
          receivers: [otlp]
          processors: []
          exporters: [debug]
(1) Provides the following resource attributes to be used by the web console: kubernetes_namespace_name, kubernetes_pod_name, kubernetes_container_name, and log_type. If you specify them as values for the loki.resource.labels attribute, the Loki Exporter processes them as labels.
(2) Configures the format of Loki logs. Supported values are json, logfmt, and raw.
(3) Configures which resource attributes are processed as Loki labels.
(4) Points the Loki Exporter to the gateway of the logging-loki LokiStack instance and uses the application tenant.
(5) Enables the BearerTokenAuth extension, which the Loki Exporter requires.
(6) Enables the Loki Exporter to export logs from the Collector.
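Callouts (1) and (3) describe how the Loki Exporter promotes only the attributes named in loki.resource.labels to Loki stream labels, and the transform statement lowercases severity_text into a level attribute. Both behaviors can be sketched as follows (illustrative Python, not Collector code):

```python
def loki_labels(resource_attrs: dict) -> dict:
    """Sketch of callouts (1) and (3): only attributes listed in the
    comma-separated loki.resource.labels value become Loki labels."""
    wanted = {k.strip() for k in resource_attrs.get("loki.resource.labels", "").split(",")}
    return {k: v for k, v in resource_attrs.items() if k in wanted}

def set_level(severity_text: str) -> str:
    """Mirrors the transform statement:
    set(attributes["level"], ConvertCase(severity_text, "lower"))."""
    return severity_text.lower()

attrs = {
    "loki.resource.labels": "log_type, kubernetes_namespace_name",
    "log_type": "application",
    "kubernetes_namespace_name": "demo",
    "kubernetes_pod_name": "app-1",  # not listed above, so not promoted
}
print(loki_labels(attrs))  # {'log_type': 'application', 'kubernetes_namespace_name': 'demo'}
print(set_level("ERROR"))  # error
```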