Configuring the monitoring stack

Prior to OpenShift Container Platform 4, the Prometheus Cluster Monitoring stack was configured through the Ansible inventory file. For that purpose, the stack exposed a subset of its available configuration options as Ansible variables. You configured the stack before you installed OpenShift Container Platform.

In OpenShift Container Platform 4, Ansible is no longer the primary technology used to install OpenShift Container Platform. The installation program provides only a small number of configuration options before installation. Configuring most OpenShift framework components, including the Prometheus Cluster Monitoring stack, happens after installation.

This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios.

Prerequisites
  • The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources.

Maintenance and support

The supported way of configuring OpenShift Container Platform Monitoring is by using the options described in this document. Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear because the cluster-monitoring-operator reconciles any differences. The Operator resets everything to the defined state by default and by design.

Explicitly unsupported cases include:

  • Creating additional ServiceMonitor objects in the openshift-* namespaces. This extends the targets the cluster monitoring Prometheus instance scrapes, which can cause collisions and load differences that cannot be accounted for. These factors might make the Prometheus setup unstable.

  • Creating unexpected configmap objects or PrometheusRule objects. This causes the cluster monitoring Prometheus instance to include additional alerting and recording rules.

  • Modifying resources of the stack. The Prometheus Monitoring Stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them.

  • Using resources of the stack for your purposes. The resources created by the Prometheus Cluster Monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility.

  • Stopping the Cluster Monitoring Operator from reconciling the monitoring stack.

  • Adding new alerting rules.

  • Modifying the monitoring stack Grafana instance.

Creating a cluster monitoring configmap

To configure the Prometheus Cluster Monitoring stack, you must create the cluster monitoring configmap.

Prerequisites
  • An installed oc CLI tool

  • Administrative privileges for the cluster

Procedure
  1. Check whether the cluster-monitoring-config configmap object exists:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If it does not exist, create it:

    $ oc -n openshift-monitoring create configmap cluster-monitoring-config
  3. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  4. Create the data section if it does not exist yet:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
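
If you prefer to manage the configmap declaratively, you can instead save the manifest shown in step 4 to a local file, apply it, and confirm that the object exists. This is a minimal sketch; cluster-monitoring-config.yaml is a hypothetical local file name containing that manifest:

    $ oc apply -f cluster-monitoring-config.yaml
    $ oc -n openshift-monitoring get configmap cluster-monitoring-config -o yaml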

Configuring the cluster monitoring stack

You can configure the Prometheus Cluster Monitoring stack using configmaps. Configmaps configure the Cluster Monitoring Operator, which in turn configures the components of the stack.

Prerequisites
  • Make sure you have the cluster-monitoring-config configmap object with the data/config.yaml section.

Procedure
  1. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Put your configuration under data/config.yaml as key-value pair <component_name>: <component_configuration>:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          <configuration_for_the_component>

    Substitute <component> and <configuration_for_the_component> accordingly.

    For example, create this configmap to configure a Persistent Volume Claim (PVC) for Prometheus:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          volumeClaimTemplate:
            spec:
              storageClassName: fast
              volumeMode: Filesystem
              resources:
                requests:
                  storage: 40Gi

    Here, prometheusK8s defines the Prometheus component and the following lines define its configuration.

  3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically.
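
To confirm that the Operator picked up the change, you can watch the pods in the openshift-monitoring namespace while they are restarted. This is only a quick verification sketch; which pods restart depends on the component you configured:

    $ oc -n openshift-monitoring get pods -w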

Configurable monitoring components

This table shows the monitoring components you can configure and the keys used to specify the components in the configmap:

Table 1. Configurable monitoring components

  Component                 Key
  Prometheus Operator       prometheusOperator
  Prometheus                prometheusK8s
  Alertmanager              alertmanagerMain
  kube-state-metrics        kubeStateMetrics
  openshift-state-metrics   openshiftStateMetrics
  Grafana                   grafana
  Telemeter Client          telemeterClient
  Prometheus Adapter        k8sPrometheusAdapter

Of these components, only Prometheus and Alertmanager have extensive configuration options. The other components usually provide only the nodeSelector field, which controls the nodes they are deployed on.

Moving monitoring components to different nodes

You can move any of the monitoring stack components to specific nodes.

Prerequisites
  • Make sure you have the cluster-monitoring-config configmap object with the data/config.yaml section.

Procedure
  1. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Specify the nodeSelector constraint for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          nodeSelector:
            <node_key>: <node_value>
            <node_key>: <node_value>
            <...>

    Substitute <component> accordingly and substitute <node_key>: <node_value> with the map of key-value pairs that specifies the destination node. Often, only a single key-value pair is used.

    The component can only run on a node that has each of the specified key-value pairs as labels. The node can have additional labels as well.

    For example, to move components to the node that is labeled foo: bar, use:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusOperator:
          nodeSelector:
            foo: bar
        prometheusK8s:
          nodeSelector:
            foo: bar
        alertmanagerMain:
          nodeSelector:
            foo: bar
        kubeStateMetrics:
          nodeSelector:
            foo: bar
        grafana:
          nodeSelector:
            foo: bar
        telemeterClient:
          nodeSelector:
            foo: bar
        k8sPrometheusAdapter:
          nodeSelector:
            foo: bar
        openshiftStateMetrics:
          nodeSelector:
            foo: bar
  3. Save the file to apply the changes. The components affected by the new configuration are moved to new nodes automatically.
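
Note that the nodeSelector constraint can only be satisfied if the destination node actually carries the referenced labels. As a sketch, assuming a node named <node_name> and the foo: bar label from the example above, you could label the node and then confirm the placement:

    $ oc label nodes <node_name> foo=bar
    $ oc -n openshift-monitoring get pods -o wide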

Assigning tolerations to monitoring components

You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes.

Prerequisites
  • Make sure you have the cluster-monitoring-config configmap object with the data/config.yaml section.

Procedure
  1. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Specify tolerations for the component:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          tolerations:
            <toleration_specification>

    Substitute <component> and <toleration_specification> accordingly.

    For example, an oc adm taint nodes node1 key1=value1:NoSchedule taint prevents the scheduler from placing pods on the node labeled foo: bar. To make the alertmanagerMain component ignore that taint and to place alertmanagerMain on the foo: bar node normally, use this toleration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          nodeSelector:
            foo: bar
          tolerations:
          - key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"
  3. Save the file to apply the changes. The new component placement configuration is applied automatically.
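
As a complementary sketch, the taint referenced in the example can be applied and inspected as follows; node1 is the hypothetical tainted node from the example above:

    $ oc adm taint nodes node1 key1=value1:NoSchedule
    $ oc describe node node1 | grep -i taint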

Configuring persistent storage

Running cluster monitoring with persistent storage means that your metrics are stored in a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded against data loss. For production environments, it is highly recommended to configure persistent storage. Because of the high I/O demands, it is advantageous to use local storage.

Prerequisites
  • Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see Prometheus database storage requirements.

  • Make sure you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus has two replicas and Alertmanager has three replicas, you need five PVs to support the entire monitoring stack. The PVs should be available from the Local Storage Operator. This does not apply if you enable dynamically provisioned storage. A quick way to check the available PVs is shown after this list.

  • Use the block type of storage.

  • Configure local persistent storage.
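
As mentioned in the list above, the PVs must already be available. A minimal check before applying the configuration, assuming the local-storage storage class used in the examples below, is:

    $ oc get pv
    $ oc get storageclass local-storage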

Configuring a local persistent volume claim

For Prometheus or Alertmanager to use a persistent volume (PV), you must first configure a persistent volume claim (PVC).

Prerequisites
  • Make sure you have the cluster-monitoring-config configmap object with the data/config.yaml section.

Procedure
  1. Edit the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Put your PVC configuration for the component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        <component>:
          volumeClaimTemplate:
            metadata:
              name: <PVC_name_prefix>
            spec:
              storageClassName: <storage_class>
              resources:
                requests:
                  storage: <amount_of_storage>

    See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate.

    For example, to configure a PVC that claims local persistent storage for Prometheus, use:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          volumeClaimTemplate:
            metadata:
              name: localpvc
            spec:
              storageClassName: local-storage
              resources:
                requests:
                  storage: 40Gi

    In the above example, the storage class created by the Local Storage Operator is called local-storage.

    To configure a PVC that claims local persistent storage for Alertmanager, use:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          volumeClaimTemplate:
            metadata:
              name: localpvc
            spec:
              storageClassName: local-storage
              resources:
                requests:
                  storage: 40Gi
  3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically and the new storage configuration is applied.
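
After the pods restart, you can check that the PVCs were created and bound. This is a simple verification sketch; the generated PVC names include the prefix you chose, such as localpvc in the examples above:

    $ oc -n openshift-monitoring get pvc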

Modifying retention time for Prometheus metrics data

By default, the Prometheus Cluster Monitoring stack configures the retention time for Prometheus data to be 15 days. You can modify the retention time to change how soon the data is deleted.

Prerequisites
  • Make sure you have the cluster-monitoring-config configmap object with the data/config.yaml section.

Procedure
  1. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Put your retention time configuration under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: <time_specification>

    Substitute <time_specification> with a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years).

    For example, to configure retention time to be 24 hours, use:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: 24h
  3. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically.
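
To confirm that the new retention setting reached Prometheus, one option is to inspect the stateful set managed by the Cluster Monitoring Operator. This is a sketch, assuming the prometheus-k8s stateful set name used by the stack:

    $ oc -n openshift-monitoring get statefulset prometheus-k8s -o yaml | grep retention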

Configuring Alertmanager

The Prometheus Alertmanager is a component that manages incoming alerts, including:

  • Alert silencing

  • Alert inhibition

  • Alert aggregation

  • Reliable deduplication of alerts

  • Grouping alerts

  • Sending grouped alerts as notifications through receivers such as email, PagerDuty, and HipChat

Alertmanager default configuration

The default configuration of the OpenShift Container Platform Monitoring Alertmanager cluster is as follows:

global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - match:
      alertname: Watchdog
    repeat_interval: 5m
    receiver: watchdog
receivers:
- name: default
- name: watchdog

OpenShift Container Platform monitoring ships with the Watchdog alert, which fires continuously. Alertmanager repeatedly sends notifications for the Watchdog alert to the notification provider, for example, to PagerDuty. The provider is usually configured to notify the administrator when it stops receiving the Watchdog alert. This mechanism helps ensure continuous operation of Prometheus as well as continuous communication between Alertmanager and the notification provider.
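
As an illustration of that pattern only, the watchdog receiver could be pointed at a dead man's switch service by attaching a webhook configuration to it. This is a hedged sketch, not part of the default configuration; the URL is a placeholder for your provider's endpoint:

    receivers:
    - name: default
    - name: watchdog
      webhook_configs:
      - url: "<dead_mans_switch_endpoint>"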

Applying custom Alertmanager configuration

You can overwrite the default Alertmanager configuration by editing the alertmanager-main secret inside the openshift-monitoring namespace.

Prerequisites
  • An installed jq tool for processing JSON data

Procedure
  1. Print the currently active Alertmanager configuration into the file alertmanager.yaml:

    $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' |base64 -d > alertmanager.yaml
  2. Change the configuration in the alertmanager.yaml file to your new configuration:

    global:
      resolve_timeout: 5m
    route:
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: default
      routes:
      - match:
          alertname: Watchdog
        repeat_interval: 5m
        receiver: watchdog
      - match:
          service: <your_service> (1)
        routes:
        - match:
            <your_matching_rules> (2)
          receiver: <receiver> (3)
    receivers:
    - name: default
    - name: watchdog
    - name: <receiver>
      <receiver_configuration>
    1 service specifies the service that fires the alerts.
    2 <your_matching_rules> specify the target alerts.
    3 receiver specifies the receiver to use for the alert.

    For example, this listing configures PagerDuty for notifications:

    global:
      resolve_timeout: 5m
    route:
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: default
      routes:
      - match:
          alertname: Watchdog
        repeat_interval: 5m
        receiver: watchdog
      - match:
          service: example-app
        routes:
        - match:
            severity: critical
          receiver: team-frontend-page
    receivers:
    - name: default
    - name: watchdog
    - name: team-frontend-page
      pagerduty_configs:
      - service_key: "your-key"

    With this configuration, alerts of critical severity fired by the example-app service are sent using the team-frontend-page receiver, which means that these alerts are paged to a chosen person.

  3. Apply the new configuration in the file:

    $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run -o=yaml |  oc -n openshift-monitoring replace secret --filename=-
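
    To verify that the replacement took effect, you can print the active configuration again using the same command as in step 1 and check that it matches your edited alertmanager.yaml:

    $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' |base64 -d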

Alerting rules

OpenShift Container Platform Cluster Monitoring by default ships with a set of pre-defined alerting rules.

Note that:

  • The default alerting rules are used specifically for the OpenShift Container Platform cluster and nothing else. For example, you get alerts for a persistent volume in the cluster, but you do not get them for a persistent volume in your custom namespace.

  • Currently you cannot add custom alerting rules.

  • Some alerting rules have identical names. This is intentional. They send alerts about the same event with different thresholds, different severity, or both.

  • With the inhibition rules, the lower severity alert is inhibited when the higher severity alert is firing.

Listing acting alerting rules

You can list the alerting rules that currently apply to the cluster.

Procedure
  1. Configure the necessary port forwarding:

    $ oc -n openshift-monitoring port-forward svc/prometheus-operated 9090
  2. Fetch the JSON object containing acting alerting rules and their properties:

    $ curl -s http://localhost:9090/api/v1/rules | jq '[.data.groups[].rules[] | select(.type=="alerting")]'
    [
      {
        "name": "ClusterOperatorDown",
        "query": "cluster_operator_up{job=\"cluster-version-operator\"} == 0",
        "duration": 600,
        "labels": {
          "severity": "critical"
        },
        "annotations": {
          "message": "Cluster operator {{ $labels.name }} has not been available for 10 mins. Operator may be down or disabled, cluster will not be kept up to date and upgrades will not be possible."
        },
        "alerts": [],
        "health": "ok",
        "type": "alerting"
      },
      {
        "name": "ClusterOperatorDegraded",
        ...
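
    While the same port forwarding is still active, you can also list the currently active alerts instead of the rule definitions. This is a sketch using the Prometheus /api/v1/alerts endpoint:

    $ curl -s http://localhost:9090/api/v1/alerts | jq '.data.alerts[] | {labels, state}'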