Configuring alerts and notifications

Configuring external Alertmanager instances

The OKD monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus.

You can add external Alertmanager instances to route alerts for core OKD projects.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/prometheusK8s:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          additionalAlertmanagerConfigs:
          - <alertmanager_specification> (1)
    1 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).

    The following sample config map configures an additional Alertmanager for Prometheus by using a bearer token with client TLS authentication. The bearerToken and tlsConfig settings reference Secret objects in the openshift-monitoring project (a sketch of creating them follows this procedure):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          additionalAlertmanagerConfigs:
          - scheme: https
            pathPrefix: /
            timeout: "30s"
            apiVersion: v1
            bearerToken:
              name: alertmanager-bearer-token
              key: token
            tlsConfig:
              key:
                name: alertmanager-tls
                key: tls.key
              cert:
                name: alertmanager-tls
                key: tls.crt
              ca:
                name: alertmanager-tls
                key: tls.ca
            staticConfigs:
            - external-alertmanager1-remote.com
            - external-alertmanager1-remote2.com
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
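
The bearerToken and tlsConfig settings in the preceding example reference Secret objects named alertmanager-bearer-token and alertmanager-tls in the openshift-monitoring project. The following commands are a minimal sketch of creating those secrets, assuming the token and certificate files exist locally under the hypothetical names shown:

    $ oc -n openshift-monitoring create secret generic alertmanager-bearer-token \
        --from-file=token=./token

    $ oc -n openshift-monitoring create secret generic alertmanager-tls \
        --from-file=tls.key=./tls.key \
        --from-file=tls.crt=./tls.crt \
        --from-file=tls.ca=./tls.ca

The secret key names (token, tls.key, tls.crt, and tls.ca) must match the key values that the config map references.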

Disabling the local Alertmanager

A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OKD monitoring stack.

If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config config map.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add enabled: false for the alertmanagerMain component under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          enabled: false
  3. Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change.
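
To verify that the local Alertmanager is no longer running, you can check for remaining Alertmanager pods in the openshift-monitoring project. This is a sketch that assumes the default app.kubernetes.io/name=alertmanager pod label used by the monitoring stack:

    $ oc -n openshift-monitoring get pods -l app.kubernetes.io/name=alertmanager

The command returns no resources after the alertmanager-main pods are removed.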

Configuring secrets for Alertmanager

The OKD monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.

For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.

Adding a secret to the Alertmanager configuration

You can add secrets to the Alertmanager configuration by editing the cluster-monitoring-config config map in the openshift-monitoring project.

After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Receiver settings in the Alertmanager configuration can then reference the mounted files by path, as shown in the example after the following procedure.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config config map.

  • You have created the secret to be configured in Alertmanager in the openshift-monitoring project.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Add a secrets: section under data/config.yaml/alertmanagerMain with the following configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          secrets: (1)
          - <secret_name_1> (2)
          - <secret_name_2>
    1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
    2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.

    The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        alertmanagerMain:
          secrets:
          - test-secret-basic-auth
          - test-secret-api-token
  3. Save the file to apply the changes. The new configuration is applied automatically.
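
Because the secrets are mounted at /etc/alertmanager/secrets/<secret_name>, receivers defined in the alertmanager-main secret can reference the mounted files by path. The following is a minimal sketch, assuming that test-secret-basic-auth contains a password key; the webhook URL and username are hypothetical:

    receivers:
    - name: basic-auth-receiver
      webhook_configs:
      - url: "https://example.org/notifications"
        http_config:
          basic_auth:
            username: admin
            # Path of the password key mounted from the test-secret-basic-auth secret
            password_file: /etc/alertmanager/secrets/test-secret-basic-auth/password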

Attaching additional labels to your time series and alerts

You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have created the cluster-monitoring-config ConfigMap object.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Edit the cluster-monitoring-config config map in the openshift-monitoring project:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Define labels you want to add for every metric under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            <key>: <value> (1)
    1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value.
    • Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.

    • Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards.

    For example, to add metadata about the region and environment to all time series and alerts, use the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          externalLabels:
            region: eu
            environment: prod
  3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Configuring alert notifications

In OKD 4, you can view firing alerts in the Alerting UI. You can configure Alertmanager to send notifications about default platform alerts by configuring alert receivers.

Alertmanager does not send notifications by default. It is strongly recommended to configure Alertmanager to send notifications by configuring alert receivers through the web console or through the alertmanager-main secret.

Configuring alert routing for default platform alerts

You can customize where and how Alertmanager sends notifications about default platform alerts by editing the default configuration in the alertmanager-main secret in the openshift-monitoring namespace.

All features of a supported version of upstream Alertmanager are also supported in an OKD Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure
  1. Open the Alertmanager YAML configuration file:

    • To open the Alertmanager configuration from the CLI:

      1. Print the currently active Alertmanager configuration from the alertmanager-main secret into the alertmanager.yaml file:

        $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
      2. Open the alertmanager.yaml file.

    • To open the Alertmanager configuration from the OKD web console:

      1. Go to the Administration → Cluster Settings → Configuration → Alertmanager → YAML page of the web console.

  2. Edit the Alertmanager configuration by updating parameters in the YAML:

    global:
      resolve_timeout: 5m
      http_config:
        proxy_from_environment: true (1)
    route:
      group_wait: 30s (2)
      group_interval: 5m (3)
      repeat_interval: 12h (4)
      receiver: default
      routes:
      - matchers:
        - "alertname=Watchdog"
        repeat_interval: 2m
        receiver: watchdog
      - matchers:
        - "service=<your_service>" (5)
        routes:
        - matchers:
          - <your_matching_rules> (6)
          receiver: <receiver> (7)
    receivers:
    - name: default
    - name: watchdog
    - name: <receiver>
      <receiver_configuration> (8)
    1 If you configured an HTTP cluster-wide proxy, set the proxy_from_environment parameter to true to enable proxying for all alert receivers.
    2 Specify how long Alertmanager waits while collecting initial alerts for a group of alerts before sending a notification.
    3 Specify how much time must elapse before Alertmanager sends a notification about new alerts added to a group of alerts for which an initial notification was already sent.
    4 Specify the minimum amount of time that must pass before an alert notification is repeated. If you want a notification to repeat at each group interval, set the repeat_interval value to less than the group_interval value. The repeated notification can still be delayed, for example, when certain Alertmanager pods are restarted or rescheduled.
    5 Specify the name of the service that fires the alerts.
    6 Specify labels to match your alerts.
    7 Specify the name of the receiver to use for the alerts.
    8 Specify the receiver configuration.
    • Use the matchers key name to indicate the matchers that an alert has to fulfill to match the node. Do not use the match or match_re key names, which are both deprecated and planned for removal in a future release.

    • If you define inhibition rules, use the following key names:

      • target_matchers: to indicate the target matchers

      • source_matchers: to indicate the source matchers

      Do not use the target_match, target_match_re, source_match, or source_match_re key names, which are deprecated and planned for removal in a future release.

    Example of Alertmanager configuration with PagerDuty as an alert receiver
    global:
      resolve_timeout: 5m
      http_config:
        proxy_from_environment: true
    route:
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: default
      routes:
      - matchers:
        - "alertname=Watchdog"
        repeat_interval: 2m
        receiver: watchdog
      - matchers: (1)
        - "service=example-app"
        routes:
        - matchers:
          - "severity=critical"
          receiver: team-frontend-page
    receivers:
    - name: default
    - name: watchdog
    - name: team-frontend-page
      pagerduty_configs:
      - service_key: "<your_key>"
        http_config: (2)
          proxy_from_environment: true
          authorization:
            credentials: xxxxxxxxxx
    1 Alerts of critical severity that are fired by the example-app service are sent through the team-frontend-page receiver. Typically, these types of alerts would be paged to an individual or a critical response team.
    2 Custom HTTP configuration for a specific receiver. If you configure the custom HTTP configuration for a specific alert receiver, that receiver does not inherit the global HTTP config settings.
  3. Apply the new configuration in the file:

    • To apply the changes from the CLI, run the following command:

      $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-
    • To apply the changes from the OKD web console, click Save.

Configuring alert routing with the OKD web console

You can configure alert routing through the OKD web console to ensure that you learn about important issues with your cluster.

The OKD web console provides fewer settings to configure alert routing than the alertmanager-main secret. To configure alert routing with access to more configuration settings, see "Configuring alert routing for default platform alerts".

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

Procedure
  1. In the Administrator perspective, go to Administration → Cluster Settings → Configuration → Alertmanager.

    Alternatively, you can go to the same page through the notification drawer. Select the bell icon at the top right of the OKD web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.

  2. Click Create Receiver in the Receivers section of the page.

  3. In the Create Receiver form, add a Receiver name and choose a Receiver type from the list.

  4. Edit the receiver configuration (a rough YAML sketch of a resulting receiver configuration follows this procedure):

    • For PagerDuty receivers:

      1. Choose an integration type and add a PagerDuty integration key.

      2. Add the URL of your PagerDuty installation.

      3. Click Show advanced configuration if you want to edit the client and incident details or the severity specification.

    • For webhook receivers:

      1. Add the endpoint to send HTTP POST requests to.

      2. Click Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.

    • For email receivers:

      1. Add the email address to send notifications to.

      2. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.

      3. Select whether TLS is required.

      4. Click Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or to edit the configuration for the body of email notifications.

    • For Slack receivers:

      1. Add the URL of the Slack webhook.

      2. Add the Slack channel or user name to send notifications to.

      3. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.

  5. By default, firing alerts with labels that match all of the selectors are sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver, perform the following steps:

    1. Add routing label names and values in the Routing labels section of the form.

    2. Click Add label to add further routing labels.

  6. Click Create to create the receiver.
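
The receivers and routing labels that you create in this form are stored in the alertmanager-main secret. As a rough sketch, a Slack receiver that matches alerts carrying a team=frontend routing label might correspond to configuration similar to the following; the webhook URL, channel name, and label values are hypothetical:

    route:
      routes:
      - matchers:
        - "team=frontend"
        receiver: slack-frontend
    receivers:
    - name: slack-frontend
      slack_configs:
      - api_url: "https://hooks.slack.com/services/<hypothetical_webhook_path>"
        channel: "#frontend-alerts"
        send_resolved: false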

Configuring different alert receivers for default platform alerts and user-defined alerts

You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:

  • All default platform alerts are sent to a receiver owned by the team in charge of these alerts.

  • All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.

You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:

  • Use the openshift_io_alert_source="platform" matcher to match default platform alerts.

  • Use the openshift_io_alert_source!="platform" or openshift_io_alert_source="" matcher to match user-defined alerts.
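
For example, a routing configuration in the alertmanager-main secret could split the two alert sources as follows. This is a minimal sketch; the receiver names platform-team and app-teams and the <receiver_configuration> placeholders are hypothetical:

    route:
      receiver: default
      routes:
      - matchers:
        - 'openshift_io_alert_source="platform"'
        receiver: platform-team
      - matchers:
        - 'openshift_io_alert_source!="platform"'
        receiver: app-teams
    receivers:
    - name: default
    - name: platform-team
      <receiver_configuration>
    - name: app-teams
      <receiver_configuration>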

This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.