Monitoring your own services | Monitoring | OpenShift Container Platform 4.3

You can use OpenShift Monitoring for your own services in addition to monitoring the cluster. This way, you do not need an additional monitoring solution and monitoring stays centralized. Additionally, you can extend access to the metrics of your services beyond cluster administrators, so that developers and arbitrary users can access them.

Custom Prometheus instances and the Prometheus Operator installed through Operator Lifecycle Manager (OLM) can cause issues with user-defined workload monitoring if it is enabled. Custom Prometheus instances are not supported in OpenShift Container Platform.

Monitoring your own services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Enabling monitoring of your own services

You can enable monitoring of your own services by setting the techPreviewUserWorkload/enabled flag in the cluster monitoring ConfigMap.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have installed the OpenShift CLI (oc).

  • You have created the cluster-monitoring-config ConfigMap object, as shown in the example below.
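
    If this ConfigMap does not exist yet, a minimal manifest like the following can create it. The file name cluster-monitoring-config.yaml is an example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |

    $ oc apply -f cluster-monitoring-config.yaml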

Procedure
  1. Start editing the cluster-monitoring-config configmap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  2. Set enabled: true under the techPreviewUserWorkload setting in data/config.yaml (a non-interactive alternative using oc patch is shown after this procedure):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        techPreviewUserWorkload:
          enabled: true
  3. Save the file to apply the changes. Monitoring your own services is enabled automatically.

  4. Optional: You can check that the prometheus-user-workload pods were created:

    $ oc -n openshift-user-workload-monitoring get pod
    NAME                                   READY   STATUS    RESTARTS   AGE
    prometheus-operator-85bbb7b64d-7jwjd   1/1     Running   0          3m24s
    prometheus-user-workload-0             5/5     Running   1          3m13s
    prometheus-user-workload-1             5/5     Running   1          3m13s
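
As a non-interactive alternative to the oc edit step above, a single oc patch command can apply the same setting. This is only a sketch: the merge patch replaces the entire config.yaml value, so use it only if the ConfigMap contains no other settings.

    $ oc -n openshift-monitoring patch configmap cluster-monitoring-config \
        --type merge --patch '{"data":{"config.yaml":"techPreviewUserWorkload:\n  enabled: true\n"}}'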

Deploying a sample service

To test monitoring your own services, you can deploy a sample service.

Procedure
  1. Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml.

  2. Fill the file with the configuration for deploying the service:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns1
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: prometheus-example-app
      name: prometheus-example-app
      namespace: ns1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: prometheus-example-app
      template:
        metadata:
          labels:
            app: prometheus-example-app
        spec:
          containers:
          - image: quay.io/brancz/prometheus-example-app:v0.2.0
            imagePullPolicy: IfNotPresent
            name: prometheus-example-app
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: prometheus-example-app
      name: prometheus-example-app
      namespace: ns1
    spec:
      ports:
      - port: 8080
        protocol: TCP
        targetPort: 8080
        name: web
      selector:
        app: prometheus-example-app
      type: ClusterIP

    This configuration deploys a service named prometheus-example-app in the ns1 project. This service exposes the custom version metric.

  3. Apply the configuration file to the cluster:

    $ oc apply -f prometheus-example-app.yaml

    It will take some time to deploy the service.

  4. You can check that the service is running:

    $ oc -n ns1 get pod
    NAME                                      READY     STATUS    RESTARTS   AGE
    prometheus-example-app-7857545cb7-sbgwq   1/1       Running   0          81m
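  5. Optional: To confirm that the pod serves metrics, you can port-forward to the service and query the /metrics endpoint directly. This is a minimal sketch; run the curl command in a second terminal. The output should include the version metric:

    $ oc -n ns1 port-forward svc/prometheus-example-app 8080:8080
    $ curl http://localhost:8080/metrics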

Creating a role for setting up metrics collection

This procedure shows how to create a role that allows a user to set up metrics collection for a service as described in "Setting up metrics collection".

Procedure
  1. Create a YAML file for the new role. In this example, it is called custom-metrics-role.yaml.

  2. Fill the file with the configuration for the monitor-crd-edit role:

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: monitor-crd-edit
    rules:
    - apiGroups: ["monitoring.coreos.com"]
      resources: ["prometheusrules", "servicemonitors", "podmonitors"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

    This role enables a user to set up metrics collection for services.

  3. Apply the configuration file to the cluster:

    $ oc apply -f custom-metrics-role.yaml

    Now the role is created.
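  4. Optional: You can verify that the ClusterRole exists and inspect its rules:

    $ oc get clusterrole monitor-crd-edit -o yaml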

Granting the role to a user

This procedure shows how to assign the monitor-crd-edit role to a user.

Prerequisites
  • You need to have a user created.

  • You need to have the monitor-crd-edit role described in "Creating a role for setting up metrics collection" created.

Procedure
  1. In the Web console, navigate to User Management → Role Bindings → Create Binding.

  2. In Binding Type, select the "Namespace Role Binding" type.

  3. In Name, enter a name for the binding.

  4. In Namespace, select the namespace where you want to grant the access.

  5. In Role Name, enter monitor-crd-edit.

  6. In Subject, select User.

  7. In Subject Name, enter the name of the user, for example johnsmith.

  8. Confirm the role binding. The user is now assigned the monitor-crd-edit role, which allows them to set up metrics collection for a service in the namespace.
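
Alternatively, the same binding can be created from the CLI. The following is a sketch assuming the ns1 namespace and the user johnsmith; the optional permission check uses impersonation, which requires cluster-admin or equivalent rights:

    $ oc -n ns1 policy add-role-to-user monitor-crd-edit johnsmith
    $ oc -n ns1 auth can-i create servicemonitors.monitoring.coreos.com --as=johnsmith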

Setting up metrics collection

To use the metrics exposed by your service, you need to configure OpenShift Monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor, a custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor, a CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a Pod.
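
For reference, a PodMonitor for the same application might look like the following minimal sketch. The object name is hypothetical, and it assumes that the pod template declares a container port named web, which the sample Deployment above does not:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: prometheus-example-pod-monitor
      namespace: ns1
    spec:
      podMetricsEndpoints:
      - interval: 30s
        port: web # name of a container port declared on the pod
      selector:
        matchLabels:
          app: prometheus-example-app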

This procedure shows how to create a ServiceMonitor for the service.

Prerequisites
  • Log in as a cluster administrator or a user with the monitor-crd-edit role.

Procedure
  1. Create a YAML file for the ServiceMonitor configuration. In this example, the file is called example-app-service-monitor.yaml.

  2. Fill the file with the configuration for creating the ServiceMonitor:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: prometheus-example-monitor
      name: prometheus-example-monitor
      namespace: ns1
    spec:
      endpoints:
      - interval: 30s
        port: web
        scheme: http
      selector:
        matchLabels:
          app: prometheus-example-app

    This configuration makes OpenShift Monitoring scrape the metrics exposed by the sample service deployed in "Deploying a sample service", which includes the single version metric.

  3. Apply the configuration file to the cluster:

    $ oc apply -f example-app-service-monitor.yaml

    It will take some time to deploy the ServiceMonitor.

  4. You can check that the ServiceMonitor resource was created:

    $ oc -n ns1 get servicemonitor
    NAME                         AGE
    prometheus-example-monitor   81m
Additional resources

See the Prometheus Operator API documentation for more information on ServiceMonitors and PodMonitors.

Creating alerting rules

You can create alerting rules, which fire alerts based on the values of chosen metrics of the service.

In the current version of the Technology Preview, only administrators can access alerting rules using the Prometheus UI and the Web Console.

Procedure
  1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml.

  2. Fill the file with the configuration for the alerting rules:

    The expression can only reference metrics exposed by your own services. Currently, it is not possible to correlate them with existing cluster metrics.

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: example-alert
      namespace: ns1
    spec:
      groups:
      - name: example
        rules:
        - alert: VersionAlert
          expr: version{job="prometheus-example-app"} == 0

    This configuration creates an alerting rule named example-alert, which fires an alert when the version metric exposed by the sample service becomes 0.

  3. Apply the configuration file to the cluster:

    $ oc apply -f example-app-alerting-rule.yaml

    It will take some time to create the alerting rules.
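  4. Optional: You can check that the PrometheusRule object was created; the list should include example-alert:

    $ oc -n ns1 get prometheusrule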

Giving view access to a user

By default, only cluster administrator users and developers have access to metrics from your services. This procedure shows how to grant an arbitrary user access to the metrics of a particular project.

Prerequisites
  • You need to have a user created.

  • You need to log in as a cluster administrator.

Procedure
  • Run this command to give <user> access to all metrics of your services in <namespace>:

    $ oc policy add-role-to-user view <user> -n <namespace>

    For example, to give view access to the ns1 namespace to user bobwilliams, run:

    $ oc policy add-role-to-user view bobwilliams -n ns1
  • Alternatively, in the Web console, switch to the Developer Perspective, and click Advanced → Project Access. From there, you can select the correct namespace and assign the view role to a user.
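
You can verify the resulting binding from the CLI; the following is a sketch for the ns1 example:

    $ oc -n ns1 get rolebinding -o wide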

Accessing the metrics of your service

Once you have enabled monitoring your own services, deployed a service, and set up metrics collection for it, you can access the metrics of the service as a cluster administrator, as a developer, or as a user with view permissions for the project.

The Grafana instance shipped within OpenShift Container Platform Monitoring is read-only and displays only infrastructure-related dashboards.

Prerequisites
  • You need to deploy the service that you want to monitor.

  • You need to enable monitoring of your own services.

  • You need to have metrics scraping set up for the service.

  • You need to log in as a cluster administrator, a developer, or as a user with view permissions for the project.

Procedure
  1. Access the Prometheus web interface:

    • To access the metrics as a cluster administrator, go to the OpenShift Container Platform web console, switch to the Administrator Perspective, and click Monitoring → Metrics.

      Cluster administrators, when using the Administrator Perspective, have access to all cluster metrics and to custom service metrics from all projects.

      Only cluster administrators have access to the Alertmanager and Prometheus UIs.

    • To access the metrics as a developer or a user with permissions, go to the OpenShift Container Platform web console, switch to the Developer Perspective, then click Advanced → Metrics. Select the project you want to see the metrics for.

      Developers can only use the Developer Perspective. They can only query metrics from a single project.

  2. Use the PromQL interface to run queries for your services.
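
    For example, to display the version metric exposed by the sample service from "Deploying a sample service", you could run the following query:

    version{job="prometheus-example-app"}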
