In OpenShift Container Platform 4.11, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects.
Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform.
Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object.
In OpenShift Container Platform 4.11, you must remove any custom Prometheus instances before enabling monitoring for user-defined projects.
You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects.
You have access to the cluster as a user with the cluster-admin cluster role.
You have installed the OpenShift CLI (oc).
You have created the cluster-monitoring-config ConfigMap object.
You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects, as shown in the sketch that follows this list.
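For reference, a minimal user-workload-monitoring-config ConfigMap object can look like the following sketch, saved in a hypothetical file named user-workload-monitoring-config.yaml. The empty config.yaml value is a placeholder that you can later populate with component settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
Apply the file to create the object:
$ oc apply -f user-workload-monitoring-config.yaml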
Every time you save configuration changes to the cluster-monitoring-config ConfigMap object, the pods in the openshift-monitoring project are redeployed. It might take a while for these pods to redeploy.
Edit the cluster-monitoring-config ConfigMap object:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Add enableUserWorkload: true under data/config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true (1)
(1) When set to true, the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster.
Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically.
When changes are saved to the cluster-monitoring-config ConfigMap object, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart.
Check that the prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start:
$ oc -n openshift-user-workload-monitoring get pod
NAME READY STATUS RESTARTS AGE
prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h
prometheus-user-workload-0 4/4 Running 1 3h
prometheus-user-workload-1 4/4 Running 1 3h
thanos-ruler-user-workload-0 3/3 Running 0 3h
thanos-ruler-user-workload-1 3/3 Running 0 3h
Cluster administrators can monitor all core OpenShift Container Platform and user-defined projects.
Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the following monitoring roles:
The monitoring-rules-view cluster role provides read access to PrometheusRule custom resources for a project.
The monitoring-rules-edit cluster role grants a user permission to create, modify, and delete PrometheusRule custom resources for a project.
The monitoring-edit cluster role grants the same privileges as the monitoring-rules-edit cluster role. Additionally, it enables a user to create new scrape targets for services or pods. With this role, you can also create, modify, and delete ServiceMonitor and PodMonitor resources, as shown in the sketch that follows this list.
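As an illustration of what the monitoring-edit role allows, the following ServiceMonitor sketch adds a scrape target for a service in a user-defined project. The ns1 namespace, the prometheus-example-app label, and the web port name are assumptions; substitute the values that match your own service:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app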
You can also grant users permission to configure the components that are responsible for monitoring user-defined projects:
The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project enables you to edit the user-workload-monitoring-config ConfigMap object. With this role, you can edit the ConfigMap object to configure Prometheus, Prometheus Operator, and Thanos Ruler for user-defined workload monitoring, as shown in the sketch that follows.
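For example, a user with this role could set a data retention period for the Prometheus instance that monitors user-defined projects. This is a minimal sketch; the 24h value is an assumption, not a recommendation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 24h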
You can also grant users permission to configure alert routing for user-defined projects:
The alert-routing-edit cluster role grants a user permission to create, update, and delete AlertmanagerConfig custom resources for a project, as shown in the sketch that follows.
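As a hedged illustration, an AlertmanagerConfig custom resource for a project might look like the following. The ns1 namespace, the receiver name, and the webhook URL are assumptions, and the apiVersion depends on the AlertmanagerConfig CRD version available in your cluster (v1alpha1 is shown here):
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  namespace: ns1
spec:
  route:
    receiver: default
    groupBy:
    - job
  receivers:
  - name: default
    webhookConfigs:
    - url: https://example.org/post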
This section provides details on how to assign these roles by using the OpenShift Container Platform web console or the CLI.
You can grant users permissions to monitor their own projects by using the OpenShift Container Platform web console.
You have access to the cluster as a user with the cluster-admin cluster role.
The user account that you are assigning the role to already exists.
In the Administrator perspective within the OpenShift Container Platform web console, navigate to User Management → Role Bindings → Create Binding.
In the Binding Type section, select the "Namespace Role Binding" type.
In the Name field, enter a name for the role binding.
In the Namespace field, select the user-defined project where you want to grant the access.
The monitoring role is bound to the project that you select in the Namespace field. The permissions that you grant to a user by using this procedure apply only to the selected project.
Select monitoring-rules-view, monitoring-rules-edit, or monitoring-edit in the Role Name list.
In the Subject section, select User.
In the Subject Name field, enter the name of the user.
Select Create to apply the role binding.
You can grant users permissions to monitor their own projects by using the OpenShift CLI (oc).
You have access to the cluster as a user with the cluster-admin cluster role.
The user account that you are assigning the role to already exists.
You have installed the OpenShift CLI (oc).
Assign a monitoring role to a user for a project:
$ oc policy add-role-to-user <role> <user> -n <namespace> (1)
(1) Substitute <role> with monitoring-rules-view, monitoring-rules-edit, or monitoring-edit.
Whichever role you choose, you must bind it against a specific project as a cluster administrator.
As an example, substitute <role> with monitoring-edit, <user> with johnsmith, and <namespace> with ns1. This assigns the user johnsmith permission to set up metrics collection and to create alerting rules in the ns1 namespace.
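With those substitutions, the command from the previous step resolves to the following:
$ oc policy add-role-to-user monitoring-edit johnsmith -n ns1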
You can grant users permission to configure monitoring for user-defined projects.
You have access to the cluster as a user with the cluster-admin cluster role.
The user account that you are assigning the role to already exists.
You have installed the OpenShift CLI (oc).
Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project:
$ oc -n openshift-user-workload-monitoring adm policy add-role-to-user \
user-workload-monitoring-config-edit <user> \
--role-namespace openshift-user-workload-monitoring
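Optionally, you can confirm that the role binding exists. This is a hedged verification sketch that filters the role bindings in the project for the role name:
$ oc -n openshift-user-workload-monitoring get rolebindings -o wide | grep user-workload-monitoring-config-edit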
Learn how to query Prometheus statistics from the command line when monitoring your own services. You can access monitoring data from outside the cluster with the thanos-querier route.
You have deployed your own service, following the Enabling monitoring for user-defined projects procedure.
Extract a token to connect to Prometheus:
$ SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print $1 }'`
$ TOKEN=`echo $(oc get secret $SECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d`
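Alternatively, if your own user account has permission to query metrics through the thanos-querier route, you can use your session token instead of extracting the service account token:
$ TOKEN=$(oc whoami -t)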
Extract your route host:
$ THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'`
Query the metrics of your own services in the command line. For example:
$ NAMESPACE=ns1
$ curl -X GET -kG "https://$THANOS_QUERIER_HOST/api/v1/query?" --data-urlencode "query=up{namespace='$NAMESPACE'}" -H "Authorization: Bearer $TOKEN"
The output shows the availability status of your application pods: a value of "1" indicates that the target is up and being scraped.
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","endpoint":"web","instance":"10.129.0.46:8080","job":"prometheus-example-app","namespace":"ns1","pod":"prometheus-example-app-68d47c4fb6-jztp2","service":"prometheus-example-app"},"value":[1591881154.748,"1"]}]}}
Individual user-defined projects can be excluded from user workload monitoring. To exclude a project, add the openshift.io/user-monitoring label to the project’s namespace with a value of false.
Add the label to the project namespace:
$ oc label namespace my-project 'openshift.io/user-monitoring=false'
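To confirm that the label was applied, you can list the namespace labels. The my-project namespace is the example project name used above:
$ oc get namespace my-project --show-labels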
To re-enable monitoring, remove the label from the namespace:
$ oc label namespace my-project 'openshift.io/user-monitoring-'
If there were any active monitoring targets for the project, it might take a few minutes for Prometheus to stop scraping them after you add the label.
After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object.
Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects.
Edit the cluster-monitoring-config ConfigMap object:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Set enableUserWorkload to false under data/config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: false
Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically.
Check that the prometheus-operator, prometheus-user-workload, and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while:
$ oc -n openshift-user-workload-monitoring get pod
No resources found in openshift-user-workload-monitoring project.