Review the following sections before installing metering into your cluster.
To get started installing metering, first install the Metering Operator from OperatorHub. Next, configure your instance of metering by creating a CustomResource, referred to here as your MeteringConfig. Installing the Metering Operator creates a default MeteringConfig that you can modify using the examples in the documentation. After creating your MeteringConfig, install the metering stack. Finally, verify your installation.
Metering requires the following components:
A StorageClass for dynamic volume provisioning. Metering supports a number of different storage solutions.
4GB of memory and 4 CPU cores of available cluster capacity, with at least one node that has 2 CPU cores and 2GB of memory available.
The minimum resources needed for the largest single Pod installed by metering are 2GB of memory and 2 CPU cores.
Memory and CPU consumption may often be lower, but will spike when running reports or when collecting data for larger clusters. You can check both prerequisites from the CLI, as shown below.
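For example, one way to confirm that a StorageClass is available and to review each node's allocatable capacity is with the following commands:
$ oc get storageclass
$ oc describe nodes | grep -A 5 Allocatable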
You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack.
You cannot create a Project starting with openshift- using the web console.
You can use the OpenShift Container Platform web console to install the Metering Operator.
Create a namespace object YAML file for the Metering Operator with the oc create -f <file-name>.yaml command. You must use the CLI to create the namespace. For example, metering-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering (1)
  annotations:
    openshift.io/node-selector: "" (2)
  labels:
    openshift.io/cluster-monitoring: "true"
(1) It is strongly recommended to deploy metering in the openshift-metering namespace.
(2) Include this annotation before configuring specific node selectors for the operand Pods.
In the OpenShift Container Platform web console, click Operators → OperatorHub. Filter for metering to find the Metering Operator.
Click the Metering card, review the package description, and then click Install.
Select an Update Channel, Installation Mode, and Approval Strategy.
Click Subscribe.
Verify that the Metering Operator is installed by switching to the Operators → Installed Operators page. The Metering Operator has a Status of Succeeded when the installation is complete.
It might take several minutes for the Metering Operator to appear.
Click Metering on the Installed Operators page to view the Operator Details. From the Details page, you can create different resources related to metering.
To complete the metering installation, create a MeteringConfig resource to configure metering and install the components of the metering stack.
You can use the OpenShift Container Platform CLI to install the Metering Operator.
Create a namespace object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, metering-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering (1)
  annotations:
    openshift.io/node-selector: "" (2)
  labels:
    openshift.io/cluster-monitoring: "true"
(1) It is strongly recommended to deploy metering in the openshift-metering namespace.
(2) Include this annotation before configuring specific node selectors for the operand Pods.
Create the namespace object:
$ oc create -f <file-name>.yaml
For example:
$ oc create -f openshift-metering.yaml
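Optionally, confirm that the namespace was created and carries the cluster-monitoring label:
$ oc get namespace openshift-metering --show-labels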
Create the OperatorGroup object YAML file. For example, metering-og:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-metering (1)
  namespace: openshift-metering (2)
spec:
  targetNamespaces:
  - openshift-metering
(1) The name is arbitrary.
(2) Specify the openshift-metering namespace.
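Create the OperatorGroup object. For example, assuming you saved the file as metering-og.yaml:
$ oc create -f metering-og.yaml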
Create a Subscription object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the redhat-operators CatalogSource. For example, metering-sub.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metering-ocp (1)
  namespace: openshift-metering (2)
spec:
  channel: "4.3" (3)
  source: "redhat-operators" (4)
  sourceNamespace: "openshift-marketplace"
  name: "metering-ocp"
  installPlanApproval: "Automatic" (5)
(1) The name is arbitrary.
(2) You must specify the openshift-metering namespace.
(3) Specify 4.3 as the channel.
(4) Specify the redhat-operators CatalogSource, which contains the metering-ocp package manifests. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured Operator Lifecycle Manager (OLM).
(5) Specify "Automatic" install plan approval.
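Create the Subscription object to install the Metering Operator:
$ oc create -f metering-sub.yaml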
After adding the Metering Operator to your cluster, you can install the components of metering by installing the metering stack.
Review the configuration options.
Create a MeteringConfig resource. You can begin the following process to generate a default MeteringConfig, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your MeteringConfig resource:
For configuration options, review About configuring metering.
At a minimum, you need to configure persistent storage and configure the Hive metastore.
There can only be one MeteringConfig resource in the openshift-metering namespace.
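A minimal sketch of a MeteringConfig that configures Hive with S3-backed storage is shown below; the bucket, region, and secret name are placeholder values that you must replace with your own, and other storage options are described in the configuring metering documentation:
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: operator-metering
  namespace: openshift-metering
spec:
  storage:
    type: hive
    hive:
      type: s3
      s3:
        bucket: example-bucket/metering   # placeholder bucket and path
        region: us-east-1                 # placeholder region
        secretName: example-s3-secret     # placeholder Secret containing AWS credentials
        createBucket: false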
From the web console, ensure you are on the Operator Details page for the Metering Operator in the openshift-metering project. You can navigate to this page by clicking Operators → Installed Operators, then selecting the Metering Operator.
Under Provided APIs, click Create Instance on the Metering Configuration card. This opens a YAML editor with the default MeteringConfig file where you can define your configuration.
For example configuration files and all supported configuration options, review the configuring metering documentation.
Enter your MeteringConfig into the YAML editor and click Create.
The MeteringConfig resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.
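Alternatively, you can create the MeteringConfig resource from the CLI instead of the web console YAML editor; for example, assuming your configuration is saved as metering-config.yaml (a hypothetical file name):
$ oc apply -f metering-config.yaml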
You can verify the metering installation by performing any of the following checks:
Check the Metering Operator ClusterServiceVersion (CSV) for the metering version. This can be done through either the web console or CLI.
Navigate to Operators → Installed Operators in the openshift-metering namespace.
Click Metering Operator.
Click Subscription for Subscription Details.
Check the Installed Version.
Check the Metering Operator CSV in the openshift-metering namespace:
$ oc --namespace openshift-metering get csv
In the following example, the 4.3 Metering Operator installation is successful:
NAME                                           DISPLAY                  VERSION                 REPLACES   PHASE
elasticsearch-operator.4.3.0-202006231303.p0   Elasticsearch Operator   4.3.0-202006231303.p0              Succeeded
metering-operator.v4.3.0                       Metering                 4.3.0                              Succeeded
Check that all required Pods in the openshift-metering namespace are created. This can be done through either the web console or CLI.
Many Pods rely on other components to function before they themselves can be considered ready. Some Pods may restart if other Pods take too long to start. This is to be expected during the Metering Operator installation.
Navigate to Workloads → Pods in the metering namespace and verify that Pods are being created. This can take several minutes after installing the metering stack.
Check that all required Pods in the openshift-metering namespace are created:
$ oc -n openshift-metering get pods
The output shows that all Pods are created, with their readiness shown in the READY column:
NAME                                  READY   STATUS    RESTARTS   AGE
hive-metastore-0                      2/2     Running   0          3m28s
hive-server-0                         3/3     Running   0          3m28s
metering-operator-68dd64cfb6-2k7d9    2/2     Running   0          5m17s
presto-coordinator-0                  2/2     Running   0          3m9s
reporting-operator-5588964bf8-x2tkn   2/2     Running   0          2m40s
Verify that the ReportDataSources are beginning to import data, indicated by a valid timestamp in the EARLIEST METRIC column. This might take several minutes. Filter out the "-raw" ReportDataSources, which do not import data:
$ oc get reportdatasources -n openshift-metering | grep -v raw
NAME                                     EARLIEST METRIC        NEWEST METRIC          IMPORT START           IMPORT END             LAST IMPORT TIME       AGE
node-allocatable-cpu-cores               2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T16:52:00Z   2019-08-05T18:52:00Z   2019-08-05T18:54:45Z   9m50s
node-allocatable-memory-bytes            2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T16:51:00Z   2019-08-05T18:51:00Z   2019-08-05T18:54:45Z   9m50s
node-capacity-cpu-cores                  2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:39Z   9m50s
node-capacity-memory-bytes               2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T16:52:00Z   2019-08-05T18:41:00Z   2019-08-05T18:54:44Z   9m50s
persistentvolumeclaim-capacity-bytes     2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:43Z   9m50s
persistentvolumeclaim-phase              2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T16:51:00Z   2019-08-05T18:29:00Z   2019-08-05T18:54:28Z   9m50s
persistentvolumeclaim-request-bytes      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:34Z   9m50s
persistentvolumeclaim-usage-bytes        2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:36Z   9m49s
pod-limit-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T16:52:00Z   2019-08-05T18:30:00Z   2019-08-05T18:54:26Z   9m49s
pod-limit-memory-bytes                   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:30Z   9m49s
pod-persistentvolumeclaim-request-info   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T16:51:00Z   2019-08-05T18:40:00Z   2019-08-05T18:54:37Z   9m49s
pod-request-cpu-cores                    2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T16:51:00Z   2019-08-05T18:18:00Z   2019-08-05T18:54:24Z   9m49s
pod-request-memory-bytes                 2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:32Z   9m49s
pod-usage-cpu-cores                      2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T16:52:00Z   2019-08-05T17:57:00Z   2019-08-05T18:54:10Z   9m49s
pod-usage-memory-bytes                   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T16:52:00Z   2019-08-05T18:08:00Z   2019-08-05T18:54:20Z   9m49s
After all Pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster.
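As a minimal sketch of a first report, the following Report resource assumes the built-in namespace-cpu-request query and an arbitrary one-day reporting window; adjust the name, query, and times for your own cluster:
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request
  namespace: openshift-metering
spec:
  query: namespace-cpu-request
  reportingStart: "2019-08-05T00:00:00Z"
  reportingEnd: "2019-08-06T00:00:00Z"
  runImmediately: true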
For more information on configuration steps and available storage platforms, see Configuring persistent storage.
For the steps to configure Hive, see Configuring the Hive metastore.