Common configuration options

Metering is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

Resource requests and limits

You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The default-resource-limits.yaml file below provides an example of setting resource requests and limits for each component.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi

      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi

  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
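After you update the MeteringConfig custom resource, the Metering Operator reconciles the change. For example, assuming the file name default-resource-limits.yaml used above and an installation in the openshift-metering namespace, as in the verification steps later in this section, you can apply the configuration with:

$ oc --namespace openshift-metering apply -f default-resource-limits.yaml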

Node selectors

You can run the metering components on specific sets of nodes. Set the nodeSelector on a metering component to control where the component is scheduled. The node-selectors.yaml file below provides an example of setting node selectors for each component.

Add the openshift.io/node-selector: "" namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify "" as the annotation value.
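For reference, the following is a minimal sketch of the metering namespace object with the annotation applied. The openshift-metering namespace name matches the one used in the verification steps later in this section:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering
  annotations:
    openshift.io/node-selector: ""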

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" (1)

  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" (1)
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown, or use key-value pairs based on the labels that are set on the node.
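For example, you can label a node so that it matches the selector shown above, and then confirm which nodes carry the label. The <node-name> value is a placeholder for one of your worker nodes:

$ oc label node <node-name> node-role.kubernetes.io/infra=""
$ oc get nodes -l node-role.kubernetes.io/infra=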

When the openshift.io/node-selector annotation is set on the project, its value is used in preference to the value of the spec.defaultNodeSelector field in the cluster-wide Scheduler object.

Verification

You can verify the metering node selectors by performing any of the following checks:

  • Verify that all pods for metering are correctly scheduled on the nodes that are configured in the MeteringConfig custom resource:

    1. Check all pods in the openshift-metering namespace:

      $ oc --namespace openshift-metering get pods -o wide

      The output shows the NODE and corresponding IP for each pod running in the openshift-metering namespace.

      Example output
      NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                                         NOMINATED NODE   READINESS GATES
      hive-metastore-0                      1/2     Running   0          4m33s   10.129.2.26   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      hive-server-0                         2/3     Running   0          4m21s   10.128.2.26   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
      metering-operator-964b4fb55-4p699     2/2     Running   0          7h30m   10.131.0.33   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      nfs-server                            1/1     Running   0          7h30m   10.129.2.24   ip-10-0-210-167.us-east-2.compute.internal   <none>           <none>
      presto-coordinator-0                  2/2     Running   0          4m8s    10.131.0.35   ip-10-0-189-6.us-east-2.compute.internal     <none>           <none>
      reporting-operator-869b854c78-8g2x5   1/2     Running   0          7h27m   10.128.2.25   ip-10-0-150-175.us-east-2.compute.internal   <none>           <none>
    2. Compare the NODE values from the previous output to the node NAME values in your cluster; a command for checking a single node directly follows the example output:

      $ oc get nodes
      Example output
      NAME                                         STATUS   ROLES    AGE   VERSION
      ip-10-0-147-106.us-east-2.compute.internal   Ready    master   14h   v1.19.0+6025c28
      ip-10-0-150-175.us-east-2.compute.internal   Ready    worker   14h   v1.19.0+6025c28
      ip-10-0-175-23.us-east-2.compute.internal    Ready    master   14h   v1.19.0+6025c28
      ip-10-0-189-6.us-east-2.compute.internal     Ready    worker   14h   v1.19.0+6025c28
      ip-10-0-205-158.us-east-2.compute.internal   Ready    master   14h   v1.19.0+6025c28
      ip-10-0-210-167.us-east-2.compute.internal   Ready    worker   14h   v1.19.0+6025c28
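      To check which metering pods landed on a single node, you can also filter pods by node name. This sketch reuses one of the worker node names from the sample output above:

      $ oc --namespace openshift-metering get pods -o wide --field-selector spec.nodeName=ip-10-0-150-175.us-east-2.compute.internal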
  • Verify that the node selector configuration in the MeteringConfig custom resource does not conflict with the cluster-wide node selector configuration in a way that prevents any metering operand pods from being scheduled.

    • Check the cluster-wide Scheduler object for the spec.defaultNodeSelector field, which shows where pods are scheduled by default:

      $ oc get schedulers.config.openshift.io cluster -o yaml
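      The relevant portion of the output might look like the following sketch. The defaultNodeSelector value shown here is illustrative; the field is absent when no cluster-wide default node selector is set:

      apiVersion: config.openshift.io/v1
      kind: Scheduler
      metadata:
        name: cluster
      spec:
        defaultNodeSelector: node-role.kubernetes.io/infra=
        mastersSchedulable: false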