Configuring persistent storage

Metering requires persistent storage to persist data collected by the metering-operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.

Storing data in Amazon S3

Metering can use an existing Amazon S3 bucket or create a bucket for storage.

Metering does not manage or delete any S3 bucket data. When uninstalling metering, any S3 buckets used to store metering data must be manually cleaned up.
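
For example, assuming the bucket and path used in the example configuration below (placeholders for your own values), the metering data could be removed with the AWS CLI:

aws s3 rm s3://bucketname/path/ --recursive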

To use Amazon S3 for storage, edit the spec.storage section in the example s3-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3"
      s3:
        bucket: "bucketname/path/" (1)
        region: "us-west-1" (2)
        secretName: "my-aws-secret" (3)
        # Set to false if you want to provide an existing bucket, instead of
        # having metering create the bucket on your behalf.
        createBucket: true (4)
1 Specify the name of the bucket where you would like to store your data. You may optionally specify the path within the bucket.
2 Specify the region of your bucket.
3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the examples that follow for more details.
4 Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.

Use the example secret below as a template.

The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.

apiVersion: v1
kind: Secret
metadata:
  name: your-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="

You can use the following command to create the secret.

This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.

oc create secret -n openshift-metering generic your-aws-secret --from-literal=aws-access-key-id=your-access-key --from-literal=aws-secret-access-key=your-secret-key
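
To verify that the credentials were stored correctly, you can decode them back out of the secret, for example:

oc -n openshift-metering get secret your-aws-secret -o jsonpath='{.data.aws-access-key-id}' | base64 --decode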

The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the aws/read-write.json file below.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*",
                "arn:aws:s3:::operator-metering-data"
            ]
        }
    ]
}
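
One way to grant these permissions is to attach the policy inline to the IAM user whose keys are stored in the secret. The user name below is a placeholder, and the same approach works for the aws/read-write-create.json policy shown next:

aws iam put-user-policy --user-name metering-user --policy-name metering-read-write --policy-document file://aws/read-write.json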

If spec.storage.hive.s3.createBucket is set to true or left unset, use the aws/read-write-create.json file below, which also contains permissions for creating and deleting buckets.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Action": [
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:HeadBucket",
                "s3:ListBucket",
                "s3:CreateBucket",
                "s3:DeleteBucket",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::operator-metering-data/*",
                "arn:aws:s3:::operator-metering-data"
            ]
        }
    ]
}

Storing data in S3-compatible storage

To use S3-compatible storage such as NooBaa, edit the spec.storage section in the example s3-compatible-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "s3Compatible"
      s3Compatible:
        bucket: "bucketname" (1)
        endpoint: "http://example:port-number" (2)
        secretName: "my-aws-secret" (3)
1 Specify the name of your S3-compatible bucket.
2 Specify the endpoint for your storage.
3 The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example that follows for more details.

Use the example secret below as a template.

apiVersion: v1
kind: Secret
metadata:
  name: your-aws-secret
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
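
Because this secret has the same format as the Amazon S3 secret, you can create it with the same kind of command:

oc create secret -n openshift-metering generic your-aws-secret --from-literal=aws-access-key-id=your-access-key --from-literal=aws-secret-access-key=your-secret-key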

Storing data in Microsoft Azure

To store data in Azure blob storage, you must use an existing container. Edit the spec.storage section in the example azure-blob-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "azure"
      azure:
        container: "bucket1" (1)
        secretName: "my-azure-secret" (2)
        rootDirectory: "/testDir" (3)
1 Specify the container name.
2 Specify a secret in the metering namespace. See the examples that follow for more details.
3 You can optionally specify the directory where you would like to store your data.

Use the example secret below as a template.

apiVersion: v1
kind: Secret
metadata:
  name: your-azure-secret
data:
  azure-storage-account-name: "dGVzdAo="
  azure-secret-access-key: "c2VjcmV0Cg=="

You can use the following command to create the secret.

oc create secret -n openshift-metering generic your-azure-secret --from-literal=azure-storage-account-name=your-storage-account-name --from-literal=azure-secret-access-key=your-secret-key
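
If your azure-secret-access-key is the storage account access key, you can look it up with the Azure CLI before creating the secret. The account and resource group names below are placeholders:

az storage account keys list --account-name your-storage-account-name --resource-group your-resource-group --query '[0].value' --output tsv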

Storing data in Google Cloud Storage

To store your data in Google Cloud Storage, you must use an existing bucket. Edit the spec.storage section in the example gcs-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "gcs"
      gcs:
        bucket: "metering-gcs/test1" (1)
        secretName: "my-gcs-secret" (2)
1 Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data.
2 Specify a secret in the metering namespace. See the example that follows for more details.

Use the example secret below as a template:

apiVersion: v1
kind: Secret
metadata:
  name: your-gcs-secret
data:
  gcs-service-account.json: "c2VjcmV0Cg=="

You can use the following command to create the secret.

oc create secret -n openshift-metering generic your-gcs-secret --from-file gcs-service-account.json=/path/to/your/service-account-key.json
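
If you do not already have a service account key file, you can generate one with the gcloud CLI. The service account email below is a placeholder:

gcloud iam service-accounts keys create /path/to/your/service-account-key.json --iam-account=your-service-account@your-project.iam.gserviceaccount.com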

Storing data in shared volumes

Using NFS with metering is not recommended.

Metering has no storage by default, but it can use any ReadWriteMany PersistentVolume or any StorageClass that provisions a ReadWriteMany PersistentVolume.

Procedure
  • To use a ReadWriteMany PersistentVolume for storage, modify the shared-storage.yaml file below.

apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: "hive"
    hive:
      type: "sharedPVC"
      sharedPVC:
        claimName: "metering-nfs" (1)
        # uncomment the lines below to provision a new PVC using the specified (2)
        # storageClass.
        # createPVC: true
        # storageClass: "my-nfs-storage-class"
        # size: 5Gi

Select one of the configuration options below:

1 Set storage.hive.sharedPVC.claimName to the name of an existing ReadWriteMany PersistentVolumeClaim (PVC). This is necessary if you do not have dynamic volume provisioning or if you want more control over how the PersistentVolume is created. An example PVC is shown after this list.
2 Set storage.hive.sharedPVC.createPVC to true and set storage.hive.sharedPVC.storageClass to the name of a StorageClass with ReadWriteMany access mode. This uses dynamic volume provisioning to create the volume automatically.
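
For option 1, if you need to create the ReadWriteMany PVC yourself, a claim similar to the following sketch could work. The name, storage class, and size are taken from the example configuration above; adjust them for your environment.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metering-nfs
  namespace: openshift-metering
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-nfs-storage-class
  resources:
    requests:
      storage: 5Gi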