Configuring the registry for OpenShift Data Foundation

To configure the OpenShift image registry on bare metal and vSphere to use Red Hat OpenShift Data Foundation storage, you must install OpenShift Data Foundation and then configure the image registry to use Ceph or NooBaa storage.

Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This document outlines the procedure to configure the image registry to use Ceph RGW storage.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage.

Procedure
  1. Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: rgwbucket
      namespace: openshift-storage (1)
    spec:
      storageClassName: ocs-storagecluster-ceph-rgw
      generateBucketName: rgwbucket
    EOF
    1 Alternatively, you can use the openshift-image-registry namespace.
  2. Get the bucket name by entering the following command:

    $ bucket_name=$(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')
  3. Get the AWS credentials by entering the following commands:

    $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
    $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
  4. Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket in the openshift-image-registry project by entering the following command:

    $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
  5. Get the route host by entering the following command:

    $ route_host=$(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')
  6. Create a config map that uses an ingress certificate by entering the following commands:

    $ oc extract secret/router-certs-default  -n openshift-ingress  --confirm
    $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt  -n openshift-config
  7. Configure the image registry to use the Ceph RGW object storage by entering the following command:

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
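
A few optional checks can confirm that the bucket was provisioned and that the registry picked up the new configuration. They assume the rgwbucket claim name and the namespaces used in the examples above; adjust the names if you changed them.

    # The object bucket claim reports a Bound phase once the bucket is provisioned.
    $ oc get obc -n openshift-storage rgwbucket -o jsonpath='{.status.phase}'

    # The credentials secret must exist in the openshift-image-registry project.
    $ oc get secret image-registry-private-configuration-user -n openshift-image-registry

    # The Image Registry Operator rolls out the new configuration and reports Available.
    $ oc get clusteroperator image-registry
    $ oc get pods -n openshift-image-registry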

Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This document outlines the procedure to configure the image registry to use Noobaa storage.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage.

Procedure
  1. Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: noobaatest
      namespace: openshift-storage (1)
    spec:
      storageClassName: openshift-storage.noobaa.io
      generateBucketName: noobaatest
    EOF
    1 Alternatively, you can use the openshift-image-registry namespace.
  2. Get the bucket name by entering the following command:

    $ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')
  3. Get the AWS credentials by entering the following commands:

    $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
    $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
  4. Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket in the openshift-image-registry project by entering the following command:

    $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
  5. Get the route host by entering the following command:

    $ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')
  6. Create a config map that uses an ingress certificate by entering the following commands:

    $ oc extract secret/$(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm
    $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt  -n openshift-config
  7. Configure the image registry to use the Noobaa object storage by entering the following command:

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
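
To confirm that the patch was applied and the Multicloud Object Gateway bucket is in use, the following optional checks can help; they assume the noobaatest claim name and the openshift-storage namespace used in the examples above.

    # The object bucket claim reports a Bound phase once the bucket is provisioned.
    $ oc get obc -n openshift-storage noobaatest -o jsonpath='{.status.phase}'

    # The registry configuration now references the NooBaa S3 route endpoint.
    $ oc get config.image/cluster -o jsonpath='{.spec.storage.s3.regionEndpoint}'

    # The Image Registry Operator rolls out the new configuration and reports Available.
    $ oc get clusteroperator image-registry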

Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This document outlines the procedure to configure the image registry to use CephFS storage.

CephFS uses persistent volume claim (PVC) storage. Using PVCs for image registry storage is not recommended if other options, such as Ceph RGW or Noobaa, are available.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage.

Procedure
  1. Create a PVC that uses the ocs-storagecluster-cephfs storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: registry-storage-pvc
      namespace: openshift-image-registry
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    EOF
  2. Configure the image registry to use the CephFS file system storage by entering the following command:

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge
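
Optionally, you can verify that the claim was bound and that the registry rolled out with the new volume; these checks assume the registry-storage-pvc name used in the example above.

    # The PVC must report a Bound status against the ocs-storagecluster-cephfs storage class.
    $ oc get pvc registry-storage-pvc -n openshift-image-registry

    # The registry pods restart with the new volume, and the Operator reports Available.
    $ oc get pods -n openshift-image-registry
    $ oc get clusteroperator image-registry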