Using OADP 1.2 Data Mover with Ceph storage

You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both.

OADP 1.2 Data Mover leverages Ceph features that support large-scale environments. One of these is the shallow copy method, which is available for OpenShift Container Platform 4.12 and later. This feature supports backing up and restoring with a StorageClass and AccessMode other than those of the source persistent volume claim (PVC).

The CephFS shallow copy feature is a backup feature only. It is not part of restore operations.

Prerequisites for using OADP 1.2 Data Mover with Ceph storage

The following prerequisites apply to all backup and restore operations of data using OpenShift API for Data Protection (OADP) 1.2 Data Mover in a cluster that uses Ceph storage:

  • You have installed OpenShift Container Platform 4.12 or later.

  • You have installed the OADP Operator.

  • You have created a Secret named cloud-credentials in the openshift-adp namespace; an example command follows this list.

  • You have installed Red Hat OpenShift Data Foundation.

  • You have installed the latest VolSync Operator by using Operator Lifecycle Manager.
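
For example, with an AWS S3 backup location, as in the DPA examples later in this document, you can create the cloud-credentials secret from a credentials file. This is a minimal sketch; the file name credentials-velero is an assumption, and the file must contain your own object storage credentials:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero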

Defining custom resources for use with OADP 1.2 Data Mover

When you install Red Hat OpenShift Data Foundation, it automatically creates default CephFS and CephRBD StorageClass and VolumeSnapshotClass custom resources (CRs). You must define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.

After you define the CRs, you must make several other changes to your environment before you can perform your backup and restore operations.

Defining CephFS custom resources for use with OADP 1.2 Data Mover

When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephFS StorageClass custom resource (CR) and a default CephFS VolumeSnapshotClass CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.

Procedure
  1. Define the VolumeSnapshotClass CR as in the following example:

    Example VolumeSnapshotClass CR
    apiVersion: snapshot.storage.k8s.io/v1
    deletionPolicy: Retain (1)
    driver: openshift-storage.cephfs.csi.ceph.com
    kind: VolumeSnapshotClass
    metadata:
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true" (2)
      labels:
        velero.io/csi-volumesnapshot-class: "true" (3)
      name: ocs-storagecluster-cephfsplugin-snapclass
    parameters:
      clusterID: openshift-storage
      csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
    1 Must be set to Retain.
    2 Must be set to true.
    3 Must be set to true.
  2. Define the StorageClass CR as in the following example:

    Example StorageClass CR
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ocs-storagecluster-cephfs
      annotations:
        description: Provides RWO and RWX Filesystem volumes
        storageclass.kubernetes.io/is-default-class: "true" (1)
    provisioner: openshift-storage.cephfs.csi.ceph.com
    parameters:
      clusterID: openshift-storage
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    1 Must be set to true.
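
After you update the CRs, apply them and confirm that the default class settings took effect. The following commands are a sketch; the file names cephfs-volumesnapshotclass.yaml and cephfs-storageclass.yaml are assumptions for wherever you saved the CRs:

    $ oc apply -f cephfs-volumesnapshotclass.yaml
    $ oc apply -f cephfs-storageclass.yaml
    $ oc get storageclass ocs-storagecluster-cephfs -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'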

Defining CephRBD custom resources for use with OADP 1.2 Data Mover

When you install Red Hat OpenShift Data Foundation, it automatically creates a default CephRBD StorageClass custom resource (CR) and a default CephRBD VolumeSnapshotClass CR. You can define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.

Procedure
  1. Define the VolumeSnapshotClass CR as in the following example:

    Example VolumeSnapshotClass CR
    apiVersion: snapshot.storage.k8s.io/v1
    deletionPolicy: Retain (1)
    driver: openshift-storage.rbd.csi.ceph.com
    kind: VolumeSnapshotClass
    metadata:
      labels:
        velero.io/csi-volumesnapshot-class: "true" (2)
      name: ocs-storagecluster-rbdplugin-snapclass
    parameters:
      clusterID: openshift-storage
      csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
    1 Must be set to Retain.
    2 Must be set to true.
  2. Define the StorageClass CR as in the following example:

    Example StorageClass CR
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ocs-storagecluster-ceph-rbd
      annotations:
        description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters:
      csi.storage.k8s.io/fstype: ext4
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
      imageFormat: '2'
      clusterID: openshift-storage
      imageFeatures: layering
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
      pool: ocs-storagecluster-cephblockpool
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate

Defining additional custom resources for use with OADP 1.2 Data Mover

After you define the default CephFS and CephRBD StorageClass and VolumeSnapshotClass custom resources (CRs), you must create the following CRs:

  • A CephFS StorageClass CR defined to use the shallow copy feature

  • A Restic Secret CR

Procedure
  1. Create a CephFS StorageClass CR with the backingSnapshot parameter set to true, as in the following example:

    Example CephFS StorageClass CR with backingSnapshot set to true
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ocs-storagecluster-cephfs-shallow
      annotations:
        description: Provides RWO and RWX Filesystem volumes
        storageclass.kubernetes.io/is-default-class: "false"
    provisioner: openshift-storage.cephfs.csi.ceph.com
    parameters:
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
      clusterID: openshift-storage
      fsName: ocs-storagecluster-cephfilesystem
      csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
      backingSnapshot: "true" (1)
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    1 Must be set to true.

    Ensure that the CephFS VolumeSnapshotClass CR and the CephFS StorageClass CRs use the same CSI driver. That is, the driver value of the VolumeSnapshotClass CR must match the provisioner value of the StorageClass CRs.
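
    For example, you can compare the two values with commands like the following, using the resource names from the examples in this document. Both commands must print openshift-storage.cephfs.csi.ceph.com:

      $ oc get volumesnapshotclass ocs-storagecluster-cephfsplugin-snapclass -o jsonpath='{.driver}'
      $ oc get storageclass ocs-storagecluster-cephfs-shallow -o jsonpath='{.provisioner}'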

  2. Configure a Restic Secret CR as in the following example:

    Example Restic Secret CR
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret_name>
      namespace: <namespace>
    type: Opaque
    stringData:
      RESTIC_PASSWORD: <restic_password>
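
    Alternatively, you can create the same Secret directly from the CLI. This is a sketch; the name, namespace, and password are placeholders, and the Secret name must match the credentialName that you reference later in the DataProtectionApplication CR:

      $ oc create secret generic <secret_name> -n <namespace> --from-literal RESTIC_PASSWORD=<restic_password>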

Backing up and restoring data using OADP 1.2 Data Mover and CephFS storage

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage by enabling the shallow copy feature of CephFS.

Prerequisites
  • A stateful application is running in a separate namespace with persistent volume claims (PVCs) using CephFS as the provisioner.

  • The StorageClass and VolumeSnapshotClass custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.

  • There is a secret cloud-credentials in the openshift-adp namespace.

Creating a DPA for use with CephFS storage

You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using CephFS storage.

Procedure
  1. Verify that the deletionPolicy field of the VolumeSnapshotClass CR is set to Retain by running the following command:

    $ oc get volumesnapshotclass -A  -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{"  "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}'
  2. Verify that the labels of the VolumeSnapshotClass CR are set to true by running the following command:

    $ oc get volumesnapshotclass -A  -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{"  "}{"labels: "}{.metadata.labels}{"\n"}{end}'
  3. Verify that the storageclass.kubernetes.io/is-default-class annotation of the StorageClass CR is set to true by running the following command:

    $ oc get storageClass -A  -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{"  "}{"annotations: "}{.metadata.annotations}{"\n"}{end}'
  4. Create a Data Protection Application (DPA) CR similar to the following example:

    Example DPA CR
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: velero-sample
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            config:
              profile: default
              region: us-east-1
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <my_bucket>
              prefix: velero
            provider: aws
      configuration:
        restic:
          enable: false (1)
        velero:
          defaultPlugins:
            - openshift
            - aws
            - csi
            - vsm
      features:
        dataMover:
          credentialName: <restic_secret_name> (2)
          enable: true (3)
          volumeOptionsForStorageClasses: (4)
            ocs-storagecluster-cephfs:
              sourceVolumeOptions:
                accessMode: ReadOnlyMany
                cacheAccessMode: ReadWriteMany
                cacheStorageClassName: ocs-storagecluster-cephfs
                storageClassName: ocs-storagecluster-cephfs-shallow
    1 There is no default value for the enable field. Valid values are true or false.
    2 Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic Secret, the CR uses the default value dm-credential for this parameter.
    3 There is no default value for the enable field. Valid values are true or false.
    4 Optional parameter. You can define a different set of VolumeOptionsForStorageClass labels for each storageClass volume. This configuration provides a backup for volumes with different providers. The optional VolumeOptionsForStorageClass parameter is typically used with CephFS but can be used for any storage type.
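
    After you create the DPA CR, you can confirm that it reconciled and that the OADP pods are running. The following commands are a sketch; velero-sample is the DPA name used in the example above:

      $ oc get dpa velero-sample -n openshift-adp -o jsonpath='{.status.conditions[0].type}'
      $ oc get pods -n openshift-adp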

Backing up data using OADP 1.2 Data Mover and CephFS storage

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data using CephFS storage by enabling the shallow copy feature of CephFS storage.

Procedure
  1. Create a Backup CR as in the following example:

    Example Backup CR
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
      namespace: <protected_ns>
    spec:
      includedNamespaces:
      - <app_ns>
      storageLocation: velero-sample-1
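
    You can apply the Backup CR from a file and then check its phase before monitoring the VolumeSnapshotBackup CRs in the next step. The file name backup.yaml in this sketch is an assumption:

      $ oc create -f backup.yaml
      $ oc get backup <backup_name> -n <protected_ns> -o jsonpath='{.status.phase}'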
  2. Monitor the progress of the VolumeSnapshotBackup CRs by completing the following steps:

    1. To check the progress of all the VolumeSnapshotBackup CRs, run the following command:

      $ oc get vsb -n <app_ns>
    2. To check the progress of a specific VolumeSnapshotBackup CR, run the following command:

      $ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
  3. Wait several minutes until the VolumeSnapshotBackup CR has the status Completed.

  4. Verify that there is at least one snapshot in the object store that is given in the Restic Secret. You can check for this snapshot in your targeted BackupStorageLocation storage provider, under the /<OADP_namespace> prefix.
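
    For example, with the AWS S3 backup location from the DPA example, you can list the Data Mover snapshots with the AWS CLI. This is a sketch; it assumes the AWS CLI is configured with the same credentials as the cloud-credentials secret and that OADP is installed in the openshift-adp namespace:

      $ aws s3 ls s3://<my_bucket>/openshift-adp/ --recursive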

Restoring data using OADP 1.2 Data Mover and CephFS storage

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data using CephFS storage if the shallow copy feature of CephFS storage was enabled for the back up procedure. The shallow copy feature is not used in the restore procedure.

Procedure
  1. Delete the VolumeSnapshotBackup CRs in the application namespace by running the following command:

    $ oc delete vsb -n <app_namespace> --all
  2. Delete any VolumeSnapshotContent CRs that were created during backup by running the following command:

    $ oc delete volumesnapshotcontent --all
  3. Create a Restore CR as in the following example:

    Example Restore CR
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: <restore_name>
      namespace: <protected_ns>
    spec:
      backupName: <previous_backup_name>
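
    You can apply the Restore CR from a file and then check its phase before monitoring the VolumeSnapshotRestore CRs in the next step. The file name restore.yaml is an assumption:

      $ oc create -f restore.yaml
      $ oc get restore <restore_name> -n <protected_ns> -o jsonpath='{.status.phase}'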
  4. Monitor the progress of the VolumeSnapshotRestore CRs by doing the following:

    1. To check the progress of all the VolumeSnapshotRestore CRs, run the following command:

      $ oc get vsr -n <app_ns>
    2. To check the progress of a specific VolumeSnapshotRestore CR, run the following command:

      $ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
  5. Verify that your application data has been restored by running the following command:

    $ oc get route <route_name> -n <app_ns> -o jsonpath="{.spec.host}"
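
    If the application exposes a web endpoint, you can also send a request to the restored route as a final check. This is a sketch; the scheme and any required authentication depend on your application:

      $ curl -s http://$(oc get route <route_name> -n <app_ns> -o jsonpath='{.spec.host}')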

Backing up and restoring data using OADP 1.2 Data Mover and split volumes (CephFS and Ceph RBD)

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data in an environment that has split volumes, that is, an environment that uses both CephFS and CephRBD.

Prerequisites
  • A stateful application is running in a separate namespace with persistent volume claims (PVCs) provisioned by CephFS, CephRBD, or both.

  • The StorageClass and VolumeSnapshotClass custom resources (CRs) are defined for CephFS and OADP 1.2 Data Mover.

  • There is a secret cloud-credentials in the openshift-adp namespace.

Creating a DPA for use with split volumes

You must create a Data Protection Application (DPA) CR before you use the OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data using split volumes.

Procedure
  • Create a Data Protection Application (DPA) CR as in the following example:

    Example DPA CR for environment with split volumes
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: velero-sample
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            config:
              profile: default
              region: us-east-1
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <my_bucket>
              prefix: velero
            provider: aws
      configuration:
        restic:
          enable: false
        velero:
          defaultPlugins:
            - openshift
            - aws
            - csi
            - vsm
      features:
        dataMover:
          credentialName: <restic_secret_name> (1)
          enable: true
          volumeOptionsForStorageClasses: (2)
            ocs-storagecluster-cephfs:
              sourceVolumeOptions:
                accessMode: ReadOnlyMany
                cacheAccessMode: ReadWriteMany
                cacheStorageClassName: ocs-storagecluster-cephfs
                storageClassName: ocs-storagecluster-cephfs-shallow
            ocs-storagecluster-ceph-rbd:
              sourceVolumeOptions:
                storageClassName: ocs-storagecluster-ceph-rbd
                cacheStorageClassName: ocs-storagecluster-ceph-rbd
              destinationVolumeOptions:
                storageClassName: ocs-storagecluster-ceph-rbd
                cacheStorageClassName: ocs-storagecluster-ceph-rbd
    1 Use the Restic Secret that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph. If you do not use your Restic Secret, the CR uses the default value dm-credential for this parameter.
    2 Optional parameter. You can define a different set of VolumeOptionsForStorageClass labels for each storageClass volume. This configuration provides a backup for volumes with different providers. The optional VolumeOptionsForStorageClass parameter is typically used with CephFS but can be used for any storage type.

Backing up data using OADP 1.2 Data Mover and split volumes

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up data in an environment that has split volumes.

Procedure
  1. Create a Backup CR as in the following example:

    Example Backup CR
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
      namespace: <protected_ns>
    spec:
      includedNamespaces:
      - <app_ns>
      storageLocation: velero-sample-1
  2. Monitor the progress of the VolumeSnapshotBackup CRs by completing the following steps:

    1. To check the progress of all the VolumeSnapshotBackup CRs, run the following command:

      $ oc get vsb -n <app_ns>
    2. To check the progress of a specific VolumeSnapshotBackup CR, run the following command:

      $ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
  3. Wait several minutes until the VolumeSnapshotBackup CR has the status Completed.

  4. Verify that there is at least one snapshot in the object store that is given in the Restic Secret. You can check for this snapshot in your targeted BackupStorageLocation storage provider, under the /<OADP_namespace> prefix.

Restoring data using OADP 1.2 Data Mover and split volumes

You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to restore data in an environment that has split volumes, if the shallow copy feature of CephFS storage was enabled for the back up procedure. The shallow copy feature is not used in the restore procedure.

Procedure
  1. Delete the VolumeSnapshotBackup CRs in the application namespace by running the following command:

    $ oc delete vsb -n <app_namespace> --all
  2. Delete any VolumeSnapshotContent CRs that were created during backup by running the following command:

    $ oc delete volumesnapshotcontent --all
  3. Create a Restore CR as in the following example:

    Example Restore CR
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: <restore_name>
      namespace: <protected_ns>
    spec:
      backupName: <previous_backup_name>
  4. Monitor the progress of the VolumeSnapshotRestore CRs by doing the following:

    1. To check the progress of all the VolumeSnapshotRestore CRs, run the following command:

      $ oc get vsr -n <app_ns>
    2. To check the progress of a specific VolumeSnapshotRestore CR, run the following command:

      $ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
  5. Verify that your application data has been restored by running the following command:

    $ oc get route <route_name> -n <app_ns> -o jsonpath="{.spec.host}"