Configuring OADP with OpenShift Virtualization

You can install the OpenShift API for Data Protection (OADP) with OKD Virtualization by installing the OADP Operator and configuring a backup location. Then, you can install the Data Protection Application.

Back up and restore virtual machines by using the OpenShift API for Data Protection.

OpenShift API for Data Protection with OKD Virtualization supports the following backup and restore storage options:

  • Container Storage Interface (CSI) backups

  • Container Storage Interface (CSI) backups with DataMover

The following storage options are excluded:

  • File system backup and restore

  • Volume snapshot backups and restores

To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.
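For example, you can disable the default catalog sources by running the following command; this is only the first step of a disconnected setup, because mirroring the Operator catalog to your registry is also required:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'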

Installing and configuring OADP with OKD Virtualization

As a cluster administrator, you install OADP by installing the OADP Operator.

The latest version of the OADP Operator installs Velero 1.16.

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

Procedure
  1. Install the OADP Operator according to the instructions for your storage provider.

  2. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins.

  3. Back up virtual machines by creating a Backup custom resource (CR).

    Red Hat support is limited to the following options:

    • CSI backups

    • CSI backups with DataMover

You restore the Backup CR by creating a Restore CR.

Installing the Data Protection Application

You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.

Prerequisites
  • You must install the OADP Operator.

  • You must configure object storage as a backup location.

  • If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.

  • If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

    If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
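    For example, assuming that your object storage credentials are stored in a file named credentials-velero, you can create the default Secret by running a command similar to the following:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero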

Procedure
  1. Click Operators → Installed Operators and select the OADP Operator.

  2. Under Provided APIs, click Create instance in the DataProtectionApplication box.

  3. Click YAML View and update the parameters of the DataProtectionApplication manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp (1)
    spec:
      configuration:
        velero:
          defaultPlugins:
            - kubevirt (2)
            - gcp (3)
            - csi (4)
            - openshift (5)
          resourceTimeout: 10m (6)
        nodeAgent: (7)
          enable: true (8)
          uploaderType: kopia (9)
          podConfig:
            nodeSelector: <node_selector> (10)
      backupLocations:
        - velero:
            provider: gcp (11)
            default: true
            credential:
              key: cloud
              name: <default_secret> (12)
            objectStorage:
              bucket: <bucket_name> (13)
              prefix: <prefix> (14)
    1 The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
    2 The kubevirt plugin is mandatory for OKD Virtualization.
    3 Specify the plugin for the backup provider, for example, gcp, if it exists.
    4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
    5 The openshift plugin is mandatory.
    6 Specify how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before a timeout occurs. The default is 10m.
    7 The nodeAgent configures the node agent, which deploys as a daemon set and performs data movement operations, such as File System Backup, on the cluster nodes.
    8 Set this value to true if you want to enable nodeAgent and perform File System Backup.
    9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
    10 Specify the nodes on which Kopia is available. By default, Kopia runs on all nodes.
    11 Specify the backup provider.
    12 Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
    13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
    14 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
  4. Click Create.
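    As noted in the nodeAgent callouts, you can enable File System Backup for a backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. File System Backup applies to containerized workloads only; it is not supported for OKD Virtualization workloads (see the support matrix at the end of this page). The following is a minimal sketch with placeholder values:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
      namespace: openshift-adp
    spec:
      includedNamespaces:
        - <application_namespace>
      defaultVolumesToFsBackup: true
      storageLocation: <backup_storage_location_name>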

Verification
  1. Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

    $ oc get all -n openshift-adp
    Example output
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s
  2. Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

    $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
    Example output
    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
  3. Verify that the type is set to Reconciled.

  4. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp
    Example output
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error. This is because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not yet ready, which causes the backup to fail. In such a case, retry the backup a few minutes after the VM boots.

Backing up a single VM

If you have a namespace with multiple virtual machines (VMs) and want to back up only one of them, you can use a label selector to filter the VM to include in the backup. You can filter the VM by using the app: vmname label.

Prerequisites
  • You have installed the OADP Operator.

  • You have multiple VMs running in a namespace.

  • You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR).

  • You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and BackupStorageLocation is available.

Procedure
  1. Configure the Backup CR as shown in the following example:

    Example Backup CR
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: vmbackupsingle
      namespace: openshift-adp
    spec:
      snapshotMoveData: true
      includedNamespaces:
      - <vm_namespace> (1)
      labelSelector:
        matchLabels:
          app: <vm_app_name> (2)
      storageLocation: <backup_storage_location_name> (3)
    1 Specify the name of the namespace where you have created the VMs.
    2 Specify the VM name that needs to be backed up.
    3 Specify the name of the BackupStorageLocation CR.
  2. To create a Backup CR, run the following command:

    $ oc apply -f <backup_cr_file_name> (1)
    1 Specify the name of the Backup CR file.
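    To confirm that the backup finished, you can check the phase of the Backup CR, for example by running the following command. A phase of Completed indicates that the backup succeeded:

    $ oc get backups.velero.io vmbackupsingle -n openshift-adp -o jsonpath='{.status.phase}'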

Restoring a single VM

After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM.

Prerequisites
  • You have installed the OADP Operator.

  • You have backed up a single VM by using the label selector.

Procedure
  1. Configure the Restore CR as shown in the following example:

    Example Restore CR
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: vmrestoresingle
      namespace: openshift-adp
    spec:
      backupName: vmbackupsingle (1)
      restorePVs: true
    1 Specify the name of the backup of a single VM.
  2. To restore the single VM, run the following command:

    $ oc apply -f <restore_cr_file_name> (1)
    1 Specify the name of the Restore CR file.

    When you restore a backup of VMs, you might notice that the Ceph storage capacity allocated for the restore is higher than expected. This behavior is observed only during the kubevirt restore and if the volume type of the VM is block.

    Use the rbd sparsify tool to reclaim space on target volumes. For more details, see Reclaiming space on target volumes.
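    As with backups, you can check the phase of the Restore CR to confirm that the restore finished, for example:

    $ oc get restores.velero.io vmrestoresingle -n openshift-adp -o jsonpath='{.status.phase}'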

Restoring a single VM from a backup of multiple VMs

If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and that the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of the DataVolume of the VM.

Prerequisites
  • You have installed the OADP Operator.

  • You have labeled the VMs that need to be backed up.

  • You have a backup of multiple VMs.

Procedure
  1. Before you take a backup of many VMs, ensure that the VMs are labeled by running the following command:

    $ oc label vm <vm_name> app=<vm_name> -n <vm_namespace>
  2. Configure the label selectors in the Restore CR as shown in the following example:

    Example Restore CR
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: singlevmrestore
      namespace: openshift-adp
    spec:
      backupName: multiplevmbackup
      restorePVs: true
      LabelSelectors:
        - matchLabels:
            kubevirt.io/created-by: <datavolume_uid> (1)
        - matchLabels:
            app: <vm_name> (2)
    1 Specify the UID of the DataVolume of the VM that you want to restore. For example, b6…53a-ddd7-4d9d-9407-a0c…e5.
    2 Specify the name of the VM that you want to restore. For example, test-vm.
  3. To restore a VM, run the following command:

    $ oc apply -f <restore_cr_file_name> (1)
    1 Specify the name of the Restore CR file.
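    The DataVolume UID for the kubevirt.io/created-by label is available on the DataVolume object of the VM. For example, you can retrieve it by running a command similar to the following, where the DataVolume name is a placeholder:

    $ oc get datavolume <datavolume_name> -n <vm_namespace> -o jsonpath='{.metadata.uid}'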

Configuring the DPA with client burst and QPS settings

The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the velero server by configuring the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the Data Protection Application (DPA).

Prerequisites
  • You have installed the OADP Operator.

Procedure
  • Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

    Example Data Protection Application
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: restic
        velero:
          client-burst: 500 (1)
          client-qps: 300 (2)
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
    1 Specify the client-burst value. In this example, the client-burst field is set to 500.
    2 Specify the client-qps value. In this example, the client-qps field is set to 300.
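    After the DPA is reconciled, you can inspect the arguments of the Velero deployment to confirm that the values were applied; the burst and QPS settings are expected to appear among the container arguments, although the exact flag names depend on the Velero version:

    $ oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].args}'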

Configuring the node agent as a non-root and non-privileged user

To enhance the node agent security, you can configure the OADP Operator node agent daemonset to run as a non-root and non-privileged user by using the spec.configuration.velero.disableFsBackup setting in the DataProtectionApplication (DPA) custom resource (CR).

When you set spec.configuration.velero.disableFsBackup to true, the node agent security context sets the root file system to read-only and sets the privileged flag to false.

Setting spec.configuration.velero.disableFsBackup to true enhances the node agent security by removing the need for privileged containers and enforcing a read-only root file system.

However, it also disables File System Backup (FSB) with Kopia. If your workloads rely on FSB for backing up volumes that do not support native snapshots, then you should evaluate whether the disableFsBackup configuration fits your use case.

Prerequisites
  • You have installed the OADP Operator.

Procedure
  • Configure the disableFsBackup field in the DPA as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
      - velero:
          credential:
            key: cloud
            name: cloud-credentials
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: velero
          provider: gcp
      configuration:
        nodeAgent: (1)
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
          - csi
          - gcp
          - openshift
          disableFsBackup: true (2)
    1 Enable the node agent in the DPA.
    2 Set the disableFsBackup field to true.
Verification
  1. Verify that the node agent security context is set to run as non-root and the root file system is readOnly by running the following command:

    $ oc get daemonset node-agent -o yaml

    Example output

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      ...
      name: node-agent
      namespace: openshift-adp
      ...
    spec:
      ...
      template:
        metadata:
          ...
        spec:
          containers:
          ...
            securityContext:
              allowPrivilegeEscalation: false (1)
              capabilities:
                drop:
                - ALL
              privileged: false (2)
              readOnlyRootFilesystem: true (3)
            ...
          nodeSelector:
            kubernetes.io/os: linux
          os:
            name: linux
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            runAsNonRoot: true (4)
            seccompProfile:
              type: RuntimeDefault
          serviceAccount: velero
          serviceAccountName: velero
          ....
    1 The allowPrivilegeEscalation field is false.
    2 The privileged field is false.
    3 The root file system is read-only.
    4 The node agent is run as a non-root user.

Configuring node agents and node labels

The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.

Procedure
  1. Run the node agent on any node that you choose by adding a custom label:

    $ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

    The label that you specify must be present on each node on which you want to run the node agent.

  2. Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes:

    configuration:
      nodeAgent:
        enable: true
        podConfig:
          nodeSelector:
            node-role.kubernetes.io/nodeAgent: ""

    The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:

        configuration:
          nodeAgent:
            enable: true
            podConfig:
              nodeSelector:
                node-role.kubernetes.io/infra: ""
                node-role.kubernetes.io/worker: ""
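    After the DPA is reconciled with the custom label, you can confirm that the node agent pods run only on the labeled nodes, for example:

    $ oc get pods -n openshift-adp -o wide | grep node-agent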

Configuring node agent load affinity

You can schedule the node agent pods on specific nodes by using the spec.configuration.nodeAgent.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).

See the following example in which you can schedule the node agent pods on nodes with the label label.io/role: cpu-1 and other-label.io/other-role: cpu-2.

...
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          label.io/role: cpu-1
          other-label.io/other-role: cpu-2
        ...

You can add more restrictions on node agent pod scheduling by using the nodeAgent.loadAffinity object in the DPA spec.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

  • You have installed the OADP Operator.

  • You have configured the DPA CR.

Procedure
  • Configure the nodeAgent.loadAffinity object in the DPA spec as shown in the following example.

    In the example, the node agent pods are scheduled only on nodes that have the label label.io/role: cpu-1 and whose label.io/hostname label matches either node1 or node2.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          loadAffinity: (1)
            - nodeSelector:
                matchLabels:
                  label.io/role: cpu-1
                matchExpressions: (2)
                  - key: label.io/hostname
                    operator: In
                    values:
                      - node1
                      - node2
                      ...
    1 Configure the loadAffinity object by adding the matchLabels and matchExpressions objects.
    2 Configure the matchExpressions object to add restrictions on the node agent pods scheduling.

Node agent load affinity guidelines

Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).

  • Use the spec.configuration.nodeAgent.podConfig.nodeSelector object for simple node matching.

  • Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.

  • You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equal to or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.

  • You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.

  • See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/location: 'US'
                  label.io/gpu: 'no'
          podConfig:
            nodeSelector:
              label.io/gpu: 'no'

Configuring node agent load concurrency

You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.

You can configure it using one of the following fields of the Data Protection Application (DPA):

  • globalConfig: Defines a default concurrency limit for the node agent across all nodes.

  • perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

Procedure
  1. If you want to use load concurrency for specific nodes, add labels to those nodes:

    $ oc label node/<node_name> label.io/instance-type='large'
  2. Configure the load concurrency fields for your DPA instance:

      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadConcurrency:
            globalConfig: 1 (1)
            perNodeConfig:
            - nodeSelector:
                matchLabels:
                  label.io/instance-type: large (2)
              number: 3 (3)
    1 Global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
    2 Label for per-node concurrency.
    3 Per-node concurrent number. You can specify multiple per-node concurrent numbers, for example, based on the instance type and size. The valid range of a per-node concurrent number is the same as that of the global concurrent number. If the configuration contains both a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.

Configuring repository maintenance

OADP repository maintenance is a background job that you can configure independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node regardless of whether the node agent is running on that node.

You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.

You can configure the load affinity at the global level, affecting all repositories, or for each repository individually. You can also use a combination of global and per-repository configuration.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

  • You have installed the OADP Operator.

  • You have configured the DPA CR.

Procedure
  • Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:

    • Global configuration: Configure load affinity for all repositories as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance: (1)
            global: (2)
              podResources:
                cpuRequest: "100m"
                cpuLimit: "200m"
                memoryRequest: "100Mi"
                memoryLimit: "200Mi"
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/gpu: 'no'
                    matchExpressions:
                      - key: label.io/location
                        operator: In
                        values:
                          - US
                          - EU
      1 Configure the repositoryMaintenance object as shown in the example.
      2 Use the global object to configure load affinity for all repositories.
    • Per-repository configuration: Configure load affinity per repository as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            myrepositoryname: (1)
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/cpu: 'yes'
      1 Configure the repositoryMaintenance object for each repository.

Configuring Velero load affinity

With each OADP deployment, there is one Velero pod whose main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.

Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.

The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.

Prerequisites
  • You must be logged in as a user with cluster-admin privileges.

  • You have installed the OADP Operator.

  • You have configured the DPA CR.

Procedure
  • Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the following examples:

    • velero.podConfig.nodeSelector object configuration:

      ...
      spec:
        configuration:
          velero:
            podConfig:
              nodeSelector:
                some-label.io/custom-node-role: backup-core
    • velero.loadAffinity object configuration:

      ...
      spec:
        configuration:
          velero:
            loadAffinity:
              - nodeSelector:
                  matchLabels:
                    label.io/gpu: 'no'
                  matchExpressions:
                    - key: label.io/location
                      operator: In
                      values:
                        - US
                        - EU
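    You can confirm on which node the Velero pod was scheduled, for example:

    $ oc get pods -n openshift-adp -o wide | grep '^velero'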

Overriding the imagePullPolicy setting in the DPA

In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.

In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:

  • If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.

  • If the image does not have the digest, the Operator sets imagePullPolicy to Always.

You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).

Prerequisites
  • You have installed the OADP Operator.

Procedure
  • Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

    Example Data Protection Application
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
            - csi
      imagePullPolicy: Never (1)
    1 Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
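    After the DPA is reconciled, you can verify the image pull policy that was applied to the Velero deployment, for example:

    $ oc get deployment velero -n openshift-adp -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'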

About incremental backup support

OADP supports incremental backups of Block and Filesystem persistent volumes for both containerized and OKD Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover:

Table 1. OADP backup support matrix for containerized workloads

Volume mode   FSB - Restic   FSB - Kopia    CSI     CSI Data Mover
Filesystem    S [1], I [2]   S [1], I [2]   S [1]   S [1], I [2]
Block         N [3]          N [3]          S [1]   S [1], I [2]

Table 2. OADP backup support matrix for OKD Virtualization workloads

Volume mode   FSB - Restic   FSB - Kopia    CSI     CSI Data Mover
Filesystem    N [3]          N [3]          S [1]   S [1], I [2]
Block         N [3]          N [3]          S [1]   S [1], I [2]

  1. Backup supported

  2. Incremental backup supported

  3. Not supported

The CSI Data Mover backups use Kopia regardless of uploaderType.

Red Hat only supports the combination of OADP versions 1.3.0 and later, and OKD Virtualization versions 4.14 and later.

OADP versions before 1.3.0 are not supported for backup and restore of OKD Virtualization.