OADP 1.5 release notes

The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.

For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs.

OADP 1.5.0 release notes

The OpenShift API for Data Protection (OADP) 1.5.0 release notes list new features, resolved issues, and known issues.

New features

OADP 1.5.0 introduces a new Self-Service feature

OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on OKD. In earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.

From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.

Collecting logs with the must-gather tool has been improved with a Markdown summary

You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the clusters directory of the must-gather logs.
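
For example, a minimal invocation takes the following form; the image reference is a placeholder for the OADP must-gather image that matches your installed version:

$ oc adm must-gather --image=<oadp_must_gather_image>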

dataMoverPrepareTimeout and resourceTimeout parameters are now added to nodeAgent within the DPA

The nodeAgent field in the Data Protection Application (DPA) now includes the following parameters, which are shown in the example after this list:

  • dataMoverPrepareTimeout: Defines the duration the DataUpload or DataDownload process will wait. The default value is 30 minutes.

  • resourceTimeout: Sets the timeout for resource processes not addressed by other specific timeout parameters. The default value is 10 minutes.
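
The following DPA excerpt is a minimal sketch of these settings under spec.configuration.nodeAgent; the duration format and the values shown are illustrative assumptions, not recommendations:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      dataMoverPrepareTimeout: 45m   # how long DataUpload or DataDownload waits; the default is 30 minutes
      resourceTimeout: 15m           # timeout for other resource processes; the default is 10 minutes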

Use the spec.configuration.nodeAgent parameter in DPA for configuring nodeAgent daemon set

Velero no longer uses the node-agent-config config map for configuring the nodeAgent daemon set. With this update, you must use the new spec.configuration.nodeAgent parameter in a Data Protection Application (DPA) for configuring the nodeAgent daemon set.

Configuring DPA with the backup repository configuration config map is now possible

With Velero 1.15 and later, you can now configure the total size of a cache per repository. This prevents pods from being removed due to running out of ephemeral storage. See the following new parameters added to the NodeAgentConfig field in DPA, followed by an illustrative excerpt:

  • cacheLimitMB: Sets the local data cache size limit in megabytes.

  • fullMaintenanceInterval: Controls the removal rate of deleted Velero backups from the Kopia repository. The default value is 24 hours. The following override options are available:

    • normalGC: 24 hours

    • fastGC: 12 hours

    • eagerGC: 6 hours
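
As an illustration only, the following excerpt sketches these parameters under spec.configuration.nodeAgent; the exact nesting and the values shown are assumptions based on the description above:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      cacheLimitMB: 1024                # local data cache size limit in megabytes (assumed value)
      fullMaintenanceInterval: fastGC   # one of normalGC (24h), fastGC (12h), eagerGC (6h)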

Enhancing the node-agent security

With this update, the following changes are added:

  • A new configuration option is now added to the velero field in DPA.

  • The default value of the disableFsBackup parameter is false or unset. In that case, the following options are added to the SecurityContext field:

    • Privileged: true

    • AllowPrivilegeEscalation: true

  • If you set the disableFsBackup parameter to true, as shown in the sketch after this list, it removes the following mounts from the node-agent:

    • host-pods

    • host-plugins

  • Ensures that the node-agent always runs as a non-root user.

  • Changes the root file system to read-only.

  • Updates the following mount points with write access:

    • /home/velero

    • tmp/credentials

  • Uses the SeccompProfileTypeRuntimeDefault option for the SeccompProfile parameter.
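
The following DPA excerpt is a minimal sketch of disabling file system backup through the field this section describes; the other fields shown are illustrative:

spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
      disableFsBackup: true   # removes the host-pods and host-plugins mounts and tightens the node-agent security context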

Adds DPA support for parallel item backup

By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.

You can use the optional Velero server parameter --item-block-worker-count to run additional worker threads to process items in parallel. To enable this in OADP, set the dpa.Spec.Configuration.Velero.ItemBlockWorkerCount parameter to an integer value greater than zero.

Running multiple full backups in parallel is not yet supported.
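
The following DPA excerpt is a sketch of this setting; the lowercase YAML field name itemBlockWorkerCount is an assumption derived from the dpa.Spec.Configuration.Velero.ItemBlockWorkerCount Go path:

spec:
  configuration:
    velero:
      itemBlockWorkerCount: 4   # process up to 4 item blocks in parallel; the default is a single worker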

OADP logs are now available in the JSON format

With the release of OADP 1.5.0, the logs are now available in JSON format. This provides pre-parsed data for log management systems such as Elastic.

The oc get dpa command now displays RECONCILED status

With this release, the oc get dpa command now displays the RECONCILED status in addition to NAME and AGE, to improve the user experience. For example:

$ oc get dpa -n openshift-adp
NAME            RECONCILED   AGE
velero-sample   True         2m51s

Resolved issues

Containers now use FallbackToLogsOnError for terminationMessagePolicy

With this release, the terminationMessagePolicy field now sets the FallbackToLogsOnError value for the OpenShift API for Data Protection (OADP) Operator containers such as operator-manager, velero, node-agent, and non-admin-controller.

This change ensures that if a container exits with an error and the termination message file is empty, OpenShift uses the last portion of the container logs output as the termination message.
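
This behavior corresponds to the standard Kubernetes terminationMessagePolicy container field, as in the following illustrative excerpt of a container specification:

spec:
  containers:
    - name: velero
      image: <velero_image>
      terminationMessagePolicy: FallbackToLogsOnError   # use the last container log lines if the termination message file is empty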

Namespace admin can now access the application after restore

Previously, the namespace admin could not access the application after the restore operation and encountered the following errors:

  • exec operation is not allowed because the pod’s security context exceeds your permissions

  • unable to validate against any security context constraint

  • not usable by user or serviceaccount, provider restricted-v2

With this update, this issue is now resolved and the namespace admin can access the application successfully after the restore.

Specifying status restoration at the individual resource instance level using the annotation is now possible

Previously, status restoration could be configured only at the resource type level by using the restoreStatus field in the Restore custom resource (CR).

With this release, you can now specify the status restoration at the individual resource instance level using the following annotation:

metadata:
  annotations:
    velero.io/restore-status: "true"

Restore is now successful with excludedClusterScopedResources

Previously, when you backed up an application with the excludedClusterScopedResources field set to the storageclasses and Namespace resources, the backup was successful but the restore partially failed. With this update, the restore is successful.
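
As an illustration, a Backup CR for the scenario described might look like the following sketch; the namespace placeholder and the exact resource names listed are assumptions:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-sample
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>
  excludedClusterScopedResources:   # cluster-scoped resources to skip during backup
    - storageclasses.storage.k8s.io
    - namespaces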

Backup is completed even if it gets restarted during the waitingForPluginOperations phase

Previously, a backup was marked as failed with the following error message:

failureReason: found a backup with status "InProgress" during the server starting,
mark it as "Failed"

With this update, the backup is completed if it gets restarted during the waitingForPluginOperations phase.

Error messages are now more informative when the disableFsBackup parameter is set to true in DPA

Previously, when the spec.configuration.velero.disableFsBackup field from a Data Protection Application (DPA) was set to true, the backup partially failed with an error, which was not informative.

This update makes error messages more useful for troubleshooting. For example, the error messages now indicate when disableFsBackup: true in a DPA is the cause, or when a non-administrator user does not have access to a DPA.

AWS STS credentials are now handled in the parseAWSSecret function

Previously, AWS credentials using STS authentication were not properly validated.

With this update, the parseAWSSecret function detects STS-specific fields, and the ensureSecretDataExists function is updated to handle STS profiles correctly.

The repositoryMaintenance job affinity configuration is now available

Previously, the configuration for repository maintenance job pod affinity was missing from the DPA specification.

With this update, the repositoryMaintenance job affinity configuration is available and maps a BackupRepository identifier to its configuration.
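
The following excerpt is only an assumed sketch of such a mapping in the DPA, keyed by a BackupRepository identifier or a global default; the field names repositoryMaintenance, loadAffinity, and nodeSelector are assumptions that this section does not confirm:

spec:
  configuration:
    repositoryMaintenance:   # assumed field name
      global:                # assumed default key applied to all backup repositories
        loadAffinity:
          - nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ""   # schedule maintenance jobs onto matching nodes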

The ValidationErrors field is cleared after the CR specification is corrected

Previously, when a Schedule CR was created with an incorrect spec.schedule value and was later patched with a correct value, the ValidationErrors field persisted. Consequently, the ValidationErrors field displayed incorrect information even though the specification was correct.

With this update, the ValidationErrors field is cleared after the CR specification is corrected.

The volumeSnapshotContents custom resources are restored when the includedNamespaces field is used in restoreSpec

Previously, when a restore operation was triggered with the includedNamespaces field in a restore specification, the restore operation completed successfully but no volumeSnapshotContents custom resources (CRs) were created and the PVCs remained in a Pending status.

With this update, volumeSnapshotContents CRs are restored even when the includedNamespaces field is used in restoreSpec. As a result, the application pods are in a Running state after the restore.

The OADP Operator successfully creates the bucket on AWS

Previously, the container was configured with the readOnlyRootFilesystem: true setting for security, but the code attempted to create temporary files in the /tmp directory. Consequently, while using AWS STS authentication with the Cloud Credential Operator (CCO) flow, OADP failed to create temporary files that were required for AWS credential handling, with the following error:

ERROR unable to determine if bucket exists. {"error": "open /tmp/aws-shared-credentials1211864681: read-only file system"}

With this update, the following changes are added to address this issue:

  • A new emptyDir volume named tmp-dir is added to the controller pod specification.

  • A volume mount is added to the container, which mounts this volume to the /tmp directory.

  • The readOnlyRootFilesystem: true setting is maintained, following security best practices.

  • The deprecated ioutil.TempFile() function is replaced with the recommended os.CreateTemp() function.

  • The unnecessary io/ioutil import is removed.

For a complete list of all issues resolved in this release, see the list of OADP 1.5.0 resolved issues in Jira.

Known issues

Kopia does not delete all the artifacts after backup expiration

Even after you delete a backup, Kopia does not delete the volume artifacts from the ${bucket_name}/kopia/${namespace} path in the S3 location after the backup expires. Information related to the expired and removed data files remains in the metadata. To ensure that OpenShift API for Data Protection (OADP) functions properly, the data is not deleted and exists in the /kopia/ directory, for example:

  • kopia.repository: Main repository format information such as encryption, version, and other details.

  • kopia.blobcfg: Configuration for how data blobs are named.

  • kopia.maintenance: Tracks maintenance owner, schedule, and last successful build.

  • log: Log blobs.

For a complete list of all known issues in this release, see the list of OADP 1.5.0 known issues in Jira.

Deprecated features

The configuration.restic specification field has been deprecated

With OpenShift API for Data Protection (OADP) 1.5.0, the configuration.restic specification field has been deprecated. Use the nodeAgent section with the uploaderType field to select kopia or restic as the uploader. Note that Restic is deprecated in OADP 1.5.0.
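
For example, a deprecated configuration.restic block can be replaced with the nodeAgent equivalent, as in the following sketch:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia   # or restic, which is deprecated in OADP 1.5.0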

Technology Preview

Support for HyperShift hosted OpenShift clusters is available as a Technology Preview

OADP can support and facilitate application migrations within HyperShift hosted OpenShift clusters as a Technology Preview. It ensures a seamless backup and restore operation for applications in hosted clusters.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Upgrading OADP 1.4.0 to 1.5.0

Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OADP 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.

Changes from OADP 1.4 to 1.5

The Velero server has been updated from version 1.14 to 1.16.

This changes the following:

Version support changes

OpenShift API for Data Protection implements a streamlined version support policy. Red Hat supports only one version of OpenShift API for Data Protection (OADP) on one OpenShift version to ensure better stability and maintainability. OADP 1.5.0 is supported only on OpenShift 4.19.

OADP Self-Service

OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on OKD. In earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.

From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.

backupPVC and restorePVC configurations

A backupPVC resource is an intermediate persistent volume claim (PVC) to access data during the data movement backup operation. You create a read-only backup PVC by using the nodeAgent.backupPVC section of the DataProtectionApplication (DPA) custom resource.

A restorePVC resource is an intermediate PVC that is used to write data during the Data Mover restore operation.

You can configure restorePVC in the DPA by using the ignoreDelayBinding field.
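
The following DPA excerpt sketches both sections; beyond the backupPVC, restorePVC, and ignoreDelayBinding names used in this section, the storage class key, the readOnly field name, and the values shown are assumptions:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      backupPVC:
        gp3-csi:                     # assumed: keyed by the storage class of the source volume
          storageClass: gp3-csi
          readOnly: true             # create the intermediate backup PVC as read-only
      restorePVC:
        ignoreDelayBinding: true     # do not wait for volume binding before writing restore data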

Backing up the DPA configuration

You must back up your current DataProtectionApplication (DPA) configuration.

Procedure
  • Save your current DPA configuration by running the following command:

    Example command
    $ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup

Upgrading the OADP Operator

You can upgrade the OpenShift API for Data Protection (OADP) Operator using the following procedure.

Do not install OADP 1.5.0 on an OpenShift 4.18 cluster.

Prerequisites
  • You have installed the latest OADP 1.4.4.

  • You have backed up your data.

Procedure
  1. Upgrade OpenShift 4.18 to OpenShift 4.19.

    OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19.

  2. Change your subscription channel for the OADP Operator from stable-1.4 to stable. See the example command after this procedure.

  3. Wait for the Operator and containers to update and restart.
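
For example, you can switch the channel by patching the Subscription resource; the subscription name redhat-oadp-operator is an assumption and might differ in your cluster:

$ oc patch subscription redhat-oadp-operator -n openshift-adp --type merge -p '{"spec":{"channel":"stable"}}'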

Converting DPA to the new version for OADP 1.5.0

The OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19. You can convert the Data Protection Application (DPA) to the new OADP 1.5 format by using the new spec.configuration.nodeAgent field and its sub-fields.

Procedure
  1. To configure the nodeAgent daemon set, use the spec.configuration.nodeAgent parameter in the DPA. See the following example:

    Example DataProtectionApplication configuration
    ...
     spec:
       configuration:
         nodeAgent:
           enable: true
           uploaderType: kopia
    ...
  2. If you previously configured the nodeAgent daemon set by using the ConfigMap resource named node-agent-config, move that configuration into the DPA. See the following example configuration:

    Example config map
    ...
     spec:
       configuration:
         nodeAgent:
           backupPVC:
             ...
           loadConcurrency:
             ...
           podResources:
             ...
           restorePVC:
            ...
    ...

Verifying the upgrade

You can verify the OpenShift API for Data Protection (OADP) upgrade by using the following procedure.

Procedure
  1. Verify that the DataProtectionApplication (DPA) has been reconciled successfully:

    $ oc get dpa dpa-sample -n openshift-adp
    Example output
    NAME            RECONCILED   AGE
    dpa-sample      True         2m51s

    The RECONCILED column must be True.

  2. Verify that the installation finished by viewing the OADP resources. To list them, run the following command:

    $ oc get all -n openshift-adp
    Example output
    NAME                                                    READY   STATUS    RESTARTS   AGE
    pod/node-agent-9pjz9                                    1/1     Running   0          3d17h
    pod/node-agent-fmn84                                    1/1     Running   0          3d17h
    pod/node-agent-xw2dg                                    1/1     Running   0          3d17h
    pod/openshift-adp-controller-manager-76b8bc8d7b-kgkcw   1/1     Running   0          3d17h
    pod/velero-64475b8c5b-nh2qc                             1/1     Running   0          3d17h
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/openshift-adp-controller-manager-metrics-service   ClusterIP   172.30.194.192   <none>        8443/TCP   3d17h
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.190.174   <none>        8085/TCP   3d17h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent   3         3         3       3            3           <none>          3d17h
    
    NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/openshift-adp-controller-manager   1/1     1            1           3d17h
    deployment.apps/velero                             1/1     1            1           3d17h
    
    NAME                                                          DESIRED   CURRENT   READY   AGE
    replicaset.apps/openshift-adp-controller-manager-76b8bc8d7b   1         1         1       3d17h
    replicaset.apps/openshift-adp-controller-manager-85fff975b8   0         0         0       3d17h
    replicaset.apps/velero-64475b8c5b                             1         1         1       3d17h
    replicaset.apps/velero-8b5bc54fd                              0         0         0       3d17h
    replicaset.apps/velero-f5c9ffb66                              0         0         0       3d17h

    The node-agent pods are created only when restic or kopia is used in the DataProtectionApplication (DPA). In OADP 1.3.0 and 1.4.0, the node-agent pods are labeled as restic.

  3. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp
    Example output
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true