You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You create a Secret for the backup location and then you install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks.
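For example, you can disable the default sources by patching the OperatorHub cluster resource. The following command is a sketch of only that first step; the full catalog mirroring procedure is covered in the linked documentation:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'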
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Red Hat Container Storage; Ceph RADOS Gateway, also known as Ceph Object Gateway; Red Hat OpenShift Data Foundation; or MinIO.
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver; see the example after this list.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
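The following is a minimal sketch of such a VolumeSnapshotClass CR. The class name and the driver value are placeholders that depend on your CSI provisioner; the velero.io/csi-volumesnapshot-class label marks the class for the csi plugin to use:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: <volume_snapshot_class_name>
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: <csi_driver>
deletionPolicy: Retain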
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials, unless your backup storage provider has a default plugin, such as aws, azure, or gcp. In that case, the default name is specified in the provider-specific OADP installation procedure.
If you do not want to use the backup location credentials during the installation, you can create a Secret with an empty credentials-velero file.
Your object storage and cloud storage, if any, must use the same credentials.
You must configure object storage for Velero.
You must create a credentials-velero file for the object storage in the appropriate format.
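For example, for AWS S3-compatible object storage, the credentials-velero file typically uses the AWS profile format; the values shown here are placeholders:
[default]
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_access_key>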
Create a Secret with the default name:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
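For illustration, a backupLocations entry that references the default Secret might look like the following sketch, where the provider, bucket, and prefix values are placeholders:
backupLocations:
  - velero:
      provider: <provider>
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: <prefix>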
If your backup and snapshot locations use different credentials, you must create two Secret objects:
Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
Create a Secret for the snapshot location with the default name:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
Create a credentials-velero file for the backup location in the appropriate format for your object storage.
Create a Secret for the backup location with a custom name:
$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
...
backupLocations:
- velero:
provider: <provider>
default: true
credential:
key: cloud
name: <custom_secret> (1)
objectStorage:
bucket: <bucket_name>
prefix: <prefix>
1 | Backup location Secret with custom name. |
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
You must have the OpenShift API for Data Protection (OADP) Operator installed.
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
spec:
# ...
configuration:
velero:
podConfig:
nodeSelector: <node_selector> (1)
resourceAllocations: (2)
limits:
cpu: "1"
memory: 1024Mi
requests:
cpu: 200m
memory: 256Mi
1 | Specify the node selector to be supplied to Velero podSpec. |
2 | The resourceAllocations listed are for average usage. |
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover. Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
The following recommendations are based on observations of performance made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations.
Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested:
Configuration type | Request | Max limit
---|---|---
CPU | Request changed to 3 | Max limit of 3
Memory | Request changed to 8 Gi | Max limit of 128 Gi
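If your cluster uses the ODF StorageCluster CR, one way to apply these values is to patch the MDS resources. The following command is a sketch that assumes the default StorageCluster name ocs-storagecluster in the openshift-storage namespace; confirm both names, and the values themselves, against the ODF tuning guides:
$ oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge \
    --patch '{"spec": {"resources": {"mds": {"requests": {"cpu": "3", "memory": "8Gi"}, "limits": {"cpu": "3", "memory": "128Gi"}}}}}'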
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
You must have the OpenShift API for Data Protection (OADP) Operator installed.
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
spec:
# ...
backupLocations:
- name: default
velero:
provider: aws
default: true
objectStorage:
bucket: <bucket>
prefix: <prefix>
caCert: <base64_encoded_cert_string> (1)
config:
insecureSkipTLSVerify: "false" (2)
# ...
1 | Specify the Base64-encoded CA certificate string. |
2 | The insecureSkipTLSVerify configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled. |
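To produce the Base64-encoded string for the caCert field, you can encode the certificate file on a Linux host, for example (assuming the certificate is stored in a file named ca.crt):
$ base64 -w0 ca.crt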
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
You must have the OpenShift CLI (oc) installed.
To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Check that the alias is working by running the following command:
$ velero version
Client:
Version: v1.12.1-OADP
Git commit: -
Server:
Version: v1.12.1-OADP
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt
You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
You must install the OADP Operator.
You must configure object storage as a backup location.
If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
If the backup and snapshot locations use different credentials, you must create two Secret objects:
Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials, which contains either the default password or the one you replaced it with.
Click Operators → Installed Operators and select the OADP Operator.
Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp
spec:
configuration:
velero:
defaultPlugins:
- aws (1)
- kubevirt (2)
- csi (3)
- openshift (4)
resourceTimeout: 10m (5)
restic:
enable: true (6)
podConfig:
nodeSelector: <node_selector> (7)
backupLocations:
- velero:
provider: gcp (8)
default: true
credential:
key: cloud
name: <default_secret> (9)
objectStorage:
bucket: <bucket_name> (10)
prefix: <prefix> (11)
1 | An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws. For Azure and GCP object stores, the azure or gcp plugin is required. |
2 | Optional: The kubevirt plugin is used with OpenShift Virtualization. |
3 | Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location. |
4 | The openshift plugin is mandatory. |
5 | Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. |
6 | Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. |
7 | Specify on which nodes Restic is available. By default, Restic runs on all nodes. |
8 | Specify the backup provider. |
9 | Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. |
10 | Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. |
11 | Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. |
Click Create.
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8   2/2     Running   0          2m8s
pod/restic-9cq4q                                         1/1     Running   0          94s
pod/restic-m4lts                                         1/1     Running   0          94s
pod/restic-pv4kr                                         1/1     Running   0          95s
pod/velero-588db7f655-n842v                              1/1     Running   0          95s

NAME                                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140   <none>        8443/TCP   2m8s

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/restic   3         3         3       3            3           <none>          96s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager   1/1     1            1           2m9s
deployment.apps/velero                             1/1     1            1           96s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47   1         1         1       2m9s
replicaset.apps/velero-588db7f655                             1         1         1       96s
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupStorageLocation -n openshift-adp
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
You must install the OADP Operator.
You must configure object storage as a backup location.
If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
Click Operators → Installed Operators and select the OADP Operator.
Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: <dpa_sample>
namespace: openshift-adp (1)
spec:
configuration:
velero:
defaultPlugins:
- aws (2)
- kubevirt (3)
- csi (4)
- openshift (5)
resourceTimeout: 10m (6)
nodeAgent: (7)
enable: true (8)
uploaderType: kopia (9)
podConfig:
nodeSelector: <node_selector> (10)
backupLocations:
- velero:
provider: gcp (11)
default: true
credential:
key: cloud
name: <default_secret> (12)
objectStorage:
bucket: <bucket_name> (13)
prefix: <prefix> (14)
1 | The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable. |
2 | An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws. For Azure and GCP object stores, the azure or gcp plugin is required. |
3 | Optional: The kubevirt plugin is used with OpenShift Virtualization. |
4 | Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location. |
5 | The openshift plugin is mandatory. |
6 | Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. |
7 | The nodeAgent configuration controls the node agent, which performs File System Backup and built-in Data Mover operations on the cluster nodes. |
8 | Set this value to true if you want to enable nodeAgent and perform File System Backup. |
9 | Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. |
10 | Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes. |
11 | Specify the backup provider. |
12 | Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. |
13 | Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. |
14 | Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. |
Click Create.
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8   2/2     Running   0          2m8s
pod/node-agent-9cq4q                                     1/1     Running   0          94s
pod/node-agent-m4lts                                     1/1     Running   0          94s
pod/node-agent-pv4kr                                     1/1     Running   0          95s
pod/velero-588db7f655-n842v                              1/1     Running   0          95s

NAME                                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140   <none>        8443/TCP   2m8s
service/openshift-adp-velero-metrics-svc                    ClusterIP   172.30.10.0     <none>        8085/TCP   8h

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-agent   3         3         3       3            3           <none>          96s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager   1/1     1            1           2m9s
deployment.apps/velero                             1/1     1            1           96s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47   1         1         1       2m9s
replicaset.apps/velero-588db7f655                             1         1         1       96s
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupStorageLocation -n openshift-adp
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields.
You have installed the OADP Operator.
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: test-dpa
namespace: openshift-adp
spec:
backupLocations:
- name: default
velero:
config:
insecureSkipTLSVerify: "true"
profile: "default"
region: <bucket_region>
s3ForcePathStyle: "true"
s3Url: <bucket_url>
credential:
key: cloud
name: cloud-credentials
default: true
objectStorage:
bucket: <bucket_name>
prefix: velero
provider: aws
configuration:
nodeAgent:
enable: true
uploaderType: restic
velero:
client-burst: 500 (1)
client-qps: 300 (2)
defaultPlugins:
- openshift
- aws
- kubevirt
1 | Specify the client-burst value. In this example, the client-burst field is set to 500. |
2 | Specify the client-qps value. In this example, the client-qps field is set to 300. |
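As a quick sanity check after the DPA reconciles, you can inspect the arguments of the Velero deployment; this assumes the Operator passes the values through as velero server flags:
$ oc -n openshift-adp get deployment velero -o jsonpath='{.spec.template.spec.containers[0].args}'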
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
To run the node agent only on the nodes that you choose, label those nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field that you used for labeling the nodes. For example:
configuration:
nodeAgent:
enable: true
podConfig:
nodeSelector:
node-role.kubernetes.io/nodeAgent: ""
The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:
configuration:
nodeAgent:
enable: true
podConfig:
nodeSelector:
node-role.kubernetes.io/infra: ""
node-role.kubernetes.io/worker: ""
If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console.
Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa. For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console.
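If you prefer to create the OBC from a manifest instead of the web console, a minimal sketch looks like the following; the claim name, namespace, and MCG storage class name are assumptions to verify against your OpenShift Data Foundation installation:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <obc_name>
  namespace: openshift-adp
spec:
  generateBucketName: <bucket_prefix>
  storageClassName: openshift-storage.noobaa.io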
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
The cloud provider must support CSI snapshots.
Edit the DataProtectionApplication CR, as in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
...
spec:
configuration:
velero:
defaultPlugins:
- openshift
- csi (1)
1 | Add the csi default plugin. |
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
To disable the nodeAgent, set the enable flag to false. See the following example:
Example DataProtectionApplication CR:
# ...
configuration:
nodeAgent:
enable: false (1)
uploaderType: kopia
# ...
1 | Disables the node agent. |
To enable the nodeAgent, set the enable flag to true. See the following example:
Example DataProtectionApplication CR:
# ...
configuration:
nodeAgent:
enable: true (1)
uploaderType: kopia
# ...
1 | Enables the node agent. |
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
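If you prefer to toggle the field from the command line instead of editing the manifest or creating a job, the following is a sketch that uses oc patch and assumes the DPA is named <dpa_sample>:
$ oc patch dpa <dpa_sample> -n openshift-adp --type merge \
    -p '{"spec": {"configuration": {"nodeAgent": {"enable": false}}}}'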