Following is a use case for using OADP and ODF to back up an application.
In this use case, you back up an application by using OADP and store the backup in object storage provided by Red Hat OpenShift Data Foundation (ODF).
You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
You use the NooBaa MCG service with OADP by using the aws provider plugin.
You configure the Data Protection Application (DPA) with the backup storage location (BSL).
You create a backup custom resource (CR) and specify the application namespace to back up.
You create and verify the backup.
You installed the OADP Operator.
You installed the ODF Operator.
You have an application with a database running in a separate namespace.
Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc (1)
  namespace: openshift-adp
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: test-backup-bucket (2)
(1) The name of the object bucket claim.
(2) The name of the bucket.
Create the OBC by running the following command:
$ oc create -f <obc_file_name> (1)
(1) Specify the file name of the object bucket claim manifest.
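Optional: Before you continue, you can confirm that the OBC is bound to a bucket. The following check is a minimal sketch; it assumes the OBC name test-obc from the previous example, and the exact output columns can vary by ODF version:
$ oc get obc test-obc -n openshift-adp

NAME       STORAGE-CLASS                 PHASE   AGE
test-obc   openshift-storage.noobaa.io   Bound   30s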
When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:
$ oc extract --to=- cm/test-obc (1)
(1) test-obc is the name of the OBC.
# BUCKET_NAME
backup-c20...41fd
# BUCKET_PORT
443
# BUCKET_REGION
# BUCKET_SUBREGION
# BUCKET_HOST
s3.openshift-storage.svc
To get the bucket credentials from the generated secret, run the following command:
$ oc extract --to=- secret/test-obc
# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym
Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command:
$ oc get route s3 -n openshift-storage
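The host in the route output is the public S3 endpoint. If you only need the host name, a jsonpath query such as the following prints it directly; the value shown is a placeholder, because the actual host depends on your cluster domain:
$ oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}'

s3-openshift-storage.apps.<cluster_domain>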
Create a cloud-credentials file with the object bucket credentials as shown in the following example:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Create the cloud-credentials secret with the cloud-credentials file content as shown in the following command:
$ oc create secret generic \
cloud-credentials \
-n openshift-adp \
--from-file cloud=cloud-credentials
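Optional: To confirm that the secret contains the credentials file under the cloud key, you can decode it as shown in the following minimal check. The decoded output should match the cloud-credentials file that you created:
$ oc get secret cloud-credentials -n openshift-adp -o jsonpath='{.data.cloud}' | base64 -d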
Configure the Data Protection Application (DPA) as shown in the following example:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: oadp-backup
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
        - aws
        - openshift
        - csi
      defaultSnapshotMoveData: true (1)
  backupLocations:
    - velero:
        config:
          profile: "default"
          region: noobaa
          s3Url: https://s3.openshift-storage.svc (2)
          s3ForcePathStyle: "true"
          insecureSkipTLSVerify: "true"
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name> (3)
          prefix: oadp
(1) Set to true to use the OADP Data Mover to move Container Storage Interface (CSI) snapshots to remote object storage.
(2) This is the S3 URL of ODF storage.
(3) Specify the bucket name.
Create the DPA by running the following command:
$ oc apply -f <dpa_filename>
Verify that the DPA is created successfully by running the following command. In the example output, the status object has the type field set to Reconciled, which means that the DPA is created successfully.
$ oc get dpa -o yaml
apiVersion: v1
items:
- apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    namespace: openshift-adp
    #...#
  spec:
    backupLocations:
    - velero:
        config:
        #...#
  status:
    conditions:
    - lastTransitionTime: "20....9:54:02Z"
      message: Reconcile complete
      reason: Complete
      status: "True"
      type: Reconciled
kind: List
metadata:
  resourceVersion: ""
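Alternatively, you can check only the condition type with a jsonpath query. The following is a minimal sketch that assumes the DPA name oadp-backup from the earlier example and that the Reconciled condition is the first entry in the conditions list:
$ oc get dpa oadp-backup -n openshift-adp -o jsonpath='{.status.conditions[0].type}'

Reconciled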
Verify that the backup storage location (BSL) is available by running the following command:
$ oc get bsl -n openshift-adp
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true
Configure a backup CR as shown in the following example:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace> (1)
(1) Specify the namespace for the application to back up.
Create the backup CR by running the following command:
$ oc apply -f <backup_cr_filename>
Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.
$ oc describe backup test-backup -n openshift-adp
Name:         test-backup
Namespace:    openshift-adp
# ....#
Status:
  Backup Item Operations Attempted:  1
  Backup Item Operations Completed:  1
  Completion Timestamp:              2024-09-25T10:17:01Z
  Expiration:                        2024-10-25T10:16:31Z
  Format Version:                    1.1.0
  Hook Status:
  Phase:                             Completed
  Progress:
    Items Backed Up:  34
    Total Items:      34
  Start Timestamp:    2024-09-25T10:16:31Z
  Version:            1
Events:               <none>
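If you only want to check the backup phase without the full description, a jsonpath query such as the following minimal example prints it directly:
$ oc get backup test-backup -n openshift-adp -o jsonpath='{.status.phase}'

Completed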