Configuring OADP with GCP

You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14.

Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.

You configure GCP for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.

To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.

Configuring Google Cloud Platform

You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP).

Prerequisites
Procedure
  1. Log in to GCP:

    $ gcloud auth login
  2. Set the BUCKET variable:

    $ BUCKET=<bucket> (1)
    1 Specify your bucket name.
  3. Create the storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero service account"
  6. List your service accounts:

    $ gcloud iam service-accounts list
  7. Set the SERVICE_ACCOUNT_EMAIL variable to the email address of the Velero service account:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:Velero service account" \
        --format 'value(email)')
  8. Set the ROLE_PERMISSIONS variable to define the minimum permissions that the velero service account requires:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
        storage.objects.create
        storage.objects.delete
        storage.objects.get
        storage.objects.list
        iam.serviceAccounts.signBlob
    )
  9. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  10. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  11. Grant the Velero service account the objectAdmin role on the storage bucket:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  12. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
        --iam-account $SERVICE_ACCOUNT_EMAIL

    You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application.
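
    As an optional check, you can list the keys for the service account to confirm that the new key exists. This reuses the SERVICE_ACCOUNT_EMAIL variable that you set earlier in this procedure:

    $ gcloud iam service-accounts keys list \
        --iam-account $SERVICE_ACCOUNT_EMAIL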

About backup and snapshot locations and their secrets

You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).

Backup locations

You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway, Red Hat Container Storage, Ceph RADOS Gateway (also known as Ceph Object Gateway), Red Hat OpenShift Data Foundation, or MinIO.

Velero backs up OKD resources, Kubernetes objects, and internal images as an archive file on object storage.

Snapshot locations

If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.

If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
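
For example, a VolumeSnapshotClass that Velero can use for CSI snapshots carries the velero.io/csi-volumesnapshot-class label. The following sketch assumes the GCP Persistent Disk CSI driver; the class name is a placeholder for your environment:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: <volume_snapshot_class_name>
      labels:
        velero.io/csi-volumesnapshot-class: "true" # label that Velero looks for when selecting a snapshot class
    driver: pd.csi.storage.gke.io # assumes the GCP Persistent Disk CSI driver
    deletionPolicy: Retain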

If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.

Secrets

If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.

If the backup and snapshot locations use different credentials, you create two Secret objects:

  • Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.

  • Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.

The Data Protection Application requires a default Secret. Otherwise, the installation will fail.

If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
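
For example, assuming the default openshift-adp namespace, you can create the placeholder Secret from an empty credentials-velero file:

    $ touch credentials-velero
    $ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero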

Creating a default Secret

You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.

The default name of the Secret is cloud-credentials-gcp.

The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.

If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.

Prerequisites
  • Your object storage and cloud storage, if any, must use the same credentials.

  • You must configure object storage for Velero.

  • You must create a credentials-velero file for the object storage in the appropriate format.

Procedure
  • Create a Secret with the default name:

    $ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero

The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
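
For reference, the relevant excerpt of the DataProtectionApplication CR looks similar to the following sketch; the bucket and prefix values are placeholders:

    spec:
      backupLocations:
        - velero:
            provider: gcp
            default: true
            credential:
              key: cloud
              name: cloud-credentials-gcp
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>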

Creating secrets for different credentials

If your backup and snapshot locations use different credentials, you must create two Secret objects:

  • Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).

  • Snapshot location Secret with the default name, cloud-credentials-gcp. This Secret is not specified in the DataProtectionApplication CR.

Procedure
  1. Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.

  2. Create a Secret for the snapshot location with the default name:

    $ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
  3. Create a credentials-velero file for the backup location in the appropriate format for your object storage.

  4. Create a Secret for the backup location with a custom name:

    $ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
  5. Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
    ...
      backupLocations:
        - velero:
            provider: gcp
            default: true
            credential:
              key: cloud
              name: <custom_secret> (1)
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
      snapshotLocations:
        - velero:
            provider: gcp
            default: true
            config:
              project: <project>
              snapshotLocation: us-west1
    1 Backup location Secret with custom name.

Configuring the Data Protection Application

You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.

Setting Velero CPU and memory resource allocations

You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.

Prerequisites
  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure
  • Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      configuration:
        velero:
          podConfig:
            nodeSelector: <node_selector> (1)
            resourceAllocations: (2)
              limits:
                cpu: "1"
                memory: 1024Mi
              requests:
                cpu: 200m
                memory: 256Mi
    1 Specify the node selector to be supplied to Velero podSpec.
    2 The resourceAllocations listed are for average usage.

Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is the only option for the built-in Data Mover.

Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.

Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.

Enabling self-signed CA certificates

You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.

Prerequisites
  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure
  • Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket>
              prefix: <prefix>
              caCert: <base64_encoded_cert_string> (1)
            config:
              insecureSkipTLSVerify: "false" (2)
    # ...
    1 Specify the Base64-encoded CA certificate string.
    2 The insecureSkipTLSVerify configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
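
You can generate the Base64-encoded value for the caCert field from a PEM-encoded CA certificate file with the base64 utility, for example with GNU coreutils; the file name is a placeholder:

$ base64 -w0 <ca_cert_file>.pem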

Using CA certificates with the aliased velero command

You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.

Prerequisites
  • You must be logged in to the OKD cluster as a user with the cluster-admin role.

  • You must have the OpenShift CLI (oc) installed.

Procedure
    1. To use an aliased Velero command, run the following command:

      $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    2. Check that the alias is working by running the following command:

      Example
      $ velero version
      Client:
      	Version: v1.12.1-OADP
      	Git commit: -
      Server:
      	Version: v1.12.1-OADP
    3. To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

      $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
      
      $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
      $ velero describe backup <backup_name> --details --cacert /tmp/your-cacert.txt
    4. To fetch the backup logs, run the following command:

      $ velero backup logs <backup_name> --cacert /tmp/your-cacert.txt

      You can use these logs to view failures and warnings for the resources that you cannot back up.

    5. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.

    6. To confirm that the /tmp/your-cacert.txt file still exists in the location where you stored it, run the following command:

      $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
      /tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.

Google workload identity federation cloud authentication

Applications running outside Google Cloud use long-lived credentials, such as service account keys, to gain access to Google Cloud resources. These credentials might become a security risk if they are not properly managed.

With Google’s workload identity federation, you can use Identity and Access Management (IAM) to offer IAM roles, including the ability to impersonate service accounts, to external identities. This eliminates the maintenance and security risks associated with service account keys.

Workload identity federation handles encrypting and decrypting certificates, extracting user attributes, and validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application.

Google workload identity federation is available for OADP 1.3.x and later.

When backing up volumes, OADP on GCP with Google workload identity federation authentication only supports CSI snapshots.

OADP on GCP with Google workload identity federation authentication does not support Volume Snapshot Locations (VSL) backups. For more details, see Google workload identity federation known issues.

If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application.

Prerequisites
  • You have installed a cluster in manual mode with GCP Workload Identity configured.

  • You have access to the Cloud Credential Operator utility (ccoctl) and to the associated workload identity pool.

Procedure
  1. Create an oadp-credrequest directory by running the following command:

    $ mkdir -p oadp-credrequest
  2. Create a CredentialsRequest manifest in the oadp-credrequest directory by running the following command:

    echo 'apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: oadp-operator-credentials
      namespace: openshift-cloud-credential-operator
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: GCPProviderSpec
        permissions:
        - compute.disks.get
        - compute.disks.create
        - compute.disks.createSnapshot
        - compute.snapshots.get
        - compute.snapshots.create
        - compute.snapshots.useReadOnly
        - compute.snapshots.delete
        - compute.zones.get
        - storage.objects.create
        - storage.objects.delete
        - storage.objects.get
        - storage.objects.list
        - iam.serviceAccounts.signBlob
        skipServiceCheck: true
      secretRef:
        name: cloud-credentials-gcp
        namespace: <OPERATOR_INSTALL_NS>
      serviceAccountNames:
      - velero
    ' > oadp-credrequest/credrequest.yaml
  3. Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command:

    $ ccoctl gcp create-service-accounts \
        --name=<name> \
        --project=<gcp_project_id> \
        --credentials-requests-dir=oadp-credrequest \
        --workload-identity-pool=<pool_id> \
        --workload-identity-provider=<provider_id>

    The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps.

  4. Create a namespace by running the following command:

    $ oc create namespace <OPERATOR_INSTALL_NS>
  5. Apply the credentials to the namespace by running the following command:

    $ oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml
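
    As an optional check, you can confirm that the Secret exists in the Operator namespace before you install the Data Protection Application:

    $ oc get secret cloud-credentials-gcp -n <OPERATOR_INSTALL_NS>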

Google workload identity federation known issues

  • Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups.

Installing the Data Protection Application

You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.

Prerequisites
  • You must install the OADP Operator.

  • You must configure object storage as a backup location.

  • If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.

  • If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp.

  • If the backup and snapshot locations use different credentials, you must create two Secrets:

    • Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.

    • Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.

    If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.

Procedure
  1. Click Operators → Installed Operators and select the OADP Operator.

  2. Under Provided APIs, click Create instance in the DataProtectionApplication box.

  3. Click YAML View and update the parameters of the DataProtectionApplication manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: <OPERATOR_INSTALL_NS> (1)
    spec:
      configuration:
        velero:
          defaultPlugins:
            - gcp
            - openshift (2)
          resourceTimeout: 10m (3)
        nodeAgent: (4)
          enable: true (5)
          uploaderType: kopia (6)
          podConfig:
            nodeSelector: <node_selector> (7)
      backupLocations:
        - velero:
            provider: gcp
            default: true
            credential:
              key: cloud (8)
              name: cloud-credentials-gcp (9)
            objectStorage:
              bucket: <bucket_name> (10)
              prefix: <prefix> (11)
      snapshotLocations: (12)
        - velero:
            provider: gcp
            default: true
            config:
              project: <project>
              snapshotLocation: us-west1 (13)
            credential:
              key: cloud
              name: cloud-credentials-gcp (14)
      backupImages: true (15)
    1 The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
    2 The openshift plugin is mandatory.
    3 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
    4 The administrative agent that routes the administrative requests to servers.
    5 Set this value to true if you want to enable nodeAgent and perform File System Backup.
    6 Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each worker node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR, as shown in the example after this procedure.
    7 Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
    8 Secret key that contains credentials. For Google workload identity federation cloud authentication use service_account.json.
    9 Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp, is used.
    10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
    11 Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
    12 Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs.
    13 The snapshot location must be in the same region as the PVs.
    14 Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp, is used. If you specify a custom name, the custom name is used for the backup location.
    15 Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup.
  4. Click Create.
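
As referenced in callout 6, a Backup CR that opts all volumes into File System Backup might look like the following sketch; the backup name and the included namespace are placeholders:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
      namespace: openshift-adp
    spec:
      includedNamespaces:
        - <application_namespace>
      defaultVolumesToFsBackup: true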

Verification
  1. Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

    $ oc get all -n openshift-adp
    Example output
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s
  2. Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

    $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
    Example output
    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
  3. Verify the type is set to Reconciled.

  4. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupStorageLocation -n openshift-adp
    Example output
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

Configuring the DPA with client burst and QPS settings

The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the Velero server by configuring the Data Protection Application (DPA). Use the spec.configuration.velero.client-burst and spec.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.

Prerequisites
  • You have installed the OADP Operator.

Procedure
  • Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

    Example Data Protection Application
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: restic
        velero:
          client-burst: 500 (1)
          client-qps: 300 (2)
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
    1 Specify the client-burst value. In this example, the client-burst field is set to 500.
    2 Specify the client-qps value. In this example, the client-qps field is set to 300.

Configuring node agents and node labels

The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.

Any label specified must match the labels on each node.

To run the node agent only on the nodes that you choose, label those nodes with a custom label:

$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Use the same custom label in the spec.configuration.nodeAgent.podConfig.nodeSelector field of the DPA that you used for labeling the nodes. For example:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""
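
As an optional check, you can list the nodes that carry the custom label used above:

$ oc get nodes -l node-role.kubernetes.io/nodeAgent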

The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:

    configuration:
      nodeAgent:
        enable: true
        podConfig:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
            node-role.kubernetes.io/worker: ""

Enabling CSI in the DataProtectionApplication CR

You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.

Prerequisites
  • The cloud provider must support CSI snapshots.

Procedure
  • Edit the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    ...
    spec:
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - csi (1)
    1 Add the csi default plugin.

Disabling the node agent in DataProtectionApplication

If you are not using Restic, Kopia, or the Data Mover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure that the OADP Operator is idle and not running any backups.

Procedure
  1. To disable the nodeAgent, set the enable flag to false. See the following example:

    Example DataProtectionApplication CR
    # ...
    configuration:
      nodeAgent:
        enable: false  (1)
        uploaderType: kopia
    # ...
    1 Disables the node agent.
  2. To enable the nodeAgent, set the enable flag to true. See the following example:

    Example DataProtectionApplication CR
    # ...
    configuration:
      nodeAgent:
        enable: true  (1)
        uploaderType: kopia
    # ...
    1 Enables the node agent.

You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
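
For example, such a job or script can toggle the field with a merge patch against the DPA; the DPA name and namespace shown are placeholders:

$ oc patch dpa <dpa_sample> -n openshift-adp --type merge \
    -p '{"spec":{"configuration":{"nodeAgent":{"enable":false}}}}'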