Restoring virtual machines

You restore an OpenShift API for Data Protection (OADP) Backup custom resource (CR) by creating a Restore CR.

You can add hooks to the Restore CR to run commands in init containers before the application container starts, or in the application container itself.

Creating a Restore CR

You restore a Backup custom resource (CR) by creating a Restore CR.

Prerequisites
  • You must install the OpenShift API for Data Protection (OADP) Operator.

  • The DataProtectionApplication CR must be in a Ready state.

  • You must have a Velero Backup CR.

  • Adjust the requested size so the persistent volume (PV) capacity matches the requested size at backup time.
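
    For example, assuming the PV had a capacity of 5Gi at backup time (the size, PVC name, and namespace in this sketch are only illustrative), the restored persistent volume claim (PVC) would request the same size:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc_name>
      namespace: <namespace>
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi # matches the PV capacity recorded at backup time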

Procedure
  1. Create a Restore CR, as in the following example (a sketch for creating it from a saved file follows this procedure):

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: <restore>
      namespace: openshift-adp
    spec:
      backupName: <backup> (1)
      includedResources: [] (2)
      excludedResources:
      - nodes
      - events
      - events.events.k8s.io
      - backups.velero.io
      - restores.velero.io
      - resticrepositories.velero.io
      restorePVs: true (3)
    1 Name of the Backup CR.
    2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods) or fully qualified. If unspecified, all resources are included.
    3 Optional: Set the restorePVs parameter to false to turn off restoring PersistentVolumes from Container Storage Interface (CSI) VolumeSnapshots, or from native snapshots when a VolumeSnapshotLocation is configured.
  2. Verify that the status of the Restore CR is Completed by entering the following command:

    $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'
  3. Verify that the backup resources have been restored by entering the following command:

    $ oc get all -n <namespace> (1)
    1 Namespace that you backed up.
  4. If you use Restic to restore DeploymentConfig objects or if you use post-restore hooks, run the dc-restic-post-restore.sh cleanup script by entering the following command:

    $ bash dc-restic-post-restore.sh <restore-name>

    During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This prevents the cluster from deleting the restored DeploymentConfig pods immediately on restore and allows Restic and post-restore hooks to complete their actions on the restored pods. The cleanup script removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas.

    dc-restic-post-restore.sh cleanup script
    #!/bin/bash
    set -e

    # if sha256sum exists, use it to check the integrity of the file
    if command -v sha256sum >/dev/null 2>&1; then
      CHECKSUM_CMD="sha256sum"
    else
      CHECKSUM_CMD="shasum -a 256"
    fi

    # shorten the restore name to a valid label value (63 characters maximum),
    # appending a checksum fragment to keep long names unique
    label_name () {
        if [ "${#1}" -le "63" ]; then
            echo $1
            return
        fi
        sha=$(echo -n $1|$CHECKSUM_CMD)
        echo "${1:0:57}${sha:0:6}"
    }

    OADP_NAMESPACE=${OADP_NAMESPACE:=openshift-adp}

    if [[ $# -ne 1 ]]; then
        echo "usage: ${BASH_SOURCE} restore-name"
        exit 1
    fi

    echo using OADP Namespace $OADP_NAMESPACE
    echo restore: $1

    label=$(label_name $1)
    echo label: $label

    # remove the standalone pods that the plug-ins disconnected from their DeploymentConfig objects
    echo Deleting disconnected restore pods
    oc delete pods -l oadp.openshift.io/disconnected-from-dc=$label

    # scale each modified DeploymentConfig back to its original replica count and paused state
    for dc in $(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=$label -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}{end}')
    do
        IFS=',' read -ra dc_arr <<< "$dc"
        if [ ${#dc_arr[0]} -gt 0 ]; then
            echo Found deployment ${dc_arr[0]}/${dc_arr[1]}, setting replicas: ${dc_arr[2]}, paused: ${dc_arr[3]}
            cat <<EOF | oc patch dc -n ${dc_arr[0]} ${dc_arr[1]} --patch-file /dev/stdin
    spec:
      replicas: ${dc_arr[2]}
      paused: ${dc_arr[3]}
    EOF
        fi
    done
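
The following is a minimal sketch of steps 1 and 2, assuming the example Restore CR is saved in a file named restore.yaml; the file name is only an assumption, and <restore> is a placeholder:

    $ oc create -f restore.yaml
    $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'

If the restore succeeds, the second command prints Completed. Other phases, such as InProgress, PartiallyFailed, or Failed, indicate that the restore is still running or encountered errors.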

Creating restore hooks

You create restore hooks by editing the Restore custom resource (CR). Restore hooks run commands in a container in a pod while your application is being restored.

You can create two types of restore hooks:

  • An init hook adds an init container to a pod to perform setup tasks before the application container starts.

    If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. (A quick way to check the resulting order is shown after this list.)

  • An exec hook runs commands or scripts in a container of a restored pod.
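
The following check is only an illustration; the pod and namespace names are placeholders. For a pod restored from a Restic backup that uses an init restore hook, listing the init container names shows restic-wait before the restore hook init container:

    $ oc get pod <restored_pod> -n <namespace> -o jsonpath='{.spec.initContainers[*].name}'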

Procedure
  • Add a hook to the spec.hooks block of the Restore CR, as in the following example:

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: <restore>
      namespace: openshift-adp
    spec:
      hooks:
        resources:
          - name: <hook_name>
            includedNamespaces:
            - <namespace> (1)
            excludedNamespaces:
            - <namespace>
            includedResources:
            - pods (2)
            excludedResources: []
            labelSelector: (3)
              matchLabels:
                app: velero
                component: server
            postHooks:
            - init:
                initContainers:
                - name: restore-hook-init
                  image: alpine:latest
                  volumeMounts:
                  - mountPath: /restores/pvc1-vm
                    name: pvc1-vm
                  command:
                  - /bin/ash
                  - -c
                timeout: (4)
            - exec:
                container: <container> (5)
                command:
                - /bin/bash (6)
                - -c
                - "psql < /backup/backup.sql"
                waitTimeout: 5m (7)
                execTimeout: 1m (8)
                onError: Continue (9)
    1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
    2 Currently, pods are the only supported resource that hooks can apply to.
    3 Optional: This hook only applies to objects matching the label selector.
    4 Optional: Timeout specifies the maximum amount of time Velero waits for initContainers to complete.
    5 Optional: If the container is not specified, the command runs in the first container in the pod.
    6 This is the entrypoint of the command that runs in the container.
    7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely.
    8 Optional: How long to wait for the commands to run. The default is 30s.
    9 Allowed values for error handling are Fail and Continue:
    • Continue: Only command failures are logged.

    • Fail: No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed.
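
After the restore finishes, one way to check the outcome of the hooks is to inspect the restored pod and the Restore CR. This is only a sketch: the pod, namespace, and restore names are placeholders, and restore-hook-init is the init container name from the example above.

    # Init hook: inspect the logs of the injected init container
    $ oc logs <restored_pod> -n <namespace> -c restore-hook-init
    # Exec hook: with onError set to Fail, a hook failure surfaces as the PartiallyFailed phase
    $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'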