This is a cache of https://docs.okd.io/4.17/backup_and_restore/application_backup_and_restore/troubleshooting/issues-with-velero-and-admission-webhooks.html. It is a snapshot of the page at 2026-03-08T01:42:38.576+0000.
Issues with Velero and admission webhooks - OADP Application backup and restore | Backup and restore | OKD 4.17

Resolve restore failures caused by admission webhooks by applying workarounds for workloads such as Knative and IBM AppConnect resources. This helps you to successfully restore workloads that have mutating or validating admission webhooks.

Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload. Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources.

For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use.

Velero plugins are started as separate processes. After a Velero operation completes, whether successfully or not, the plugin exits. A received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed; it does not mean that an error has occurred.
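The distinction can be sketched with a quick filter over a debug log; the sample log lines below are hypothetical stand-ins for real Velero debug output:

```shell
# A "received EOF" debug line marks normal plugin shutdown, not a failure.
# These log lines are hypothetical samples; only level=error lines
# indicate actual problems.
cat <<'EOF' > /tmp/velero-debug.log
time="2024-02-27T10:46:50Z" level=debug msg="received EOF, stopping recv loop"
time="2024-02-27T10:46:51Z" level=error msg="Error backing up item"
EOF
# Filter out the informational noise and keep only real errors:
grep 'level=error' /tmp/velero-debug.log
```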

Restoring Knative resources

Resolve issues with restoring Knative resources that use admission webhooks by restoring the top-level service.serving.knative.dev service resource with Velero. This helps you to ensure that Knative resources are restored successfully without admission webhook errors.

Procedure
  • Restore the top-level service.serving.knative.dev Service resource by using the following command:

    $ velero restore create <restore_name> \
      --from-backup=<backup_name> \
      --include-resources=service.serving.knative.dev

Restoring IBM AppConnect resources

Troubleshoot Velero restore failures for IBM® AppConnect resources that use admission webhooks. Verify your webhook rules and check that the installed Operator supports the backup’s version to successfully complete the restore.

Procedure
  1. Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster:

    $ oc get mutatingwebhookconfigurations
  2. Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation.

  3. Check that the spec.version of each Configuration.appconnect.ibm.com/v1beta1 resource used at backup time is supported by the installed Operator.
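As a sketch of the rule check in step 2, the following jq filter lists webhooks whose rules cover CREATE on a given resource. The sample configuration and all names in it are hypothetical; against a live cluster you would pipe `oc get mutatingwebhookconfigurations -o json` into the same filter instead of the sample file:

```shell
# Hypothetical sample of a MutatingWebhookConfiguration list; on a real
# cluster, replace the file with `oc get mutatingwebhookconfigurations -o json`.
cat <<'EOF' > /tmp/mwc.json
{
  "items": [
    {
      "metadata": {"name": "example-webhook"},
      "webhooks": [
        {
          "name": "mutate.example.com",
          "rules": [
            {
              "apiGroups": ["appconnect.ibm.com"],
              "apiVersions": ["v1beta1"],
              "operations": ["CREATE"],
              "resources": ["configurations"]
            }
          ]
        }
      ]
    }
  ]
}
EOF
# List webhooks whose rules intercept CREATE on the resource of interest:
jq -r '.items[]
       | .metadata.name as $cfg
       | .webhooks[]
       | select(.rules[]? | ((.operations | index("CREATE"))
                and (.resources | index("configurations"))))
       | "\($cfg): \(.name)"' /tmp/mwc.json
```

Any webhook this prints has a rule that could block the restore of the matching objects.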

Avoiding the Velero plugin panic error

Label a custom Backup Storage Location (BSL) to resolve Velero plugin panic errors during imagestream backups. This action prompts the OADP controller to create the required registry secret when you manage the BSL outside the DataProtectionApplication (DPA) custom resource (CR).

A missing secret can cause a panic error for the Velero plugin during imagestream backups. When the backup and the BSL are managed outside the scope of the DPA, the OADP controller does not create the required oadp-<bsl_name>-<bsl_provider>-registry-secret secret.

During the backup operation, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:

2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
Procedure
  1. Label the custom BSL with the relevant label by using the following command:

    $ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
  2. After the BSL is labeled, wait until the DPA reconciles.

    You can force the reconciliation by making any minor change to the DPA itself.

Verification
  • After the DPA is reconciled, confirm that the secret has been created and that the correct registry data has been populated into it by entering the following command:

    $ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
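The values under .data are base64-encoded. A minimal sketch of decoding them for inspection, using a hypothetical sample payload in place of the live secret (the key name cloud and its value are illustrative, not the exact keys OADP populates):

```shell
# Hypothetical sample of the secret JSON; on a real cluster, pipe the
# `oc ... get secret ... -o json` output into the same jq filter instead.
cat <<'EOF' > /tmp/secret.json
{"data": {"cloud": "W2RlZmF1bHRd"}}
EOF
# Decode every .data field so the registry credentials can be read:
jq -r '.data | to_entries[] | "\(.key)=\(.value|@base64d)"' /tmp/secret.json
```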

Workaround for OpenShift ADP Controller segmentation fault

Define either velero or cloudstorage, but not both, in your Data Protection Application (DPA) configuration to prevent indefinite pod crashes. This configuration resolves a segmentation fault in the openshift-adp-controller-manager pod that occurs when both components, or neither, are defined.

Define either velero or cloudstorage when you configure a DPA. Otherwise, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault due to the following conditions:

  • If you define both velero and cloudstorage, the openshift-adp-controller-manager pod fails.

  • If you define neither velero nor cloudstorage, the openshift-adp-controller-manager pod fails.
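Under these constraints, a backup location entry that defines only velero might look like the following sketch; all names and values are placeholders:

```yaml
# Illustrative DPA with a velero-only backup location (placeholder values).
# Do not also set cloudstorage for the same entry.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
```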

For more information about this issue, see OADP-1054.