Installing Operators for the image-based upgrade

Installing the Lifecycle Agent by using the CLI

You can use the OpenShift CLI (oc) to install the Lifecycle Agent.

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. Create a Namespace object YAML file for the Lifecycle Agent:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
    1. Create the Namespace CR by running the following command:

      $ oc create -f <namespace_filename>.yaml
  2. Create an OperatorGroup object YAML file for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-lifecycle-agent
      namespace: openshift-lifecycle-agent
    spec:
      targetNamespaces:
      - openshift-lifecycle-agent
    1. Create the OperatorGroup CR by running the following command:

      $ oc create -f <operatorgroup_filename>.yaml
  3. Create a Subscription CR for the Lifecycle Agent:

    apiVersion: operators.coreos.com/v1
    kind: Subscription
    metadata:
      name: openshift-lifecycle-agent-subscription
      namespace: openshift-lifecycle-agent
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    1. Create the Subscription CR by running the following command:

      $ oc create -f <subscription_filename>.yaml
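
You can optionally confirm that OLM has processed the Subscription before you inspect the CSV. The following check is a minimal sketch that assumes the Subscription name used above; it reads the state field that OLM sets in the Subscription status:

    $ oc get subscription openshift-lifecycle-agent-subscription -n openshift-lifecycle-agent -o jsonpath='{.status.state}'

A value of AtLatestKnown indicates that the Operator is installed at the latest version available in the selected channel.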
Verification
  1. To verify that the installation succeeded, inspect the CSV resource by running the following command:

    $ oc get csv -n openshift-lifecycle-agent
    Example output
    NAME                   DISPLAY                     VERSION   REPLACES   PHASE
    lifecycle-agent.v4.0   Openshift Lifecycle Agent   4.0                  Succeeded
  2. Verify that the Lifecycle Agent is up and running by running the following command:

    $ oc get deploy -n openshift-lifecycle-agent
    Example output
    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    lifecycle-agent-controller-manager   1/1     1            1           14s
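
If you prefer a scripted check, the following sketch waits for the Lifecycle Agent deployment to report the Available condition; the deployment name is taken from the example output above, and the timeout value is an arbitrary example:

    $ oc wait deployment lifecycle-agent-controller-manager -n openshift-lifecycle-agent --for=condition=Available --timeout=5m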

Installing the Lifecycle Agent by using the web console

You can use the OKD web console to install the Lifecycle Agent.

Prerequisites
  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. In the OKD web console, navigate to Operators → OperatorHub.

  2. Search for the Lifecycle Agent from the list of available Operators, and then click Install.

  3. On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.

  4. Click Install.

Verification
  1. To confirm that the installation is successful:

    1. Click Operators → Installed Operators.

    2. Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

If the Operator is not installed successfully:

  1. Click Operators → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.

  2. Click Workloads → Pods, and check the logs for pods in the openshift-lifecycle-agent project.
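
For example, the following sketch pulls the logs of the controller deployment created by the Operator, assuming the deployment name shown in the CLI verification earlier in this section:

    $ oc logs -n openshift-lifecycle-agent deployment/lifecycle-agent-controller-manager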

Installing the Lifecycle Agent with GitOps ZTP

Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to perform an image-based upgrade.

Procedure
  1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory (an extraction sketch follows this procedure):

    Example LcaSubscriptionNS.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-lifecycle-agent
    Example LcaSubscriptionOperGroup.yaml file
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: lifecycle-agent-operatorgroup
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
        - openshift-lifecycle-agent
    Example LcaSubscription.yaml file
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lifecycle-agent
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown
    Example directory structure
    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── LcaSubscriptionNS.yaml
    │   ├── LcaSubscriptionOperGroup.yaml
    │   ├── LcaSubscription.yaml
  2. Add the CRs to your common PolicyGenTemplate:

    apiVersion: ran.openshift.io/v1
    kind: PolicyGenTemplate
    metadata:
      name: "example-common-latest"
      namespace: "ztp-common"
    spec:
      bindingRules:
        common: "true"
        du-profile: "latest"
      sourceFiles:
        - fileName: LcaSubscriptionNS.yaml
          policyName: "subscriptions-policy"
        - fileName: LcaSubscriptionOperGroup.yaml
          policyName: "subscriptions-policy"
        - fileName: LcaSubscription.yaml
          policyName: "subscriptions-policy"
    [...]
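
The extraction referenced in step 1 is not specific to the Lifecycle Agent CRs. The following is a minimal sketch of one common pattern for pulling the reference CRs out of the ztp-site-generate container image; the image pull spec and version tag are placeholders that you must replace with the image that matches your GitOps ZTP release:

    $ mkdir -p ./out
    $ podman run --log-driver=none --rm <ztp_site_generate_image_pull_spec> extract /home/ztp --tar | tar x -C ./out

You can then copy the required example CRs from the extracted content into the source-crs directory of your Git repository.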

Installing and configuring the OADP Operator with GitOps ZTP

Install and configure the OADP Operator with GitOps ZTP before starting the upgrade.

Procedure
  1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:

    Example OadpSubscriptionNS.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-adp
    Example OadpSubscriptionOperGroup.yaml file
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
      - openshift-adp
    Example OadpSubscription.yaml file
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: stable-1.4
      name: redhat-oadp-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown
    Example OadpOperatorStatus.yaml file
    apiVersion: operators.coreos.com/v1
    kind: Operator
    metadata:
      name: redhat-oadp-operator.openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    status:
      components:
        refs:
        - kind: Subscription
          namespace: openshift-adp
          conditions:
          - type: CatalogSourcesUnhealthy
            status: "False"
        - kind: InstallPlan
          namespace: openshift-adp
          conditions:
          - type: Installed
            status: "True"
        - kind: ClusterServiceVersion
          namespace: openshift-adp
          conditions:
          - type: Succeeded
            status: "True"
            reason: InstallSucceeded
    Example directory structure
    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── OadpSubscriptionNS.yaml
    │   ├── OadpSubscriptionOperGroup.yaml
    │   ├── OadpSubscription.yaml
    │   ├── OadpOperatorStatus.yaml
  2. Add the CRs to your common PolicyGenTemplate:

    apiVersion: ran.openshift.io/v1
    kind: PolicyGenTemplate
    metadata:
      name: "example-common-latest"
      namespace: "ztp-common"
    spec:
      bindingRules:
        common: "true"
        du-profile: "latest"
      sourceFiles:
        - fileName: OadpSubscriptionNS.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpSubscriptionOperGroup.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpSubscription.yaml
          policyName: "subscriptions-policy"
        - fileName: OadpOperatorStatus.yaml
          policyName: "subscriptions-policy"
    [...]
  3. Create the DataProtectionApplication CR and the S3 secret only for the target cluster:

    1. Extract the following CRs from the ztp-site-generate container image and push them to the source-crs directory:

      Example DataProtectionApplication.yaml file
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: dataprotectionapplication
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        configuration:
          restic:
            enable: false (1)
          velero:
            defaultPlugins:
              - aws
              - openshift
            resourceTimeout: 10m
        backupLocations:
          - velero:
              config:
                profile: "default"
                region: minio
                s3Url: $url
                insecureSkipTLSVerify: "true"
                s3ForcePathStyle: "true"
              provider: aws
              default: true
              credential:
                key: cloud
                name: cloud-credentials
              objectStorage:
                bucket: $bucketName (2)
                prefix: $prefixName (2)
      status:
        conditions:
        - reason: Complete
          status: "True"
          type: Reconciled
      1 The spec.configuration.restic.enable field must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.
      2 The bucket field defines the bucket name that is created in the S3 backend. The prefix field defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
      Example OadpSecret.yaml file
      apiVersion: v1
      kind: Secret
      metadata:
        name: cloud-credentials
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      type: Opaque
      Example OadpBackupStorageLocationStatus.yaml file
      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      status:
        phase: Available

      The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP.

    2. Add the CRs to your site PolicyGenTemplate with overrides:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "example-cnf"
        namespace: "ztp-site"
      spec:
        bindingRules:
          sites: "example-cnf"
          du-profile: "latest"
        mcp: "master"
        sourceFiles:
          ...
          - fileName: OadpSecret.yaml
            policyName: "config-policy"
            data:
              cloud: <your_credentials> (1)
          - fileName: DataProtectionApplication.yaml
            policyName: "config-policy"
            spec:
              backupLocations:
                - velero:
                    config:
                      region: minio
                      s3Url: <your_S3_URL> (2)
                      profile: "default"
                      insecureSkipTLSVerify: "true"
                      s3ForcePathStyle: "true"
                    provider: aws
                    default: true
                    credential:
                      key: cloud
                      name: cloud-credentials
                    objectStorage:
                      bucket: <your_bucket_name> (3)
                      prefix: <cluster_name> (3)
          - fileName: OadpBackupStorageLocationStatus.yaml
            policyName: "config-policy"
      1 Specify your credentials for your S3 storage backend.
      2 Specify the URL for your S3-compatible bucket.
      3 The bucket field defines the bucket name that is created in the S3 backend. The prefix field defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
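
For callout 1, the cloud key in OadpSecret.yaml carries the object storage credentials in the AWS-style credentials file format that Velero expects. Because the value is placed in the data field of a Secret, it must be base64-encoded. The following is a minimal sketch of building that value; the access key values are placeholders for your S3-compatible backend:

      Example credentials file
      [default]
      aws_access_key_id=<access_key_id>
      aws_secret_access_key=<secret_access_key>

      $ base64 -w0 credentials

After the policies are applied, you can check on the target cluster that the backup storage location created by OADP is healthy by running oc get backupstoragelocation -n openshift-adp and confirming that its phase is Available, which is the same condition that the OadpBackupStorageLocationStatus.yaml CR validates.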