Deploying the Cluster Application Migration tool

You can install the Cluster Application Migration Operator on your OpenShift Container Platform 4.2 target cluster and 4.2 source cluster. The Cluster Application Migration Operator installs the Cluster Application Migration (CAM) tool on the target cluster by default.

Optional: You can configure the Cluster Application Migration Operator to install the CAM tool on an OpenShift Container Platform 3 cluster or on a remote cluster.

In a restricted environment, you can install the Cluster Application Migration Operator from a local mirror registry.

After you have installed the Cluster Application Migration Operator on your clusters, you can launch the CAM tool.

Installing the Cluster Application Migration Operator

You can install the Cluster Application Migration Operator with the Operator Lifecycle Manager (OLM) on an OpenShift Container Platform 4.2 target cluster and on an OpenShift Container Platform 4.2 source cluster.

Installing the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster

You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).

The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.

Procedure
  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.

  3. Select the Cluster Application Migration Operator and click Install.

  4. On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.

  5. Click Subscribe.

    On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.

  6. Under Provided APIs, click View 12 more….

  7. Click Create New → MigrationController.

  8. Click Create.

  9. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.
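
    Alternatively, you can check from the command line (the exact Pod names vary by release):

    $ oc get pods -n openshift-migration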

Installing the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 source cluster

You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 source cluster with the Operator Lifecycle Manager (OLM).

Procedure
  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.

  3. Select the Cluster Application Migration Operator and click Install.

  4. On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.

  5. Click Subscribe.

    On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.

  6. Under Provided APIs, click View 12 more….

  7. Click Create New → MigrationController.

  8. Update the migration_controller and migration_ui parameters in the spec stanza:

    spec:
      ...
      migration_controller: false
      migration_ui: false
      ...
  9. Click Create.

  10. Click Workloads → Pods to verify that the Restic and Velero Pods are running.
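
For reference, a complete MigrationController manifest with the settings from step 8 might look like the following (a sketch; verify the apiVersion and default fields against the version of the Operator that you installed):

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      migration_controller: false
      migration_ui: false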

Installing the Cluster Application Migration Operator in a restricted environment

You can build a custom Operator catalog image for OpenShift Container Platform 4, push it to a local mirror image registry, and configure OLM to install the Operator from the local registry.

Configuring OperatorHub for restricted networks

Cluster administrators can configure OLM and OperatorHub to use local content in restricted network environments.

Prerequisites
  • Cluster administrator access to an OpenShift Container Platform cluster and its internal registry.

  • Separate workstation without network restrictions.

  • If pushing images to the OpenShift Container Platform cluster’s internal registry, the registry must be exposed with a route.

  • podman version 1.4.4+

Procedure
  1. Disable the default OperatorSources.

    Add disableAllDefaultSources: true to the spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

    This disables the default OperatorSources that are configured by default during an OpenShift Container Platform installation.
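
    You can confirm the new setting (this queries the same OperatorHub resource, named cluster, that the patch above modified):

    $ oc get OperatorHub cluster -o jsonpath='{.spec.disableAllDefaultSources}'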

  2. Retrieve package lists.

    To get the list of packages that are available for the default OperatorSources, run the following curl commands from your workstation without network restrictions:

    $ curl "https://quay.io/cnr/api/v1/packages?namespace=redhat-operators" > packages.txt
    $ curl "https://quay.io/cnr/api/v1/packages?namespace=community-operators" >> packages.txt
    $ curl "https://quay.io/cnr/api/v1/packages?namespace=certified-operators" >> packages.txt

    Each package listed in the new packages.txt file is an Operator that you can add to your restricted network catalog. From this list, you can pull every Operator or only the subset that you want to expose to users.
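
    Each curl command returns a JSON document, so you can list the available package names with a tool such as jq (a convenience sketch; it assumes each response is an array of objects with a name field):

    $ jq -r '.[].name' packages.txt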

  3. Pull Operator content.

    For a given Operator in the package list, you must pull the latest released content:

    $ curl https://quay.io/cnr/api/v1/packages/<namespace>/<operator_name>/<release>

    This example uses the etcd Operator:

    1. Retrieve the digest:

      $ curl https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12
    2. From that JSON, take the digest and use it to pull the gzipped archive:

      $ curl -XGET https://quay.io/cnr/api/v1/packages/community-operators/etcd/blobs/sha256/8108475ee5e83a0187d6d0a729451ef1ce6d34c44a868a200151c36f3232822b \
          -o etcd.tar.gz
    3. Extract the archive into a manifests/<operator_name>/ directory, alongside any other Operators that you want to include. For example, to extract the etcd archive to a directory called manifests/etcd/:

      $ mkdir -p manifests/etcd/ (1)
      $ tar -xf etcd.tar.gz -C manifests/etcd/
      1 Create different subdirectories for each extracted archive so that files are not overwritten by subsequent extractions for other Operators.
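
    The digest retrieval and blob download can also be scripted together (a sketch that assumes the release endpoint returns a JSON array whose first element carries the digest under content.digest, as in the example above):

      $ digest=$(curl -s https://quay.io/cnr/api/v1/packages/community-operators/etcd/0.0.12 \
          | jq -r '.[0].content.digest')
      $ curl -s -o etcd.tar.gz \
          https://quay.io/cnr/api/v1/packages/community-operators/etcd/blobs/sha256/$digest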
  4. Break apart bundle.yaml content, if necessary.

    In your new manifests/<operator_name> directory, the goal is to get your bundle in the following directory structure:

    manifests/
    └── etcd
        ├── 0.0.12
        │   ├── clusterserviceversion.yaml
        │   └── customresourcedefinition.yaml
        └── package.yaml

    If you see files already in this structure, you can skip this step. However, if you instead see only a single file called bundle.yaml, you must first break this file up to conform to the required structure.

    You must separate the CSV content under data.clusterServiceVersions (each entry in the list), the CRD content under data.customResourceDefinitions (each entry in the list), and the package content under data.packages into their own files.

    1. For the CSV file creation, find the following lines in the bundle.yaml file:

      data:
        clusterServiceVersions: |

      Omit those lines, but save a new file consisting of the full CSV resource content beginning with the following lines, removing the prepended - character:

      Example clusterserviceversion.yaml file snippet
      apiVersion: operators.coreos.com/v1alpha1
      kind: ClusterServiceVersion
      [...]
    2. For the CRD file creation, find the following line in the bundle.yaml file:

        customResourceDefinitions: |

      Omit this line, but save new files, one per CRD, each consisting of the full CRD resource content beginning with the following lines and with the prepended - character removed:

      Example customresourcedefinition.yaml file snippet
      apiVersion: apiextensions.k8s.io/v1beta1
      kind: CustomResourceDefinition
      [...]
    3. For the package file creation, find the following line in the bundle.yaml file:

        packages: |

      Omit this line, but save a new file consisting of the package content beginning with the following lines, removing the prepended - character, and ending with a packageName entry:

      Example package.yaml file
      channels:
      - currentCSV: etcdoperator.v0.9.4
        name: singlenamespace-alpha
      - currentCSV: etcdoperator.v0.9.4-clusterwide
        name: clusterwide-alpha
      defaultChannel: singlenamespace-alpha
      packageName: etcd
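
    If you prefer to script this split, the following sketch uses Python with PyYAML (an illustration only, under the assumptions that the data values are embedded YAML lists as shown above; the output file names mirror the etcd example, so rename them to match the required structure):

      # split_bundle.py: split an Operator bundle.yaml into separate manifests.
      # Requires PyYAML (pip install pyyaml). Run with: python3 split_bundle.py
      import os
      import yaml

      with open("bundle.yaml") as f:
          data = yaml.safe_load(f)["data"]

      release_dir = "manifests/etcd/0.0.12"  # adjust for your Operator and release
      os.makedirs(release_dir, exist_ok=True)

      # Each data.* value is itself a YAML document embedded as a string.
      for i, csv in enumerate(yaml.safe_load(data["clusterServiceVersions"])):
          with open("%s/clusterserviceversion-%d.yaml" % (release_dir, i), "w") as out:
              yaml.safe_dump(csv, out)

      for i, crd in enumerate(yaml.safe_load(data["customResourceDefinitions"])):
          with open("%s/customresourcedefinition-%d.yaml" % (release_dir, i), "w") as out:
              yaml.safe_dump(crd, out)

      # The packages list typically contains a single package entry.
      with open("manifests/etcd/package.yaml", "w") as out:
          yaml.safe_dump(yaml.safe_load(data["packages"])[0], out)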
  5. Identify images required by the Operators you want to use.

    Inspect the CSV files of each Operator for image: fields to identify the pull specs for any images required by the Operator, and note them for use in a later step.

    For example, in the following deployments spec of an etcd Operator CSV:

      spec:
       serviceAccountName: etcd-operator
       containers:
       - name: etcd-operator
         command:
         - etcd-operator
         - --create-crd=false
         image: quay.io/coreos/etcd-operator@sha256:bd944a211eaf8f31da5e6d69e8541e7cada8f16a9f7a5a570b22478997819943 (1)
         env:
         - name: MY_POD_NAMESPACE
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
         - name: MY_POD_NAME
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
    1 Image required by Operator.
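
    A quick way to collect these pull specs across all of the extracted manifests (a rough sketch; CSVs can also reference images through environment variables and other fields, so review the files as well):

      $ grep -rhoE 'image: *[^ ]+' manifests/ | sort -u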
  6. Create an Operator catalog image.

    1. Save the following to a Dockerfile, for example named custom-registry.Dockerfile:

      FROM registry.redhat.io/openshift4/ose-operator-registry:v4.2.24 AS builder
      
      COPY manifests manifests
      
      RUN /bin/initializer -o ./bundles.db
      
      FROM registry.access.redhat.com/ubi7/ubi
      
      COPY --from=builder /registry/bundles.db /bundles.db
      COPY --from=builder /usr/bin/registry-server /registry-server
      COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
      
      EXPOSE 50051
      
      ENTRYPOINT ["/registry-server"]
      
      CMD ["--database", "bundles.db"]
    2. Use the podman command to create and tag the container image from the Dockerfile:

      $ podman build -f custom-registry.Dockerfile \
          -t <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry (1)
      1 Tag the image for the internal registry of the restricted network OpenShift Container Platform cluster, using a namespace of your choice.
  7. Push the Operator catalog image to a registry.

    Your new Operator catalog image must be pushed to a registry that the restricted network OpenShift Container Platform cluster can access. This can be the internal registry of the cluster itself or another registry that the cluster has network access to, such as an on-premise Quay Enterprise registry.

    For this example, log in and push the image to the internal registry of the OpenShift Container Platform cluster:
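
    If the registry requires authentication, log in first (a minimal sketch; TLS and credential options depend on your registry configuration):

    $ podman login <local_registry_host_name>:<local_registry_host_port>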

    $ podman push <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry
  8. Create a CatalogSource pointing to the new Operator catalog image.

    1. Save the following to a file, for example my-operator-catalog.yaml:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        displayName: My Operator Catalog
        sourceType: grpc
        image: <local_registry_host_name>:<local_registry_host_port>/<namespace>/custom-registry:latest
    2. Create the CatalogSource resource:

      $ oc create -f my-operator-catalog.yaml
    3. Verify the CatalogSource and package manifest are created successfully:

      $ oc get pods -n openshift-marketplace
      NAME                                   READY   STATUS    RESTARTS   AGE
      my-operator-catalog-6njx6              1/1     Running   0          28s
      marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h

      $ oc get catalogsource -n openshift-marketplace
      NAME                  DISPLAY               TYPE   PUBLISHER   AGE
      my-operator-catalog   My Operator Catalog   grpc               5s

      $ oc get packagemanifest -n openshift-marketplace
      NAME   CATALOG               AGE
      etcd   My Operator Catalog   34s

      You should also be able to view them from the OperatorHub page in the web console.

  9. Mirror the images required by the Operators you want to use.

    1. Determine the images required by the Operators that you plan to use. This example uses the etcd Operator, which requires the quay.io/coreos/etcd-operator image.

      This procedure only shows mirroring Operator images themselves and not Operand images, which are the components that an Operator manages. Operand images must be mirrored as well; see each Operator’s documentation to identify the required Operand images.

    2. To use mirrored images, you must first create an ImageContentSourcePolicy for each image, which redirects requests for the source image to your mirror registry. For example:

      apiVersion: operator.openshift.io/v1alpha1
      kind: ImageContentSourcePolicy
      metadata:
        name: etcd-operator
      spec:
        repositoryDigestMirrors:
        - mirrors:
          - <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
          source: quay.io/coreos/etcd-operator
    3. Use the oc image mirror command from your workstation without network restrictions to pull the image from the source registry and push it to the internal registry, without the image being stored locally:

      $ oc image mirror quay.io/coreos/etcd-operator \
          <local_registry_host_name>:<local_registry_host_port>/coreos/etcd-operator
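
    The ImageContentSourcePolicy from the earlier sub-step is created like any other resource (the file name here is illustrative):

      $ oc create -f etcd-operator-icsp.yaml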

You can now install the Operator from the OperatorHub on your restricted network OpenShift Container Platform cluster.

Installing the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster in a restricted environment

You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 target cluster with the Operator Lifecycle Manager (OLM).

The Cluster Application Migration Operator installs the Cluster Application Migration tool on the target cluster by default.

Prerequisites
  • You created a custom Operator catalog and pushed it to a mirror registry.

  • You configured OLM to install the Cluster Application Migration Operator from the mirror registry.

Procedure
  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.

  3. Select the Cluster Application Migration Operator and click Install.

  4. On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.

  5. Click Subscribe.

    On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.

  6. Under Provided APIs, click View 12 more….

  7. Click Create New → MigrationController.

  8. Click Create.

  9. Click Workloads → Pods to verify that the Controller Manager, Migration UI, Restic, and Velero Pods are running.

Installing the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 source cluster in a restricted environment

You can install the Cluster Application Migration Operator on an OpenShift Container Platform 4.2 source cluster with the Operator Lifecycle Manager (OLM).

Prerequisites
  • You created a custom Operator catalog and pushed it to a mirror registry.

  • You configured OLM to install the Cluster Application Migration Operator from the mirror registry.

Procedure
  1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field (in this case, Migration) to find the Cluster Application Migration Operator.

  3. Select the Cluster Application Migration Operator and click Install.

  4. On the Create Operator Subscription page, select the openshift-migration namespace, and specify an approval strategy.

  5. Click Subscribe.

    On the Installed Operators page, the Cluster Application Migration Operator appears in the openshift-migration project with the status InstallSucceeded.

  6. Under Provided APIs, click View 12 more….

  7. Click Create New → MigrationController.

  8. Update the migration_controller and migration_ui parameters in the spec stanza, setting both to false as in the standard source cluster procedure.

  9. Click Create.

Launching the CAM web console

You can launch the CAM web console in a browser.

Procedure
  1. Log in to the OpenShift Container Platform cluster on which you have installed the CAM tool.

  2. Obtain the CAM web console URL by entering the following command:

    $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'

    The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com.

  3. Launch a browser and navigate to the CAM web console.

    If you try to access the CAM web console immediately after installing the Cluster Application Migration Operator, the console may not load because the Operator is still configuring the cluster. Wait a few minutes and retry.

  4. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster’s API server. The web page guides you through the process of accepting the remaining certificates.

  5. Log in with your OpenShift Container Platform username and password.
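
On a Linux workstation, steps 2 and 3 can be combined into a single command (a convenience sketch, assuming xdg-open is available):

    $ xdg-open "$(oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}')"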