You can install the Migration Toolkit for Containers (MTC) on OKD 3 and 4 in a restricted network environment by performing the following procedures:
Create a mirrored Operator catalog.
This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster.
Install the Migration Toolkit for Containers Operator on the OKD 4.8 target cluster by using Operator Lifecycle Manager.
By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster.
Install the legacy Migration Toolkit for Containers Operator on the OKD 3 source cluster from the command line interface.
Configure object storage to use as a replication repository.
To uninstall MTC, see Uninstalling MTC and deleting resources.
You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OKD version.
legacy platform: OKD 4.5 and earlier.
modern platform: OKD 4.6 and later.
legacy operator: The MTC Operator designed for legacy platforms.
modern operator: The MTC Operator designed for modern platforms.
control cluster: The cluster that runs the MTC controller and GUI.
remote cluster: A source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters via the Velero API to drive migrations.
| | OKD 4.5 or earlier | OKD 4.6 or later |
|---|---|---|
| Stable MTC version | MTC 1.7.z. Legacy 1.7 operator: install manually with the operator.yml file. This cluster cannot be the control cluster. | MTC 1.7.z. Install with OLM from the release channel. |
Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OKD 3.11 cluster on premises to a modern OKD cluster in the cloud, the modern cluster might not be able to connect to the OKD 3.11 cluster. With MTC 1.7, if one of the remote clusters cannot communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case you can designate the legacy cluster as the control cluster and push workloads to the remote cluster. |
You install the Migration Toolkit for Containers Operator on OKD 4.8 by using the Operator Lifecycle Manager.
You must be logged in as a user with cluster-admin privileges on all clusters.
You must create an Operator catalog from a mirror image in a local registry.
In the OKD web console, click Operators → OperatorHub.
Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
Select the Migration Toolkit for Containers Operator and click Install.
Click Install.
On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
Click Migration Toolkit for Containers Operator.
Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
Click Create.
Click Workloads → Pods to verify that the MTC pods are running.
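Equivalently, you can verify from the command line that the pods in the openshift-migration project are running:
$ oc get pods -n openshift-migration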
You can install the legacy Migration Toolkit for Containers Operator manually on OKD 3.
You must be logged in as a user with cluster-admin privileges on all clusters.
You must have access to registry.redhat.io.
You must have podman installed.
You must create an image stream secret and copy it to each node in the cluster.
You must have a Linux workstation with network access in order to download files from registry.redhat.io.
You must create a mirror image of the Operator catalog.
You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OKD 4.8.
Log in to registry.redhat.io with your Red Hat Customer Portal credentials:
$ sudo podman login registry.redhat.io
Download the operator.yml file by entering the following command:
$ sudo podman cp $(sudo podman create \
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./
Download the controller.yml file by entering the following command:
$ sudo podman cp $(sudo podman create \
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./
Obtain the Operator image mapping by running the following command:
$ grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc
The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image.
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator
Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file:
containers:
- name: ansible
  image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
...
- name: operator
  image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
...
  env:
  - name: REGISTRY
    value: <registry.apps.example.com> (2)
1 | Specify your mirror registry and the sha256 value of the Operator image. |
2 | Specify your mirror registry. |
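If you prefer to script the image substitution instead of editing the file by hand, the following is a minimal sketch; the mirror registry host and digest are placeholders that you take from your own mapping.txt output, and you still need to set the REGISTRY value manually:
$ MIRROR_IMAGE="<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<digest>"
$ sed -i "s|image:.*openshift-migration-legacy-rhel8-operator.*|image: ${MIRROR_IMAGE}|" operator.yml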
Log in to your source cluster.
Create the Migration Toolkit for Containers Operator object:
$ oc create -f operator.yml
namespace/openshift-migration created
rolebinding.rbac.authorization.k8s.io/system:deployers created
serviceaccount/migration-operator created
customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
role.rbac.authorization.k8s.io/migration-operator created
rolebinding.rbac.authorization.k8s.io/migration-operator created
clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
deployment.apps/migration-operator created
Error from server (AlreadyExists): error when creating "./operator.yml":
rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
Error from server (AlreadyExists): error when creating "./operator.yml":
rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
1 | You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OKD 4 that are provided in later releases. |
Create the MigrationController object:
$ oc create -f controller.yml
Verify that the MTC pods are running:
$ oc get pods -n openshift-migration
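If you want to wait for the Operator rollout from the command line, you can watch the migration-operator deployment created in the previous step; the command blocks until the rollout completes:
$ oc rollout status deployment/migration-operator -n openshift-migration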
For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.
For OKD 4.2 to 4.8, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.
Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy.
You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  stunnel_tcp_proxy: http://username:password@ip:port
Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC.
You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel.
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy.
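If you prefer to set the variable without editing the manifest, a minimal sketch using oc patch follows, assuming the default migration-controller CR name shown above:
$ oc patch migrationcontroller migration-controller -n openshift-migration \
  --type merge -p '{"spec":{"stunnel_tcp_proxy":"http://username:password@ip:port"}}'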
Upgrade request required
The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required.
Workaround: Use a proxy that supports the SPDY protocol.
In addition to supporting the SPDY protocol, the proxy or firewall must also pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required.
Workaround: Ensure that the proxy forwards the Upgrade header.
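A quick way to test whether the proxy allows the protocol upgrade is to run any remote command through it; if the upgrade is blocked, the command fails with the same error. The pod name below is a placeholder for any running pod on the remote cluster:
$ oc exec -n openshift-migration <pod_name> -- date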
OpenShift supports restricting traffic to or from pods by using NetworkPolicy objects or egress firewalls, depending on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration.
Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions.
You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress-from-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  egress:
  - {}
  policyTypes:
  - Egress
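If the namespace also restricts ingress traffic, you can allow ingress to the Rsync pods with a matching policy. The following is a sketch that reuses the same pod labels:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-to-rsync-pods
spec:
  podSelector:
    matchLabels:
      owner: directvolumemigration
      app: directvolumemigration-rsync-transfer
  ingress:
  - {}
  policyTypes:
  - Ingress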
The EgressNetworkPolicy object, or Egress Firewall, is an OpenShift construct designed to block egress traffic leaving the cluster.
Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters.
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two:
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: test-egress-policy
  namespace: <namespace>
spec:
  egress:
  - to:
      cidrSelector: <cidr_of_source_or_target_cluster>
    type: Allow
If your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access the storage:
| Variable | Type | Default | Description |
|---|---|---|---|
| src_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for source Rsync pods |
| target_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for target Rsync pods |
The MigrationController CR can be updated to set values for these supplemental groups:
spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"
You must be logged in as a user with cluster-admin privileges on all clusters.
Get the MigrationController CR manifest:
$ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml
Update the proxy parameters:
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: <migration_controller>
  namespace: openshift-migration
...
spec:
  stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
  noProxy: example.com (2)
1 | Stunnel proxy URL for direct volume migration. |
2 | Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. |
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.
This field is ignored if neither the httpProxy nor the httpsProxy field is set.
Save the manifest as migration-controller.yaml.
Apply the updated manifest:
$ oc replace -f migration-controller.yaml -n openshift-migration
For more information, see Configuring the cluster-wide proxy.
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. Multi-Cloud Object Gateway (MCG) is the only supported option for a restricted network environment.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
All clusters must have uninterrupted network access to the replication repository.
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
You can install the OpenShift Container Storage Operator from OperatorHub.
See Disconnected environment in Red Hat OpenShift Container Storage: Planning your deployment for more information.
Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager as shown in Obtaining the installation program in the installation documentation for your platform.
If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.
In the OKD web console, click Operators → OperatorHub.
Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
Select the OpenShift Container Storage Operator and click Install.
Select an Update Channel, Installation Mode, and Approval Strategy.
Click Install.
On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.
You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).
Log in to the OKD cluster:
$ oc login -u <username>
Create the NooBaa CR configuration file, noobaa.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: <noobaa>
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
  coreResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
1 | For a very small cluster, you can change the value to 0.1 . |
Create the NooBaa object:
$ oc create -f noobaa.yml
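The NooBaa system can take several minutes to start. Assuming the NooBaa CR reports a status.phase field, as current releases do, you can poll it until it reads Ready:
$ oc get noobaa <noobaa> -n openshift-storage -o jsonpath='{.status.phase}'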
Create the BackingStore CR configuration file, bs.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: <mcg_backing_store>
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 3 (1)
    resources:
      requests:
        storage: <volume_size> (2)
    storageClass: <storage_class> (3)
  type: pv-pool
1 | Specify the number of volumes in the persistent volume pool. |
2 | Specify the size of the volumes, for example, 50Gi . |
3 | Specify the storage class, for example, gp2 . |
Create the BackingStore object:
$ oc create -f bs.yml
Create the BucketClass CR configuration file, bc.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: <mcg_bucket_class>
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <mcg_backing_store>
      placement: Spread
Create the BucketClass object:
$ oc create -f bc.yml
Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <bucket>
  namespace: openshift-storage
spec:
  bucketName: <bucket> (1)
  storageClassName: <storage_class>
  additionalConfig:
    bucketclass: <mcg_bucket_class>
1 | Record the bucket name for adding the replication repository to the MTC web console. |
Create the ObjectBucketClaim object:
$ oc create -f obc.yml
Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:
$ watch -n 30 'oc get -n openshift-storage objectbucketclaim <bucket> -o yaml'
This process can take five to ten minutes.
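Instead of watching the full YAML, you can poll only the claim phase; this sketch assumes the claim name you created above:
$ oc get objectbucketclaim <bucket> -n openshift-storage -o jsonpath='{.status.phase}'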
Obtain and record the following values, which are required when you add the replication repository to the MTC web console:
S3 endpoint:
$ oc get route -n openshift-storage s3
S3 provider access key:
$ oc get secret -n openshift-storage <bucket> \
  -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
S3 provider secret access key:
$ oc get secret -n openshift-storage <bucket> \
  -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
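As an optional sanity check of these values, you can list the bucket through the S3 route with the AWS CLI. This sketch assumes the aws CLI is installed on your workstation; add --no-verify-ssl if the route uses a self-signed certificate:
$ export AWS_ACCESS_KEY_ID=<s3_access_key>
$ export AWS_SECRET_ACCESS_KEY=<s3_secret_access_key>
$ aws s3 ls s3://<bucket> \
  --endpoint-url https://$(oc get route -n openshift-storage s3 -o jsonpath='{.spec.host}')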
You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.
You must be logged in as a user with cluster-admin privileges.
Delete the MigrationController custom resource (CR) on all clusters:
$ oc delete migrationcontroller <migration_controller>
Uninstall the Migration Toolkit for Containers Operator on OKD 4 by using the Operator Lifecycle Manager.
Delete cluster-scoped resources on all clusters by running the following commands:
migration custom resource definitions (CRDs):
$ oc delete $(oc get crds -o name | grep 'migration.openshift.io')
velero CRDs:
$ oc delete $(oc get crds -o name | grep 'velero')
migration cluster roles:
$ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
migration-operator cluster role:
$ oc delete clusterrole migration-operator
velero cluster roles:
$ oc delete $(oc get clusterroles -o name | grep 'velero')
migration cluster role bindings:
$ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
migration-operator cluster role bindings:
$ oc delete clusterrolebindings migration-operator
velero cluster role bindings:
$ oc delete $(oc get clusterrolebindings -o name | grep 'velero')
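To confirm that the cleanup succeeded, you can re-run the same grep patterns across the deleted resource types; the command should return no output:
$ oc get crds,clusterroles,clusterrolebindings -o name | grep -E 'migration.openshift.io|velero'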