Migrating from the OpenShift SDN cluster network provider

As a cluster administrator, you can migrate to the OVN-Kubernetes Container Network Interface (CNI) cluster network provider from the OpenShift SDN CNI cluster network provider.

To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network provider.

Migration to the OVN-Kubernetes network provider

Migrating to the OVN-Kubernetes Container Network Interface (CNI) cluster network provider is a manual process that includes some downtime during which your cluster is unreachable. Although a rollback procedure is provided, the migration is intended to be a one-way process.

A migration to the OVN-Kubernetes cluster network provider is supported on the following platforms:

  • Bare metal hardware

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

  • Red Hat OpenStack Platform (RHOSP)

  • Red Hat Virtualization (RHV)

  • VMware vSphere

Migrating to or from the OVN-Kubernetes network plugin is not supported for managed OpenShift cloud services such as OpenShift Dedicated and Red Hat OpenShift Service on AWS (ROSA).

Considerations for migrating to the OVN-Kubernetes network provider

If you have more than 150 nodes in your OpenShift Container Platform cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin.

The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration.

While the OVN-Kubernetes network provider implements many of the capabilities present in the OpenShift SDN network provider, the configuration is not the same.

  • If your cluster uses any of the following OpenShift SDN capabilities, you must manually configure the same capability in OVN-Kubernetes:

    • Namespace isolation

    • Egress IP addresses

    • Egress network policies

    • Egress router pods

    • Multicast

  • If your cluster uses any part of the 100.64.0.0/16 IP address range, you cannot migrate to OVN-Kubernetes because it uses this IP address range internally.
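
To check the address blocks your cluster currently uses, you can inspect the cluster network configuration; a quick, read-only check, assuming the default cluster object name:

  $ oc get network.config/cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}{.spec.serviceNetwork}{"\n"}'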

The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN.

Namespace isolation

OVN-Kubernetes supports only the network policy isolation mode.

If your cluster uses OpenShift SDN configured in either the multitenant or subnet isolation modes, you cannot migrate to the OVN-Kubernetes network provider.
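
You can verify the isolation mode currently configured for OpenShift SDN before starting; a read-only sketch, noting that an empty result typically means the default NetworkPolicy mode is in effect:

  $ oc get network.operator.openshift.io cluster \
    -o jsonpath='{.spec.defaultNetwork.openshiftSDNConfig.mode}{"\n"}'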

Egress IP addresses

The differences in configuring an egress IP address between OVN-Kubernetes and OpenShift SDN are described in the following table:

Table 1. Differences in egress IP address configuration

OVN-Kubernetes:

  • Create an EgressIPs object

  • Add an annotation on a Node object

OpenShift SDN:

  • Patch a NetNamespace object

  • Patch a HostSubnet object

For more information on using egress IP addresses in OVN-Kubernetes, see "Configuring an egress IP address".
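
For illustration only, a minimal EgressIP object for OVN-Kubernetes might look like the following sketch; the name, IP address, and namespace label are placeholder values, not part of this procedure:

  apiVersion: k8s.ovn.org/v1
  kind: EgressIP
  metadata:
    name: egressip-sample        # placeholder name
  spec:
    egressIPs:
    - 192.168.126.10             # placeholder egress IP on the node subnet
    namespaceSelector:
      matchLabels:
        env: production          # placeholder label selecting target namespaces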

Egress network policies

The difference in configuring an egress network policy, also known as an egress firewall, between OVN-Kubernetes and OpenShift SDN is described in the following table:

Table 2. Differences in egress network policy configuration

OVN-Kubernetes:

  • Create an EgressFirewall object in a namespace

OpenShift SDN:

  • Create an EgressNetworkPolicy object in a namespace

For more information on using an egress firewall in OVN-Kubernetes, see "Configuring an egress firewall for a project".
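
As an illustration, a minimal EgressFirewall object might look like the following sketch; the CIDR values are placeholders. In OVN-Kubernetes the object must be named default:

  apiVersion: k8s.ovn.org/v1
  kind: EgressFirewall
  metadata:
    name: default
  spec:
    egress:
    - type: Allow                # allow traffic to one external range (placeholder)
      to:
        cidrSelector: 192.0.2.0/24
    - type: Deny                 # deny all other external traffic
      to:
        cidrSelector: 0.0.0.0/0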

Egress router pods

OVN-Kubernetes supports egress router pods in redirect mode. OVN-Kubernetes does not support egress router pods in HTTP proxy mode or DNS proxy mode.

When you deploy an egress router with the Cluster Network Operator, you cannot specify a node selector to control which node is used to host the egress router pod.
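
A redirect-mode egress router is defined with an EgressRouter custom resource; the following is a hedged sketch with placeholder addresses and a placeholder redirect rule, to be adapted to your network:

  apiVersion: network.operator.openshift.io/v1
  kind: EgressRouter
  metadata:
    name: egress-router-redirect
  spec:
    networkInterface:
      macvlan:
        mode: Bridge
    addresses:
    - ip: 192.168.12.99/24       # placeholder IP for the egress router
      gateway: 192.168.12.1      # placeholder gateway
    mode: Redirect
    redirect:
      redirectRules:
      - destinationIP: 10.0.0.99 # placeholder destination
        port: 80
        protocol: TCP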

Multicast

The difference between enabling multicast traffic on OVN-Kubernetes and OpenShift SDN is described in the following table:

Table 3. Differences in multicast configuration

OVN-Kubernetes:

  • Add an annotation on a Namespace object

OpenShift SDN:

  • Add an annotation on a NetNamespace object

For more information on using multicast in OVN-Kubernetes, see "Enabling multicast for a project".
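
For illustration, enabling multicast is a single annotation in both cases, applied to different objects; <namespace> is a placeholder:

  OVN-Kubernetes:

    $ oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true

  OpenShift SDN:

    $ oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true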

Network policies

OVN-Kubernetes fully supports the Kubernetes NetworkPolicy API in the networking.k8s.io/v1 API group. No changes are necessary in your network policies when migrating from OpenShift SDN.
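
For example, a deny-by-default policy such as the following sketch behaves identically under both network providers; the policy name is a placeholder:

  kind: NetworkPolicy
  apiVersion: networking.k8s.io/v1
  metadata:
    name: deny-by-default       # placeholder name
  spec:
    podSelector: {}             # select all pods in the namespace
    ingress: []                 # allow no ingress traffic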

How the migration process works

The following table summarizes the migration process, separating the steps that you initiate from the actions that the migration performs in response.

Table 4. Migrating to OVN-Kubernetes from OpenShift SDN

User-initiated step: Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes. Make sure the migration field is null before setting it to a value.

Migration activity:

  • The Cluster Network Operator (CNO) updates the status of the Network.config.openshift.io CR named cluster accordingly.

  • The Machine Config Operator (MCO) rolls out an update to the systemd configuration necessary for OVN-Kubernetes. By default, the MCO updates a single machine per pool at a time, so the total migration time increases with the size of the cluster.

User-initiated step: Update the networkType field of the Network.config.openshift.io CR.

Migration activity: The CNO performs the following actions:

  • Destroys the OpenShift SDN control plane pods.

  • Deploys the OVN-Kubernetes control plane pods.

  • Updates the Multus objects to reflect the new cluster network provider.

User-initiated step: Reboot each node in the cluster.

Migration activity: As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network.

If a rollback to OpenShift SDN is required, the following table describes the process.

Table 5. Performing a rollback to OpenShift SDN

User-initiated step: Suspend the MCO to ensure that it does not interrupt the migration.

Migration activity: The MCO stops.

User-initiated step: Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN. Make sure the migration field is null before setting it to a value.

Migration activity: The CNO updates the status of the Network.config.openshift.io CR named cluster accordingly.

User-initiated step: Update the networkType field.

Migration activity: The CNO performs the following actions:

  • Destroys the OVN-Kubernetes control plane pods.

  • Deploys the OpenShift SDN control plane pods.

  • Updates the Multus objects to reflect the new cluster network provider.

User-initiated step: Reboot each node in the cluster.

Migration activity: As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift SDN cluster network.

User-initiated step: Enable the MCO after all nodes in the cluster reboot.

Migration activity: The MCO rolls out an update to the systemd configuration necessary for OpenShift SDN. By default, the MCO updates a single machine per pool at a time, so the total migration time increases with the size of the cluster.
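
The first rollback step, suspending the MCO, can be performed by pausing the machine config pools; a sketch, assuming the default master and worker pools:

  $ oc patch MachineConfigPool master --type='merge' --patch '{ "spec": { "paused": true } }'
  $ oc patch MachineConfigPool worker --type='merge' --patch '{ "spec": { "paused": true } }'

Setting paused back to false on each pool re-enables the MCO at the end of the rollback.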

Migrating to the OVN-Kubernetes default CNI network provider

As a cluster administrator, you can change the default Container Network Interface (CNI) network provider for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster.

While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable.

Prerequisites
  • A cluster configured with the OpenShift SDN CNI cluster network provider in the network policy isolation mode.

  • Install the OpenShift CLI (oc).

  • Access to the cluster as a user with the cluster-admin role.

  • A recent backup of the etcd database is available.

  • A reboot can be triggered manually for each node.

  • The cluster is in a known good state, without any errors.

Procedure
  1. To back up the configuration for the cluster network, enter the following command:

    $ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml
  2. To prepare all the nodes for the migration, set the migration field on the Cluster Network Operator configuration object by entering the following command:

    $ oc patch Network.operator.openshift.io cluster --type='merge' \
      --patch '{ "spec": { "migration": {"networkType": "OVNKubernetes" } } }'

    This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment.

  3. Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements:

    • Maximum transmission unit (MTU)

    • Geneve (Generic Network Virtualization Encapsulation) overlay network port

    To change either of the previously noted settings, customize and enter the following command. If you do not need to change the default value, omit the key from the patch.

    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{
        "spec":{
          "defaultNetwork":{
            "ovnKubernetesConfig":{
              "mtu":<mtu>,
              "genevePort":<port>
        }}}}'
    mtu

    The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.

    port

    The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789.

    Example patch command to update mtu field
    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{
        "spec":{
          "defaultNetwork":{
            "ovnKubernetesConfig":{
              "mtu":1200
        }}}}'
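    Similarly, a sketch of a patch that changes only the Geneve port; the value 6081 shown here is the default, so substitute the port that your environment requires:

    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{
        "spec":{
          "defaultNetwork":{
            "ovnKubernetesConfig":{
              "genevePort":6081
        }}}}'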
  4. As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

    $ oc get mcp

    A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

    By default, the MCO updates one machine per pool at a time, so the total migration time increases with the size of the cluster.
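
    Instead of polling, you can block until every pool reports the Updated condition; a sketch, with an arbitrary placeholder timeout:

    $ oc wait mcp --all --for=condition=Updated --timeout=60m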

  5. Confirm the status of the new machine configuration on the hosts:

    1. To list the machine configuration state and the name of the applied machine configuration, enter the following command:

      $ oc describe node | egrep "hostname|machineconfig"
      Example output
      kubernetes.io/hostname=master-0
      machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
      machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
      machineconfiguration.openshift.io/reason:
      machineconfiguration.openshift.io/state: Done

      Verify that the following statements are true:

      • The value of the machineconfiguration.openshift.io/state field is Done.

      • The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.

    2. To confirm that the machine config is correct, enter the following command:

      $ oc get machineconfig <config_name> -o yaml | grep ExecStart

      where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

      The machine config must include the following update to the systemd configuration:

      ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
    3. If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors.

      1. To list the pods, enter the following command:

        $ oc get pod -n openshift-machine-config-operator
        Example output
        NAME                                         READY   STATUS    RESTARTS   AGE
        machine-config-controller-75f756f89d-sjp8b   1/1     Running   0          37m
        machine-config-daemon-5cf4b                  2/2     Running   0          43h
        machine-config-daemon-7wzcd                  2/2     Running   0          43h
        machine-config-daemon-fc946                  2/2     Running   0          43h
        machine-config-daemon-g2v28                  2/2     Running   0          43h
        machine-config-daemon-gcl4f                  2/2     Running   0          43h
        machine-config-daemon-l5tnv                  2/2     Running   0          43h
        machine-config-operator-79d9c55d5-hth92      1/1     Running   0          37m
        machine-config-server-bsc8h                  1/1     Running   0          43h
        machine-config-server-hklrm                  1/1     Running   0          43h
        machine-config-server-k9rtx                  1/1     Running   0          43h

        The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random five-character alphanumeric sequence.

      2. Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:

        $ oc logs <pod> -n openshift-machine-config-operator

        where <pod> is the name of a machine config daemon pod.

      3. Resolve any errors in the logs shown by the output from the previous command.

  6. To start the migration, configure the OVN-Kubernetes cluster network provider by using one of the following commands:

    • To specify the network provider without changing the cluster network IP address block, enter the following command:

      $ oc patch Network.config.openshift.io cluster \
        --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'
    • To specify a different cluster network IP address block, enter the following command:

      $ oc patch Network.config.openshift.io cluster \
        --type='merge' --patch '{
          "spec": {
            "clusterNetwork": [
              {
                "cidr": "<cidr>",
                "hostPrefix": <prefix>
              }
            ],
            "networkType": "OVNKubernetes"
          }
        }'

      where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.

      You cannot change the service network address block during the migration.
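
      For illustration, the same patch with example values substituted for the placeholders; the CIDR shown is arbitrary, so choose a block that fits your environment and does not overlap 100.64.0.0/16:

      $ oc patch Network.config.openshift.io cluster \
        --type='merge' --patch '{
          "spec": {
            "clusterNetwork": [
              {
                "cidr": "10.132.0.0/14",
                "hostPrefix": 23
              }
            ],
            "networkType": "OVNKubernetes"
          }
        }'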

  7. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps:

    $ oc -n openshift-multus rollout status daemonset/multus

    The names of the Multus pods are in the form multus-<xxxxx>, where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart.

    Example output
    Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated...
    ...
    Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available...
    daemon set "multus" successfully rolled out
  8. To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password.

    #!/bin/bash
    
    # Reboot every node in the cluster by connecting to its internal IP address.
    # Assumes passwordless ssh access as the core user and passwordless sudo.
    for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
    do
       echo "reboot node $ip"
       ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
    done

    If ssh access is not available, you might be able to reboot each node through the management portal for your infrastructure provider.

  9. Confirm that the migration succeeded:

    1. To confirm that the CNI cluster network provider is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes.

      $ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
    2. To confirm that the cluster nodes are in the Ready state, enter the following command:

      $ oc get nodes
    3. To confirm that your pods are not in an error state, enter the following command:

      $ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'

      If pods on a node are in an error state, reboot that node.

    4. To confirm that no cluster Operators are in an abnormal state, enter the following command:

      $ oc get co

      The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information.
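
      One way to surface only the unhealthy Operators, assuming the default oc get co column order of NAME, VERSION, AVAILABLE, PROGRESSING, DEGRADED:

      $ oc get co --no-headers | awk '$3 != "True" || $4 != "False" || $5 != "False"'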

  10. Complete the following steps only if the migration succeeds and your cluster is in a good state:

    1. To remove the migration configuration from the CNO configuration object, enter the following command:

      $ oc patch Network.operator.openshift.io cluster --type='merge' \
        --patch '{ "spec": { "migration": null } }'
    2. To remove custom configuration for the OpenShift SDN network provider, enter the following command:

      $ oc patch Network.operator.openshift.io cluster --type='merge' \
        --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }'
    3. To remove the OpenShift SDN network provider namespace, enter the following command:

      $ oc delete namespace openshift-sdn