Migrating from Kuryr to the OVN-Kubernetes network plugin

As the administrator of a cluster that runs on OpenStack, you can migrate to the OVN-Kubernetes network plugin from the Kuryr SDN network plugin.

To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin.

Migration to the OVN-Kubernetes network provider

You can manually migrate a cluster that runs on OpenStack to the OVN-Kubernetes network provider.

Migration to OVN-Kubernetes is a one-way process. During migration, your cluster will be unreachable for a brief time.

Considerations when migrating to the OVN-Kubernetes network provider

Kubernetes namespaces are kept by Kuryr in separate OpenStack networking service (Neutron) subnets. Those subnets and the IP addresses that are assigned to individual pods are not preserved during the migration.

How the migration process works

The following table summarizes the migration process by relating the steps that you perform with the actions that your cluster and Operators take.

Table 1. The Kuryr to OVN-Kubernetes migration process
User-initiated step: Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OVNKubernetes. Verify that the migration field is null before you set it to another value (see the example command after this table).

Migration activity:

  • Cluster Network Operator (CNO): Updates the status of the Network.config.openshift.io CR named cluster accordingly.

  • Machine Config Operator (MCO): Deploys an update to the systemd configuration that is required by OVN-Kubernetes. By default, the MCO updates a single machine per pool at a time. As a result, large clusters have longer migration times.

User-initiated step: Update the networkType field of the Network.config.openshift.io CR.

Migration activity: The CNO performs the following actions:

  • Destroys the Kuryr control plane pods: the Kuryr CNI pods and the Kuryr controller.

  • Deploys the OVN-Kubernetes control plane pods.

  • Updates the Multus objects to reflect the new network plugin.

User-initiated step: Reboot each node in the cluster.

Migration activity: As nodes reboot, the cluster assigns IP addresses to pods on the OVN-Kubernetes cluster network.

User-initiated step: Clean up the remaining resources that Kuryr controlled.

Migration activity: The cluster retains OpenStack resources that must be freed manually, as well as OKD resources that require further configuration.
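For example, you can confirm that the migration field is currently null by running a command similar to the following. Empty output indicates that the field is not set:

$ oc get Network.operator.openshift.io cluster -o jsonpath='{.spec.migration}{"\n"}'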

Migrating to the OVN-Kubernetes network plugin

As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes.

During the migration, you must reboot every node in your cluster. Your cluster is unavailable and workloads might be interrupted. Perform the migration only if an interruption in service is acceptable.

Prerequisites
  • You installed the OpenShift CLI (oc).

  • You have access to the cluster as a user with the cluster-admin role.

  • You have a recent backup of the etcd database.

  • You can manually reboot each node.

  • The cluster you plan to migrate is in a known good state, without any errors.

  • You installed the Python interpreter.

  • You installed the openstacksdk Python package.

  • You installed the openstack CLI tool.

  • You have access to the underlying OpenStack cloud.
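For example, you can spot-check the Python, openstacksdk, and openstack CLI prerequisites, and verify that you can reach the OpenStack cloud, with commands similar to the following. This is an informal check rather than part of the documented procedure:

  $ python3 --version
  $ python3 -m pip show openstacksdk
  $ openstack token issue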

Procedure
  1. Back up the configuration for the cluster network by running the following command:

    $ oc get Network.config.openshift.io cluster -o yaml > cluster-kuryr.yaml
  2. To set the CLUSTERID variable, run the following command:

    $ CLUSTERID=$(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')
  3. To prepare all the nodes for the migration, set the migration field on the Cluster Network Operator configuration object by running the following command:

    $ oc patch Network.operator.openshift.io cluster --type=merge \
        --patch '{"spec": {"migration": {"networkType": "OVNKubernetes"}}}'

    This step does not deploy OVN-Kubernetes immediately. Specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster. This prepares the cluster for the OVN-Kubernetes deployment.

  4. Optional: Customize the following settings for OVN-Kubernetes for your network infrastructure requirements:

    • Maximum transmission unit (MTU)

    • Geneve (Generic Network Virtualization Encapsulation) overlay network port

    • OVN-Kubernetes IPv4 internal subnet

    • OVN-Kubernetes IPv6 internal subnet

    To customize these settings, enter and customize the following command:

    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{
        "spec":{
          "defaultNetwork":{
            "ovnKubernetesConfig":{
              "mtu":<mtu>,
              "genevePort":<port>,
              "v4InternalSubnet":"<ipv4_subnet>",
              "v6InternalSubnet":"<ipv6_subnet>"
        }}}}'

    where:

    mtu

    Specifies the MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value.

    port

    Specifies the UDP port for the Geneve overlay network. If a value is not specified, the default is 6081. The port cannot be the same as the VXLAN port that is used by Kuryr. The default value for the VXLAN port is 4789.

    ipv4_subnet

    Specifies an IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16.

    ipv6_subnet

    Specifies an IPv6 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/48.

    If you do not need to change the default value, omit the key from the patch.

    Example patch command to update mtu field
    $ oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{
        "spec":{
          "defaultNetwork":{
            "ovnKubernetesConfig":{
              "mtu":1200
        }}}}'
  5. Check the machine config pool status by entering the following command:

    $ oc get mcp

    While the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated before continuing.

    A successfully updated node has the following status: UPDATED=true, UPDATING=false, DEGRADED=false.

    By default, the MCO updates one machine per pool at a time. Large clusters take more time to migrate than small clusters.
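    When every machine config pool has finished updating, the output looks similar to the following illustrative example; your pool and rendered configuration names will differ:

    Example output
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b   True      False      False      3              3                   3                     0                      43h
    worker   rendered-worker-<hash>                             True      False      False      3              3                   3                     0                      43h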

  6. Confirm the status of the new machine configuration on the hosts:

    1. To list the machine configuration state and the name of the applied machine configuration, enter the following command:

      $ oc describe node | egrep "hostname|machineconfig"
      Example output
      kubernetes.io/hostname=master-0
      machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
      machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b
      machineconfiguration.openshift.io/reason:
      machineconfiguration.openshift.io/state: Done
    2. Review the output from the previous step. The following statements must be true:

      • The value of machineconfiguration.openshift.io/state field is Done.

      • The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field.

    3. To confirm that the machine config is correct, enter the following command:

      $ oc get machineconfig <config_name> -o yaml | grep ExecStart

      where:

      <config_name>

      Specifies the name of the machine config from the machineconfiguration.openshift.io/currentConfig field.

      The machine config must include the following update to the systemd configuration:

      Example output
      ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
    4. If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors:

      1. To list the pods, enter the following command:

        $ oc get pod -n openshift-machine-config-operator
        Example output
        NAME                                         READY   STATUS    RESTARTS   AGE
        machine-config-controller-75f756f89d-sjp8b   1/1     Running   0          37m
        machine-config-daemon-5cf4b                  2/2     Running   0          43h
        machine-config-daemon-7wzcd                  2/2     Running   0          43h
        machine-config-daemon-fc946                  2/2     Running   0          43h
        machine-config-daemon-g2v28                  2/2     Running   0          43h
        machine-config-daemon-gcl4f                  2/2     Running   0          43h
        machine-config-daemon-l5tnv                  2/2     Running   0          43h
        machine-config-operator-79d9c55d5-hth92      1/1     Running   0          37m
        machine-config-server-bsc8h                  1/1     Running   0          43h
        machine-config-server-hklrm                  1/1     Running   0          43h
        machine-config-server-k9rtx                  1/1     Running   0          43h

        The names for the config daemon pods are in the following format: machine-config-daemon-<seq>. The <seq> value is a random, five-character alphanumeric sequence.

      2. Display the pod log for the first machine config daemon pod shown in the previous output by entering the following command:

        $ oc logs <pod> -n openshift-machine-config-operator

        where:

        <pod>

        Specifies the name of a machine config daemon pod.

      3. Resolve any errors in the logs shown by the output from the previous command.

  7. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:

    • To specify the network provider without changing the cluster network IP address block, enter the following command:

      $ oc patch Network.config.openshift.io cluster --type=merge \
          --patch '{"spec": {"networkType": "OVNKubernetes"}}'
    • To specify a different cluster network IP address block, enter the following command:

      $ oc patch Network.config.openshift.io cluster \
        --type='merge' --patch '{
          "spec": {
            "clusterNetwork": [
              {
                "cidr": "<cidr>",
                "hostPrefix": "<prefix>"
              }
            ],
            "networkType": "OVNKubernetes"
          }
        }'

      where:

      <cidr>

      Specifies a CIDR block.

      <prefix>

      Specifies a slice of the CIDR block that is apportioned to each node in your cluster.

      You cannot change the service network address block during the migration.

      You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally.
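
      For example, to keep the installation-default cluster network of 10.128.0.0/14 with a host prefix of 23, you could enter a command similar to the following. The values shown are only an illustration; substitute the values for your cluster:

      $ oc patch Network.config.openshift.io cluster \
        --type='merge' --patch '{
          "spec": {
            "clusterNetwork": [
              {
                "cidr": "10.128.0.0/14",
                "hostPrefix": 23
              }
            ],
            "networkType": "OVNKubernetes"
          }
        }'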

  8. To complete the migration, reboot each node in your cluster. For example, you can use a bash script similar to the following example. The script assumes that you can connect to each host by using ssh and that you have configured sudo to not prompt for a password:

    #!/bin/bash
    
    for ip in $(oc get nodes  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
    do
       echo "reboot node $ip"
       ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
    done

    If SSH access is not available, you can use the openstack command:

    $ for name in $(openstack server list --name "${CLUSTERID}*" -f value -c Name); do openstack server reboot "${name}"; done

    Alternatively, you might be able to reboot each node through the management portal for your infrastructure provider. Otherwise, contact the appropriate authority to gain access to the virtual machines, either through SSH or through the management portal and the OpenStack client.
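
    While the nodes reboot and rejoin the cluster, you can watch their status before you begin verification, for example:

    $ oc get nodes -w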

Verification
  1. Confirm that the migration succeeded, and then remove the migration resources:

    1. To confirm that the network plugin is OVN-Kubernetes, enter the following command.

      $ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'

      The value of status.networkType must be OVNKubernetes.

    2. To confirm that the cluster nodes are in the Ready state, enter the following command:

      $ oc get nodes
    3. To confirm that your pods are not in an error state, enter the following command:

      $ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'

      If pods on a node are in an error state, reboot that node.

    4. To confirm that no cluster Operators are in an abnormal state, enter the following command:

      $ oc get co

      The status of every cluster Operator must be the following: AVAILABLE="True", PROGRESSING="False", DEGRADED="False". If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information.
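
      For example, to review the conditions that a problematic cluster Operator reports, you can run a command similar to the following, replacing <operator_name> with the name of the Operator:

      $ oc describe co <operator_name>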

      Do not proceed if any of the previous verification steps indicate errors. You might encounter pods that are in a Terminating state due to finalizers that are removed during cleanup. This is not an error indication.

  2. If the migration completed and your cluster is in a good state, remove the migration configuration from the CNO configuration object by entering the following command:

    $ oc patch Network.operator.openshift.io cluster --type=merge \
        --patch '{"spec": {"migration": null}}'

Cleaning up resources after migration

After migration from the Kuryr network plugin to the OVN-Kubernetes network plugin, you must clean up the resources that Kuryr created previously.

The cleanup process relies on a Python virtual environment to ensure that the package versions that you use support tags for Octavia objects. You do not need a virtual environment if you are certain that your environment uses at minimum:

  • The openstacksdk Python package version 0.54.0

  • The python-openstackclient Python package version 5.5.0

  • The python-octaviaclient Python package version 2.3.0

If you decide to use these particular versions, be sure to install python-neutronclient prior to version 9.0.0, because version 9.0.0 and later prevent you from accessing trunks.
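
For example, you can check which versions are already installed on your system with a command similar to the following:

  $ python3 -m pip show openstacksdk python-openstackclient python-octaviaclient python-neutronclient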

Prerequisites
  • You installed the OKD CLI (oc).

  • You installed a Python interpreter.

  • You installed the openstacksdk Python package.

  • You installed the openstack CLI.

  • You have access to the underlying OpenStack cloud.

  • You can access the cluster as a user with the cluster-admin role.

Procedure
  1. Create a clean-up Python virtual environment:

    1. Create a temporary directory for your environment. For example:

      $ python3 -m venv /tmp/venv

      The virtual environment in the /tmp/venv directory is used in all of the cleanup examples.

    2. Enter the virtual environment. For example:

      $ source /tmp/venv/bin/activate
    3. Upgrade the pip command in the virtual environment by running the following command:

      (venv) $ pip install --upgrade pip
    4. Install the required Python packages by running the following command:

      (venv) $ pip install openstacksdk==0.54.0 python-openstackclient==5.5.0 python-octaviaclient==2.3.0 'python-neutronclient<9.0.0'
  2. In your terminal, set variables to cluster and Kuryr identifiers by running the following commands:

    1. Set the cluster ID:

      (venv) $ CLUSTERID=$(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')
    2. Set the cluster tag:

      (venv) $ CLUSTERTAG="openshiftClusterID=${CLUSTERID}"
    3. Set the router ID:

      (venv) $ ROUTERID=$(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.routerId"|uniq)
  3. Create a Bash function that removes finalizers from specified resources by running the following command:

    (venv) $ function REMFIN {
        local resource=$1
        local finalizer=$2
        for res in $(oc get "${resource}" -A --template='{{range $i,$p := .items}}{{ $p.metadata.name }}|{{ $p.metadata.namespace }}{{"\n"}}{{end}}'); do
            name=${res%%|*}
            ns=${res##*|}
            yaml=$(oc get -n "${ns}" "${resource}" "${name}" -o yaml)
            if echo "${yaml}" | grep -q "${finalizer}"; then
                echo "${yaml}" | grep -v  "${finalizer}" | oc replace -n "${ns}" "${resource}" "${name}" -f -
            fi
        done
    }

    The function takes two parameters: the first parameter is the name of the resource, and the second parameter is the finalizer to remove. For each matching resource, the function retrieves its definition, removes the line that contains the specified finalizer, and replaces the object in the cluster with the modified definition.

  4. To remove Kuryr finalizers from services, enter the following command:

    (venv) $ REMFIN services kuryr.openstack.org/service-finalizer
  5. To remove the Kuryr service-subnet-gateway-ip service, enter the following command:

    (venv) $ if oc get -n openshift-kuryr service service-subnet-gateway-ip &>/dev/null; then
        oc -n openshift-kuryr delete service service-subnet-gateway-ip
    fi
  6. To remove all tagged OpenStack load balancers from Octavia, enter the following command:

    (venv) $ for lb in $(openstack loadbalancer list --tags "${CLUSTERTAG}" -f value -c id); do
        openstack loadbalancer delete --cascade "${lb}"
    done
  7. To remove Kuryr finalizers from all KuryrLoadBalancer CRs, enter the following command:

    (venv) $ REMFIN kuryrloadbalancers.openstack.org kuryr.openstack.org/kuryrloadbalancer-finalizers
  8. To remove the openshift-kuryr namespace, enter the following command:

    (venv) $ oc delete namespace openshift-kuryr
  9. To remove the Kuryr service subnet from the router, enter the following command:

    (venv) $ openstack router remove subnet "${ROUTERID}" "${CLUSTERID}-kuryr-service-subnet"
  10. To remove the Kuryr service network, enter the following command:

    (venv) $ openstack network delete "${CLUSTERID}-kuryr-service-network"
  11. To remove Kuryr finalizers from all pods, enter the following command:

    (venv) $ REMFIN pods kuryr.openstack.org/pod-finalizer
  12. To remove Kuryr finalizers from all KuryrPort CRs, enter the following command:

    (venv) $ REMFIN kuryrports.openstack.org kuryr.openstack.org/kuryrport-finalizer

    This command deletes the KuryrPort CRs.

  13. To remove Kuryr finalizers from network policies, enter the following command:

    (venv) $ REMFIN networkpolicy kuryr.openstack.org/networkpolicy-finalizer
  14. To remove Kuryr finalizers from remaining network policies, enter the following command:

    (venv) $ REMFIN kuryrnetworkpolicies.openstack.org kuryr.openstack.org/networkpolicy-finalizer
  15. To remove subports that Kuryr created from trunks, enter the following command:

    (venv) $ mapfile trunks < <(python -c "import openstack; n = openstack.connect().network; print('\n'.join([x.id for x in n.trunks(any_tags='$CLUSTERTAG')]))") && \
    i=0 && \
    for trunk in "${trunks[@]}"; do
        trunk=$(echo "$trunk"|tr -d '\n')
        i=$((i+1))
        echo "Processing trunk $trunk, ${i}/${#trunks[@]}."
        subports=()
        for subport in $(python -c "import openstack; n = openstack.connect().network; print(' '.join([x['port_id'] for x in n.get_trunk('$trunk').sub_ports if '$CLUSTERTAG' in n.get_port(x['port_id']).tags]))"); do
            subports+=("$subport");
        done
        args=()
        for sub in "${subports[@]}" ; do
            args+=("--subport $sub")
        done
        if [ ${#args[@]} -gt 0 ]; then
            openstack network trunk unset ${args[*]} "${trunk}"
        fi
    done
  16. To retrieve all networks and subnets from KuryrNetwork CRs and remove the ports, the router interfaces, and the networks themselves, enter the following command:

    (venv) $ mapfile -t kuryrnetworks < <(oc get kuryrnetwork -A --template='{{range $i,$p := .items}}{{ $p.status.netId }}|{{ $p.status.subnetId }}{{"\n"}}{{end}}') && \
    i=0 && \
    for kn in "${kuryrnetworks[@]}"; do
        i=$((i+1))
        netID=${kn%%|*}
        subnetID=${kn##*|}
        echo "Processing network $netID, ${i}/${#kuryrnetworks[@]}"
        # Remove all ports from the network.
        for port in $(python -c "import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='$netID') if x.device_owner != 'network:router_interface']))"); do
            ( openstack port delete "${port}" ) &
    
            # Only allow 20 jobs in parallel.
            if [[ $(jobs -r -p | wc -l) -ge 20 ]]; then
                wait -n
            fi
        done
        wait
    
        # Remove the subnet from the router.
        openstack router remove subnet "${ROUTERID}" "${subnetID}"
    
        # Remove the network.
        openstack network delete "${netID}"
    done
  17. To remove the Kuryr security group, enter the following command:

    (venv) $ openstack security group delete "${CLUSTERID}-kuryr-pods-security-group"
  18. To remove all tagged subnet pools, enter the following command:

    (venv) $ for subnetpool in $(openstack subnet pool list --tags "${CLUSTERTAG}" -f value -c ID); do
        openstack subnet pool delete "${subnetpool}"
    done
  19. To check that all of the networks based on KuryrNetwork CRs were removed, enter the following command:

    (venv) $ networks=$(oc get kuryrnetwork -A --no-headers -o custom-columns=":status.netId") && \
    for existingNet in $(openstack network list --tags "${CLUSTERTAG}" -f value -c ID); do
        if [[ $networks =~ $existingNet ]]; then
            echo "Network still exists: $existingNet"
        fi
    done

    If the command returns any existing networks, investigate and remove them before you continue.

  20. To remove security groups that are related to network policy, enter the following command:

    (venv) $ for sgid in $(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do
        openstack security group delete "${sgid}"
    done
  21. To remove finalizers from KuryrNetwork CRs, enter the following command:

    (venv) $ REMFIN kuryrnetworks.openstack.org kuryrnetwork.finalizers.kuryr.openstack.org
  22. To remove the Kuryr router, enter the following command:

    (venv) $ if python3 -c "import sys; import openstack; n = openstack.connect().network; r = n.get_router('$ROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)"; then
        openstack router delete "${ROUTERID}"
    fi
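
When the cleanup is complete, you can leave and delete the temporary virtual environment, for example:

  (venv) $ deactivate
  $ rm -rf /tmp/venv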