About the OVN-Kubernetes network plugin

The OKD cluster uses a virtualized network for pod and service networks.

Part of Red Hat OpenShift Networking, the OVN-Kubernetes network plugin is the default network provider for OKD. OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.

OVN-Kubernetes is the default networking solution for OKD and single-node OpenShift deployments.

OVN-Kubernetes, which arose from the OVS project, uses many of the same constructs, such as OpenFlow rules, to determine how packets travel through the network. For more information, see the Open Virtual Network website.

OVN-Kubernetes is a series of daemons for OVS that translate virtual network configurations into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers, providing a means for remotely controlling the flow of network traffic on a network device. This means that network administrators can configure, manage, and monitor the flow of network traffic.

OVN-Kubernetes provides advanced functionality that is not available with OpenFlow alone. OVN supports distributed virtual routing, distributed logical switches, access control, Dynamic Host Configuration Protocol (DHCP), and DNS. OVN implements distributed virtual routing with logical flows that translate into OpenFlow rules. For example, if a pod sends a DHCP request to the DHCP server on the network, a logical flow rule matches the request so that OVN-Kubernetes can handle the packet, and the server can respond with the gateway, DNS server, IP address, and other information.

OVN-Kubernetes runs a daemon on each node. There are daemon sets for the databases and for the OVN controller that run on every node. The OVN controller programs the Open vSwitch daemon on the nodes to support the following network provider features:

  • Egress IPs

  • Firewalls

  • Hardware offloading

  • Hybrid networking

  • Internet Protocol Security (IPsec) encryption

  • IPv6

  • Multicast

  • Network policy and network policy logs

  • Routers

OVN-Kubernetes purpose

The OVN-Kubernetes network plugin is an open-source, fully-featured Kubernetes CNI plugin that uses Open Virtual Network (OVN) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution. The OVN-Kubernetes network plugin uses the following technologies:

  • OVN to manage network traffic flows.

  • Kubernetes network policy support and logs, including ingress and egress rules (see the example after this list).

  • The Generic Network Virtualization Encapsulation (Geneve) protocol, rather than Virtual Extensible LAN (VXLAN), to create an overlay network between nodes.
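
As an illustration of the network policy support noted in the previous list, OVN-Kubernetes enforces standard Kubernetes NetworkPolicy objects. The following minimal sketch, in which the name, namespace, and selectors are illustrative, restricts ingress so that pods in a namespace accept traffic only from other pods in the same namespace:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace    # illustrative name
      namespace: example-namespace  # illustrative namespace
    spec:
      podSelector: {}               # applies to all pods in the namespace
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector: {}           # allow traffic only from pods in this namespace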

The OVN-Kubernetes network plugin supports the following capabilities:

  • Hybrid clusters that can run both Linux and Microsoft Windows workloads. This environment is known as hybrid networking.

  • Offloading of network data processing from the host central processing unit (CPU) to compatible network cards and data processing units (DPUs). This is known as hardware offloading.

  • IPv4-primary dual-stack networking on bare-metal, VMware vSphere, IBM Power®, IBM Z®, and OpenStack platforms.

  • IPv6 single-stack networking on a bare-metal platform.

  • IPv6-primary dual-stack networking for a cluster running on a bare-metal, a VMware vSphere, or an OpenStack platform.

  • Egress firewall devices and egress IP addresses (see the sketch after this list).

  • Egress router devices that operate in redirect mode.

  • IPsec encryption of intracluster communications.
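
As a sketch of the egress IP capability mentioned in this list, OVN-Kubernetes provides an EgressIP custom resource in the k8s.ovn.org/v1 API group. The name, address, and label in the following example are illustrative:

    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: egressip-example   # illustrative name
    spec:
      egressIPs:
      - 192.168.127.10         # illustrative address on a node subnet
      namespaceSelector:       # pods in matching namespaces use the egress IP
        matchLabels:
          env: production      # illustrative label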

Red Hat does not support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Configuring the primary network interface, including using the NMState Operator to configure bonding for the interface.

  • Configuring a sub-interface or additional network interface on a network device that is used by Open vSwitch (OVS) or by the OVN-Kubernetes br-ex bridge network.

  • Creating additional virtual local area networks (VLANs) on the primary network interface.

  • Using the primary network interface that you created for a node during cluster installation, such as eth0 or bond0, to create additional secondary networks.

Red Hat does support the following postinstallation configurations that use the OVN-Kubernetes network plugin:

  • Creating additional VLANs from the base physical interface, such as eth0.100, where you configured the primary network interface as a VLAN for a node during cluster installation. This works because the Open vSwitch (OVS) bridge attaches to the initial VLAN sub-interface, such as eth0.100, leaving the base physical interface available for new configurations.

  • Creating an additional OVN secondary network with a localnet topology. This requires that you define the secondary network in a NodeNetworkConfigurationPolicy (NNCP) object, as sketched after this list. After you create the network, pods or virtual machines (VMs) can attach to the network. These secondary networks give a dedicated connection to the physical network, which might or might not use VLAN tagging. You cannot access these networks from the host network of a node unless the host has the required network settings.
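
A minimal sketch of such an NNCP object follows, assuming the NMState Operator is installed. The policy name and localnet network name are illustrative; br-ex is the default OVN-Kubernetes bridge:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: localnet-mapping                 # illustrative name
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # apply to worker nodes
      desiredState:
        ovn:
          bridge-mappings:
          - localnet: localnet-network       # illustrative localnet network name
            bridge: br-ex                    # map the localnet to the br-ex bridge
            state: present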

OVN-Kubernetes IPv6 and dual-stack limitations

The OVN-Kubernetes network plugin has the following limitations:

  • For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

    I1006 16:09:50.985852   60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1
    I1006 16:09:50.985923   60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4
    F1006 16:09:50.985939   60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4

    The only resolution is to reconfigure the host networking so that both IP families use the same network interface for the default gateway.

  • For clusters configured for dual-stack networking, both the IPv4 and IPv6 routing tables must contain the default gateway.

    If this requirement is not met, pods on the host in the ovnkube-node daemon set enter the CrashLoopBackOff state.

    If you display a pod with a command such as oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-node -o yaml, the status field has more than one message about the default gateway, as shown in the following output:

    I0512 19:07:17.589083  108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1
    F0512 19:07:17.589141  108432 ovnkube.go:133] failed to get default gateway interface

    The only resolution is to reconfigure the host networking so that both IP families contain the default gateway.

  • If you set the ipv6.disable parameter to 1 in the kernelArguments section of the MachineConfig custom resource (CR) for your cluster, OVN-Kubernetes pods enter a CrashLoopBackOff state. Additionally, updating your cluster to a later version of OKD fails because the Network Operator remains in a Degraded state. Red Hat does not support disabling IPv6 addresses for your cluster, so do not set the ipv6.disable parameter to 1. The following example shows the unsupported setting.
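
For reference, the unsupported setting corresponds to a kernelArguments entry such as the following in a MachineConfig CR. It is shown here only so that you can recognize and avoid it; the name and role label are illustrative:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker  # illustrative role
      name: 99-worker-disable-ipv6                      # illustrative name
    spec:
      kernelArguments:
      - ipv6.disable=1   # unsupported: causes OVN-Kubernetes pods to crash loop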

Session affinity

Session affinity is a feature that applies to Kubernetes service objects. You can use session affinity if you want to ensure that each time you connect to a <service_VIP>:<Port>, the traffic is always load balanced to the same back end. For more information, including how to set session affinity based on a client’s IP address, see Session affinity.

Stickiness timeout for session affinity

The OVN-Kubernetes network plugin for OKD calculates the stickiness timeout for a session from a client based on the last packet. For example, if you run a curl command 10 times, the sticky session timer starts from the tenth packet, not the first. As a result, if the client continuously contacts the service, the session never times out. The timeout starts when the service has not received a packet for the amount of time set by the timeoutSeconds parameter.
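
The following minimal Service sketch shows where the timeoutSeconds parameter is set. The name, selector, and ports are illustrative; ClientIP affinity and the 10800-second default are standard Kubernetes behavior:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service       # illustrative name
    spec:
      selector:
        app: example              # illustrative selector
      ports:
      - port: 80                  # service port
        targetPort: 8080          # illustrative container port
      sessionAffinity: ClientIP   # pin each client IP to one back end
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10800   # stickiness timeout; default is 10800 (3 hours)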