
Overview of user-defined networks

To secure and improve network segmentation and isolation, cluster administrators can create primary or secondary networks that span namespaces at the cluster level using the ClusterUserDefinedNetwork custom resource (CR) while a developer can define secondary networks at the namespace level using the UserDefinedNetwork CR.

Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin for OKD supported only a layer 3 topology on the primary, or main, network. Following Kubernetes design principles, all pods are attached to the main network, all pods communicate with each other by using their IP addresses, and inter-pod traffic is restricted according to network policy.

While the Kubernetes design is useful for simple deployments, this layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments.

UDN improves the flexibility and segmentation capabilities of the default layer 3 topology for a Kubernetes pod network by enabling custom layer 2 and layer 3 network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.

The following sections describe the benefits and limitations of user-defined networks, best practices when creating a ClusterUserDefinedNetwork or UserDefinedNetwork CR, how to create these CRs, and additional configuration details that might be relevant to your deployment.

Nodes that use cgroupv1 Linux Control Groups (cgroup) must be reconfigured from cgroupv1 to cgroupv2 before creating a user-defined network. For more information, see Configuring Linux cgroup.
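
For example, one way to check which cgroup version a node uses is to inspect the file system type that is mounted at /sys/fs/cgroup. The following command is a sketch that assumes a node named <node_name> and debug access to it; cgroup2fs indicates cgroupv2, while tmpfs indicates cgroupv1:

$ oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup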

Benefits of a user-defined network

User-defined networks enable tenant isolation by providing each namespace with its own isolated primary network, reducing cross-tenant traffic risks and simplifying network management by eliminating the need for complex network policies.

User-defined networks offer the following benefits:

  1. Enhanced network isolation for security

    • Tenant isolation: Namespaces can have their own isolated primary network, similar to how tenants are isolated in OpenStack. This improves security by reducing the risk of cross-tenant traffic.

  2. Network flexibility

    • Layer 2 and layer 3 support: Cluster administrators can configure primary networks as layer 2 or layer 3 network types.

  3. Simplified network management

    • Reduced network configuration complexity: With user-defined networks, the need for complex network policies is eliminated because isolation can be achieved by grouping workloads in different networks.

  4. Advanced capabilities

    • Consistent and selectable IP addressing: Users can specify and reuse IP subnets across different namespaces and clusters, providing a consistent networking environment.

    • Support for multiple networks: The user-defined networking feature allows administrators to connect multiple namespaces to a single network, or to create distinct networks for different sets of namespaces.

  5. Simplification of application migration from OpenStack

    • Network parity: With user-defined networking, the migration of applications from OpenStack to OKD is simplified by providing similar network isolation and configuration options.

Developers and administrators can create a namespace-scoped user-defined network by using the UserDefinedNetwork custom resource. An overview of the process is as follows, with a condensed example after the list:

  1. An administrator creates a namespace for a user-defined network with the k8s.ovn.org/primary-user-defined-network label.

  2. The UserDefinedNetwork CR is created by either the cluster administrator or the user.

  3. The user creates pods in the namespace.
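
The following commands are a condensed sketch of these steps, using hypothetical names (udn-example, udn-example-network, example-pod) and a Layer2 primary network; the detailed procedures later in this document cover each variation:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: udn-example                                # hypothetical namespace name
  labels:
    k8s.ovn.org/primary-user-defined-network: ""   # must be added when the namespace is created
---
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-example-network                        # hypothetical network name
  namespace: udn-example
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
      - "10.20.0.0/24"                             # example subnet for pod IP allocation
EOF

$ oc wait --for=condition=NetworkCreated userdefinednetwork/udn-example-network -n udn-example --timeout=60s

$ oc run example-pod -n udn-example --image=registry.access.redhat.com/ubi9/ubi -- sleep infinity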

Limitations of a user-defined network

To deploy user-defined networks (UDN) successfully, you must consider their limitations, including DNS resolution behavior, restricted access to default network services such as the image registry, network policy constraints between isolated networks, and the requirement to create namespaces and networks before pods.

Consider the following limitations before implementing a UDN.

  • DNS limitations:

    • DNS lookups for pods resolve to the pod’s IP address on the cluster default network. Even if a pod is part of a user-defined network, DNS lookups will not resolve to the pod’s IP address on that user-defined network. However, DNS lookups for services and external entities will function as expected.

    • When a pod is assigned to a primary UDN, it can access the Kubernetes API (KAPI) and DNS services on the cluster’s default network.

  • Initial network assignment: You must create the namespace and network before creating pods. Assigning a namespace with pods to a new network or creating a UDN in an existing namespace will not be accepted by OVN-Kubernetes.

  • Health check limitations: Kubelet health checks are performed by the cluster default network, which does not confirm the network connectivity of the primary interface on the pod. Consequently, scenarios where a pod appears healthy by the default network, but has broken connectivity on the primary interface, are possible with user-defined networks.

  • Network policy limitations: Network policies that enable traffic between namespaces connected to different user-defined primary networks are not effective. These traffic policies do not take effect because there is no connectivity between these isolated networks.

  • Creation and modification limitation: The ClusterUserDefinedNetwork CR and the UserDefinedNetwork CR cannot be modified after being created.

  • Default network service access: A user-defined network pod is isolated from the default network, which means that most default network services are inaccessible. For example, a user-defined network pod cannot currently access the OKD image registry. Because of this limitation, source-to-image builds do not work in a user-defined network namespace. Additionally, other functions do not work, including functions to create applications based on the source code in a Git repository, such as the oc new-app command, and functions to create applications from an OKD template that use source-to-image builds. This limitation might also affect other openshift-*.svc services.

  • Connectivity limitation: NodePort services on user-defined networks are not guaranteed isolation. For example, NodePort traffic from a pod to a service on the same node is not accessible, whereas traffic from a pod on a different node succeeds.

  • Unclear error message for IP address exhaustion: When the subnet of a user-defined network runs out of available IP addresses, new pods fail to start. When this occurs, the following error is returned: Warning: Failed to create pod sandbox. This error message does not clearly specify that IP depletion is the cause. To confirm the issue, you can check the Events page in the pod’s namespace on the OKD web console, where an explicit message about subnet exhaustion is reported, or review the namespace events from the CLI, as shown in the example after this list.

  • Layer2 egress IP limitations:

    • Egress IP does not work without a default gateway.

    • Egress IP does not work on Google Cloud.

    • Egress IP does not work with multiple gateways and instead will forward all traffic to a single gateway.
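
For the IP address exhaustion limitation described above, you can confirm the cause from the CLI by listing the events in the affected namespace; this command is a sketch that assumes a namespace named <namespace>:

$ oc get events -n <namespace> --sort-by='.lastTimestamp'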

Layer 2 and layer 3 topologies

A layer 2 topology creates a distributed virtual switch across cluster nodes. This network topology provides smooth live migration of virtual machines (VMs) within the same subnet. A layer 3 topology creates unique segments per node with routing between them, which effectively manages large broadcast domains.

In a flat layer 2 topology, virtual machines and pods connect to the virtual switch so that all these components can communicate with each other within the same subnet. This topology is useful for the live migration of VMs across nodes in the cluster. The following diagram shows a flat layer 2 topology with two nodes that use the virtual switch for live migration purposes:

Figure 1. A flat layer 2 topology that uses a virtual switch for component communication

If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When you do not specify a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, where the topology might cause a broadcast storm that can degrade network performance.

To access more configurable options for your network, you can integrate a layer 2 topology with a user-defined network (UDN). The following diagram shows two nodes that use a UDN with a layer 2 topology that includes pods that exist on each node. Each node includes two interfaces:

  • A node interface, which connects networking components to the compute node.

  • An Open vSwitch (OVS) bridge such as br-ex, which creates a layer 2 OVN switch so that pods can communicate with each other and share resources.

An external switch connects these two interfaces, while the gateway or router handles routing traffic between the external switch and the layer 2 OVN switch. VMs and pods in a node can use the UDN to communicate with each other. The layer 2 OVN switch handles node traffic over a UDN so that live migration of a VM from one node to another is possible.

Figure 2. A user-defined network (UDN) that uses a layer 2 topology

A layer 3 topology creates a unique layer 2 segment for each node in a cluster. The layer 3 routing mechanism interconnects these segments so that virtual machines and pods that are hosted on different nodes can communicate with each other. A layer 3 topology can effectively manage large broadcast domains by assigning each domain to a specific node, so that broadcast traffic has a reduced scope. To configure a layer 3 topology, you must configure the cidr and hostSubnet parameters, as shown in the following example.
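
The following fragment is a sketch with placeholder values that shows how these parameters relate: the 10.100.0.0/16 cluster-wide cidr is split into per-node /24 subnets:

layer3:
  role: Primary
  subnets:
    - cidr: 10.100.0.0/16   # subnet for the whole user-defined network
      hostSubnet: 24        # prefix length of the per-node slice of the cidr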

About the ClusterUserDefinedNetwork CR

The ClusterUserDefinedNetwork (CUDN) custom resource (CR) provides cluster-scoped network segmentation and isolation in OKD and is intended for use by cluster administrators only. Defining this resource ensures that network traffic is securely partitioned across the entire cluster.

The following diagram demonstrates how a cluster administrator can use the CUDN CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, udn-1 and udn-2. These networks are not connected and the spec.namespaceSelector.matchLabels field is used to select different namespaces. For example, udn-1 configures and isolates communication for namespace-1 and namespace-2, while udn-2 configures and isolates communication for namespace-3 and namespace-4. Isolated tenants (Tenant 1 and Tenant 2) are created by separating namespaces while also allowing pods in the same namespace to communicate.

Figure 3. Tenant isolation using a ClusterUserDefinedNetwork CR

Best practices for ClusterUserDefinedNetwork CRs

To create and deploy a successful instance of the ClusterUserDefinedNetwork (CUDN) CR, administrators must follow best practices such as avoiding the default and openshift-* namespaces, using the proper namespace selector configuration, and ensuring physical network parameter matching.

The following details provide administrators with a best practice for designing a CUDN CR:

  • A ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.

  • ClusterUserDefinedNetwork CRs should not select the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.

  • ClusterUserDefinedNetwork CRs should not select openshift-* namespaces.

  • OKD administrators should be aware that all namespaces of a cluster are selected when one of the following conditions is met:

    • The matchLabels selector is left empty.

    • The matchExpressions selector is left empty.

    • The namespaceSelector is initialized, but does not specify matchExpressions or matchLabels. For example: namespaceSelector: {}.

  • For primary networks, the namespace used for the ClusterUserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created (a command for checking the label follows this list). The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label:

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network.

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR is created that matches the namespace, an error is reported and the network is not created.

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network.

    • If the namespace has the label, and a primary ClusterUserDefinedNetwork CR does not exist, a pod in the namespace is not created until the ClusterUserDefinedNetwork CR is created.
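
To check whether an existing namespace already carries the label before you create a primary ClusterUserDefinedNetwork CR, you can list the namespace labels; this command is a sketch that assumes a namespace named <cudn_namespace_name>:

$ oc get namespace <cudn_namespace_name> --show-labels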

Creating a ClusterUserDefinedNetwork CR by using the CLI

To implement cluster-wide network segmentation and isolation across multiple namespaces, supporting either layer 2 or layer 3 in OKD, create a ClusterUserDefinedNetwork CR by using the CLI. Defining this resource ensures that network traffic is securely partitioned across the cluster.

Based upon your use case, create your request by using either the cluster-layer-two-udn.yaml example for a Layer2 topology type or the cluster-layer-three-udn.yaml example for a Layer3 topology type.

  • The ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network.

  • OKD Virtualization only supports the Layer2 topology.

Prerequisites
  • You have logged in as a user with cluster-admin privileges.

Procedure
  1. Optional: For a ClusterUserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <cudn_namespace_name>
      labels:
        k8s.ovn.org/primary-user-defined-network: ""
    EOF
  2. Create a cluster-wide user-defined network for either a Layer2 or Layer3 topology type:

    1. Create a YAML file, such as cluster-layer-two-udn.yaml, to define your request for a Layer2 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: <cudn_name>
      spec:
        namespaceSelector:
          matchLabels:
            "<label_1_key>": "<label_1_value>"
            "<label_2_key>": "<label_2_value>"
        network:
          topology: Layer2
          layer2:
            role: Primary
            subnets:
              - "2001:db8::/64"
              - "10.100.0.0/16"

      where:

      Name

      Specifies the name of your ClusterUserDefinedNetwork CR.

      namespaceSelector

      Specifies a label query over the set of namespaces that the CUDN CR applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces.

      matchLabels

      Uses the matchLabels selector type, where terms are evaluated with an AND relationship. In this example, the CUDN CR is deployed to namespaces that contain both <label_1_key>=<label_1_value> and <label_2_key>=<label_2_value> labels.

      network

      Describes the network configuration.

      topology

      Describes the network topology; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.

      role

      Specifies Primary or Secondary. Primary is the only role specification supported in 4.18.

      subnets

      For Layer2 topology types the following specifies config details for the field:

      • The subnets field is optional.

      • The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.

      • The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.

      • Layer2 subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address".

    2. Create a YAML file, such as cluster-layer-three-udn.yaml, to define your request for a Layer3 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: ClusterUserDefinedNetwork
      metadata:
        name: <cudn_name>
      spec:
        namespaceSelector:
          matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values: ["<example_namespace_one>", "<example_namespace_two>"]
        network:
          topology: Layer3
          layer3:
            role: Primary
            subnets:
              - cidr: 10.100.0.0/16
                hostSubnet: 24

      where:

      Name

      Specifies the name of your ClusterUserDefinedNetwork CR.

      namespaceSelector

      Specifies a label query over the set of namespaces that the CUDN CR applies to. Must not point to default or openshift-* namespaces. This example uses the matchExpressions selector type, where terms are evaluated with an OR relationship.

      Key

      Specifies the label key to match, together with an operator; valid operator values include In, NotIn, Exists, and DoesNotExist. Because the matchExpressions type is used with the In operator, the network is provisioned for namespaces matching either <example_namespace_one> or <example_namespace_two>.

      network

      Describes the network configuration.

      topology

      The topology field describes the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.

      role

      Specifies Primary or Secondary. Primary is the only role specification supported in 4.18.

      subnets

      For Layer3 topology types the following specifies config details for the subnet field:

      • The subnets field is mandatory.

      • The type for the subnets field is cidr and hostSubnet:

        • cidr is the cluster subnet and accepts a string value.

        • hostSubnet specifies the nodes subnet prefix that the cluster subnet is split to.

        • For IPv6, only a /64 length is supported for hostSubnet.

  3. Apply your request by running the following command:

    $ oc create --validate=true -f <example_cluster_udn>.yaml

    Where <example_cluster_udn>.yaml is the name of your Layer2 or Layer3 configuration file.

  4. Verify that your request is successful by running the following command:

    $ oc get clusteruserdefinednetwork <cudn_name> -o yaml

    Where <cudn_name> is the name that you created for your cluster-wide user-defined network.

    Example output
    apiVersion: k8s.ovn.org/v1
    kind: ClusterUserDefinedNetwork
    metadata:
      creationTimestamp: "2024-12-05T15:53:00Z"
      finalizers:
      - k8s.ovn.org/user-defined-network-protection
      generation: 1
      name: my-cudn
      resourceVersion: "47985"
      uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634
    spec:
      namespaceSelector:
        matchExpressions:
        - key: custom.network.selector
          operator: In
          values:
          - example-namespace-1
          - example-namespace-2
          - example-namespace-3
      network:
        layer3:
          role: Primary
          subnets:
          - cidr: 10.100.0.0/16
        topology: Layer3
    status:
      conditions:
      - lastTransitionTime: "2024-11-19T16:46:34Z"
        message: 'NetworkAttachmentDefinition has been created in following namespaces:
          [example-namespace-1, example-namespace-2, example-namespace-3]'
        reason: NetworkAttachmentDefinitionReady
        status: "True"
        type: NetworkCreated

Creating a ClusterUserDefinedNetwork CR by using the web console

To implement isolated network segments with layer 2 connectivity in OKD, create a ClusterUserDefinedNetwork custom resource (CR) by using the web console. Defining this resource ensures that your cluster workloads can communicate directly at the data link layer.

Currently, creation of a ClusterUserDefinedNetwork CR with a Layer3 topology is not supported when using the OKD web console.

Prerequisites
  • You have access to the OKD web console as a user with cluster-admin permissions.

  • You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.

Procedure
  1. From the Administrator perspective, click Networking → UserDefinedNetworks.

  2. Click ClusterUserDefinedNetwork.

  3. In the Name field, specify a name for the cluster-scoped UDN.

  4. Specify a value in the Subnet field.

  5. In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to.

  6. Click Create. The cluster-scoped UDN serves as the default primary network for pods located in namespaces that contain the labels that you specified in step 5.

About the UserDefinedNetwork CR

To create advanced network segmentation and isolation, users and administrators create UserDefinedNetwork (UDN) custom resources (CRs). UDNs provide granular control over network traffic within specific namespaces.

The following diagram shows four cluster namespaces, where each namespace has a single assigned user-defined network (UDN), and each UDN has an assigned custom subnet for its pod IP allocations. OVN-Kubernetes handles any overlapping UDN subnets. Without using the Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply network policy within a UDN, as shown in the example after Figure 4. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN per namespace, and one or more namespaces to a UDN.

Figure 4. Namespace isolation using a UserDefinedNetwork CR
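
Because network policy can be applied within a UDN for microsegmentation, a policy such as the following sketch restricts ingress in a UDN-attached namespace to pods that carry a hypothetical app=frontend label; the policy name, namespace, and label are placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only        # hypothetical policy name
  namespace: <udn_namespace_name>  # a namespace attached to the UDN
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only pods with this label can connect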

Best practices for UserDefinedNetwork CRs

To deploy a successful instance of the UserDefinedNetwork (UDN) CR, you must follow masquerade IP address requirements, avoid default and openshift-* namespaces, set a proper namespace selector configuration, and ensure physical network parameter matching.

The following details provide a best practice for designing a UDN CR:

  • openshift-* namespaces should not be used to set up a UserDefinedNetwork CR.

  • UserDefinedNetwork CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster.

  • For primary networks, the namespace used for the UserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label:

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network.

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR is created that matches the namespace, a status error is reported and the network is not created.

    • If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network.

    • If the namespace has the label, and a primary UserDefinedNetwork CR does not exist, a pod in the namespace is not created until the UserDefinedNetwork CR is created.

  • Two masquerade IP addresses are required for user-defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks.

    • For OKD 4.17 and later, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnet. These ranges should be avoided by users. For updated clusters, there is no change to the default masquerade subnet.

    • Changing the cluster’s masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UserDefinedNetwork CR has been set up can disrupt the network connectivity and cause configuration issues.

  • Ensure that tenants use the UserDefinedNetwork resource and not the NetworkAttachmentDefinition (NAD) CR. Using the NAD CR instead can create security risks between tenants.

  • When creating network segmentation, you should only use the NetworkAttachmentDefinition CR if user-defined network segmentation cannot be completed using the UserDefinedNetwork CR.

  • The cluster subnet and services CIDR for a UserDefinedNetwork CR cannot overlap with the default cluster subnet CIDR. The OVN-Kubernetes network plugin uses 100.64.0.0/16 as the default join subnet for the network. You must not use that value to configure a UserDefinedNetwork CR’s joinSubnets field. If the default address values are used anywhere in the network for the cluster, you must override the default values by setting the joinSubnets field, as shown in the sketch after this list. For more information, see "Additional configuration details for user-defined networks".
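
The following sketch shows one way to override the join subnet on a UserDefinedNetwork CR; the name and CIDR values are placeholders, and you must verify that the chosen range does not collide with the cluster subnet, the default network cluster subnet, or the masquerade subnet:

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-join-example            # hypothetical name
  namespace: <some_custom_namespace>
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
      - "10.30.0.0/24"              # placeholder pod subnet
    joinSubnets:
      - "100.66.0.0/16"             # placeholder override of the default join subnet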

Creating a UserDefinedNetwork CR by using the CLI

Create a UserDefinedNetwork CR by using the CLI to enable namespace-scoped network segmentation and isolation, allowing you to define custom Layer 2 or Layer 3 network topologies for pods within specific namespaces.

The following procedure creates a UserDefinedNetwork CR that is namespace scoped. Based upon your use case, create your request by using either the my-layer-two-udn.yaml example for a Layer2 topology type or the my-layer-three-udn.yaml example for a Layer3 topology type.

Prerequisites
  • You have logged in with cluster-admin privileges, or you have view and edit role-based access control (RBAC).

Procedure
  1. Optional: For a UserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <udn_namespace_name>
      labels:
        k8s.ovn.org/primary-user-defined-network: ""
    EOF
  2. Create a user-defined network for either a Layer2 or Layer3 topology type:

    1. Create a YAML file, such as my-layer-two-udn.yaml, to define your request for a Layer2 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: UserDefinedNetwork
      metadata:
        name: udn-1
        namespace: <some_custom_namespace>
      spec:
        topology: Layer2
        layer2:
          role: Primary
          subnets:
            - "10.0.0.0/24"
            - "2001:db8::/60"

      where:

      name

      Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).

      topology

      Specifies the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer2 topology type creates one logical switch that is shared by all nodes.

      role

      Specifies a Primary or Secondary role.

      subnets

      For Layer2 topology types the following specifies config details for the subnet field:

      • The subnets field is optional.

      • The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6.

      • The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64.

      • Layer2 subnets can be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing.

      • The Layer2 subnets field is mandatory when the ipamLifecycle field is specified.

    2. Create a YAML file, such as my-layer-three-udn.yaml, to define your request for a Layer3 topology as in the following example:

      apiVersion: k8s.ovn.org/v1
      kind: UserDefinedNetwork
      metadata:
        name: udn-2-primary
        namespace: <some_custom_namespace>
      spec:
        topology: Layer3
        layer3:
          role: Primary
          subnets:
            - cidr: 10.150.0.0/16
              hostSubnet: 24
            - cidr: 2001:db8::/60
              hostSubnet: 64
      # ...

      where:

      name

      Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO).

      topology

      Specifies the network configuration; accepted values are Layer2 and Layer3. Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets.

      role

      Specifies a Primary or Secondary role.

      subnets

      For Layer3 topology types the following specifies config details for the subnet field:

      • The subnets field is mandatory.

      • The type for the subnets field is cidr and hostSubnet:

        • cidr is equivalent to the clusterNetwork configuration settings of a cluster. The IP addresses in the CIDR are distributed to pods in the user-defined network. This parameter accepts a string value.

        • hostSubnet defines the per-node subnet prefix.

        • For IPv6, only a /64 length is supported for hostSubnet.

  3. Apply your request by running the following command:

    $ oc apply -f <my_layer_two_udn>.yaml

    Where <my_layer_two_udn>.yaml is the name of your Layer2 or Layer3 configuration file.

  4. Verify that your request is successful by running the following command:

    $ oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml

    Where <some_custom_namespace> is the namespace that you created for your user-defined network.

    Example output
    apiVersion: k8s.ovn.org/v1
    kind: UserDefinedNetwork
    metadata:
      creationTimestamp: "2024-08-28T17:18:47Z"
      finalizers:
      - k8s.ovn.org/user-defined-network-protection
      generation: 1
      name: udn-1
      namespace: some-custom-namespace
      resourceVersion: "53313"
      uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c
    spec:
      layer2:
        role: Primary
        subnets:
        - 10.0.0.0/24
        - 2001:db8::/60
      topology: Layer2
    status:
      conditions:
      - lastTransitionTime: "2024-08-28T17:18:47Z"
        message: NetworkAttachmentDefinition has been created
        reason: NetworkAttachmentDefinitionReady
        status: "True"
        type: NetworkCreated
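
Optionally, confirm that the corresponding NetworkAttachmentDefinition object was generated in the namespace; this command assumes the same namespace as in the previous step:

$ oc get network-attachment-definitions -n <some_custom_namespace>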

Creating a UserDefinedNetwork CR by using the web console

To implement isolated network segments with layer 2 connectivity in OKD, create a UserDefinedNetwork custom resource (CR) by using the web console. Defining this resource ensures that your cluster workloads can communicate directly at the data link layer.

Currently, creation of a UserDefinedNetwork CR with a Layer3 topology or a Secondary role is not supported when using the OKD web console.

Prerequisites
  • You have access to the OKD web console as a user with cluster-admin permissions.

  • You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label.

Procedure
  1. From the Administrator perspective, click Networking → UserDefinedNetworks.

  2. Click Create UserDefinedNetwork.

  3. From the Project name list, select the namespace that you previously created.

  4. Specify a value in the Subnet field.

  5. Click Create. The user-defined network serves as the default primary network for pods that you create in this namespace.

Additional configuration details for user-defined networks

Configure optional advanced settings for ClusterUserDefinedNetwork and UserDefinedNetwork CRs when default values conflict with your network topology or when you need persistent IP addresses, custom gateways, or specific subnet configurations.

It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology.

The following optional configurations are available for ClusterUserDefinedNetwork (CUDN) and UserDefinedNetwork (UDN) CRs. In the field paths, <topology> is one of layer2 or layer3.

joinSubnets (CUDN field spec.network.<topology>.joinSubnets, UDN field spec.<topology>.joinSubnets, type object)

When omitted, the platform sets default values for the joinSubnets field of 100.65.0.0/16 for IPv4 and fd99::/64 for IPv6. If the default address values are used anywhere in the cluster’s network, you must override them by setting the joinSubnets field. If you choose to set this field, ensure that it does not conflict with other subnets in the cluster, such as the cluster subnet, the default network cluster subnet, and the masquerade subnet.

The joinSubnets field configures the routing between different segments within a user-defined network. Dual-stack clusters can set two subnets, one for each IP family; otherwise, only one subnet is allowed. This field is only allowed for the Primary network.

ipam.lifecycle (CUDN field spec.network.<topology>.ipam.lifecycle, UDN field spec.<topology>.ipam.lifecycle, type object)

The ipam.lifecycle field configures the IP address management (IPAM) system. You might use this field for virtual workloads to ensure persistent IP addresses. The only allowed value is Persistent, which ensures that your virtual workloads have persistent IP addresses across reboots and migration. These addresses are assigned by the container network interface (CNI) and used by OVN-Kubernetes to program pod IP addresses. You must not change this for pod annotations.

Setting a value of Persistent is only supported when the ipam.mode parameter is set to Enabled.

ipam.mode (CUDN field spec.network.<topology>.ipam.mode, UDN field spec.<topology>.ipam.mode, type object)

The mode parameter controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available:

  • Enabled: OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. This is the default setting. When set to Enabled, the subnets field must be defined.

  • Disabled: OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. Disabled is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, for example, network policy and services, no longer function. Additionally, IP port security is also disabled for interfaces attached to this network. The subnets field must be empty when ipam.mode is set to Disabled.

mtu (CUDN field spec.network.<topology>.mtu, UDN field spec.<topology>.mtu, type integer)

The maximum transmission unit (MTU). The default value is 1400. The boundary for IPv4 is 576, and for IPv6 it is 1280.
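
As an illustration only, the following sketch combines some of these optional fields on a Layer2 primary UserDefinedNetwork CR; the name and values are placeholders, and you should omit these fields unless you have a specific need for them:

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-advanced-example        # hypothetical name
  namespace: <some_custom_namespace>
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
      - "10.40.0.0/24"              # required when ipam.lifecycle is set
    ipam:
      lifecycle: Persistent         # keeps virtual workload IP addresses across reboots and migration
    mtu: 1400                       # explicit MTU; 1400 is also the default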

User-defined network status condition types

To troubleshoot your network deployment in OKD, evaluate the status condition types returned for ClusterUserDefinedNetwork and UserDefinedNetwork custom resources (CRs). Reviewing these conditions ensures that you can identify and resolve configuration errors.

Table 1. NetworkCreated condition types (ClusterUserDefinedNetwork and UserDefinedNetwork CRs)

  • Condition type NetworkCreated with a status of True returns the following reason and message:

    • NetworkAttachmentDefinitionCreated: 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]'

  • Condition type NetworkCreated with a status of False returns one of the following reasons and messages:

    • SyncError: failed to generate NetworkAttachmentDefinition

    • SyncError: failed to update NetworkAttachmentDefinition

    • SyncError: primary network already exist in namespace "<namespace_name>": "<primary_network_name>"

    • SyncError: failed to create NetworkAttachmentDefinition: create NAD error

    • SyncError: foreign NetworkAttachmentDefinition with the desired name already exist

    • SyncError: failed to add finalizer to UserDefinedNetwork

    • NetworkAttachmentDefinitionDeleted: NetworkAttachmentDefinition is being deleted: [<namespace>/<nad_name>]

Table 2. NetworkAllocationSucceeded condition types (UserDefinedNetwork CRs)

  • Condition type NetworkAllocationSucceeded with a status of True returns the following reason and message:

    • NetworkAllocationSucceeded: Network allocation succeeded for all synced nodes.

  • Condition type NetworkAllocationSucceeded with a status of False returns the following reason and message:

    • InternalError: Network allocation failed for at least one node: [<node_name>], check UDN events for more info.
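
To review these conditions for a specific resource, you can query its status block directly; the following commands are sketches that reuse the placeholder names from the earlier procedures:

$ oc get clusteruserdefinednetwork <cudn_name> -o jsonpath='{.status.conditions}'

$ oc get userdefinednetworks <udn_name> -n <some_custom_namespace> -o jsonpath='{.status.conditions}'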

Opening default network ports on user-defined network pods

To allow default network pods to connect to a user-defined network pod, you can use the k8s.ovn.org/open-default-ports annotation. This annotation opens specific ports on the user-defined network pod for access from the default network.

By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the OKD image registry, cannot initiate connections to UDN pods.

The following pod specification allows incoming TCP connections on port 80 and UDP traffic on port 53 from the default network:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/open-default-ports: |
      - protocol: tcp
        port: 80
      - protocol: udp
        port: 53
# ...

Open ports are accessible on the pod’s default network IP, not its UDN network IP.