Managing Networking | Cluster Administration | OpenShift Container Platform 3.5

Overview

This topic describes the management of the overall cluster network, including project isolation and outbound traffic control.

Pod-level networking features, such as per-pod bandwidth limits, are discussed in Managing Pods.

Managing Pod Networks

When your cluster is configured to use the ovs-multitenant SDN plug-in, you can manage the separate pod overlay networks for projects using the administrator CLI. See the Configuring the SDN section for plug-in configuration steps, if necessary.

Joining Project Networks

To join projects to an existing project network:

$ oc adm pod-network join-projects --to=<project1> <project2> <project3>

In the above example, all the pods and services in <project2> and <project3> can now access any pods and services in <project1> and vice versa. Services can be accessed either by IP or fully-qualified DNS name (<service>.<pod_namespace>.svc.cluster.local). For example, to access a service named db in a project myproject, use db.myproject.svc.cluster.local.

Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
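For example, to join every project that carries a particular label to the network of <project1> (the label shown here is hypothetical):

$ oc adm pod-network join-projects --to=<project1> --selector='team=blue'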

Isolating Project Networks

To isolate project networks in the cluster, run:

$ oc adm pod-network isolate-projects <project1> <project2>

In the above example, all of the pods and services in <project1> and <project2> cannot access any pods and services from other non-global projects in the cluster and vice versa.

Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.

Making Project Networks Global

To allow projects to access all pods and services in the cluster and vice versa:

$ oc adm pod-network make-projects-global <project1> <project2>

In the above example, all the pods and services in <project1> and <project2> can now access any pods and services in the cluster and vice versa.

Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.

Controlling Egress Traffic

As a cluster administrator you can allocate a number of static IP addresses to a specific node at the host level. If an application developer needs a dedicated IP address for their application service, they can request one during the process they use to ask for firewall access. They can then deploy an egress router from the developer’s project, using a nodeSelector in the deployment configuration to ensure that the pod lands on the host with the pre-allocated static IP address.

The egress pod’s deployment declares one of the source IPs, the destination IP of the protected service, and a gateway IP to reach the destination. After the pod is deployed, you can create a service to access the egress router pod, then add that source IP to the corporate firewall. The developer then has the access information for the egress router service that was created in their project, for example, service.project.cluster.domainname.com.

When the developer needs to access the external, firewalled service, they can call out to the egress router pod’s service (service.project.cluster.domainname.com) in their application (for example, the JDBC connection information) rather than the actual protected service URL.
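For instance, a JDBC URL in the application configuration might point at the egress router service rather than the protected database host (the driver, port, and database name below are hypothetical):

jdbc:mysql://service.project.cluster.domainname.com:3306/<database>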

As an OpenShift Container Platform cluster administrator, you can control egress traffic in the following ways:

Firewall

Using an egress firewall allows you to enforce acceptable outbound traffic policies, so that specific endpoints or IP ranges (subnets) are the only targets that the dynamic endpoints (pods within OpenShift Container Platform) are allowed to talk to.

Router

Using an egress router allows you to create identifiable services to send traffic to a specific destination, ensuring an external destination treats traffic as though it were coming from a known source. This helps with security, because it allows you to secure an external database so that only specific pods in a namespace can talk to a service (the egress router), which proxies the traffic to your database.

Using an Egress Firewall to Limit Access to External Resources

As an OpenShift Container Platform cluster administrator, you can use egress firewall policy to limit the external addresses that some or all pods can access from within the cluster, so that:

  • A pod can only talk to internal hosts, and cannot initiate connections to the public Internet.

    Or,

  • A pod can only talk to the public Internet, and cannot initiate connections to internal hosts (outside the cluster).

    Or,

  • A pod cannot reach specified internal subnets/hosts that it should have no reason to contact.

You can configure projects to have different egress policies. For example, you can allow <project A> access to a specified IP range while denying the same access to <project B>, or you can restrict application developers from updating from (Python) pip mirrors and force updates to come only from desired sources.

You must have the ovs-multitenant plug-in enabled in order to limit pod access via egress policy.

Project administrators can neither create EgressNetworkPolicy objects, nor edit the ones you create in their project. There are also several other restrictions on where EgressNetworkPolicy can be created:

  • The default project (and any other project that has been made global via oc adm pod-network make-projects-global) cannot have egress policy.

  • If you merge two projects together using oc adm pod-network join-projects, then you cannot use egress policy in any of the joined projects.

  • No project can have more than one egress policy object.

Violating any of these restrictions results in broken egress policy for the project, and can cause all external network traffic to be dropped.

Use the oc command or the REST API to configure egress policy. You can use oc [create|replace|delete] to manipulate EgressNetworkPolicy objects. The api/swagger-spec/oapi-v1.json file has API-level details on how the objects actually work.

To configure egress policy:

  1. Navigate to the project you want to affect.

  2. Create a JSON file with the desired policy details. For example:

    {
        "kind": "EgressNetworkPolicy",
        "apiVersion": "v1",
        "metadata": {
            "name": "default"
        },
        "spec": {
            "egress": [
                {
                    "type": "Allow",
                    "to": {
                        "cidrSelector": "1.2.3.0/24"
                    }
                },
                {
                    "type": "Deny",
                    "to": {
                        "cidrSelector": "0.0.0.0/0"
                    }
                }
            ]
        }
    }

    When the example above is added in a project, it allows traffic to IP range 1.2.3.0/24, but denies access to all other external IP addresses. (Traffic to other pods is not affected because the policy only applies to external traffic.)

    The rules in an EgressNetworkPolicy are checked in order, and the first one that matches takes effect. If the two rules in the above example were reversed, then traffic would not be allowed to 1.2.3.0/24 because the 0.0.0.0/0 rule would be checked first, and it would match and deny all traffic.

    Domain name updates and any value changes are reflected within 30 minutes.

  3. Use the JSON file to create an EgressNetworkPolicy object:

    # oc create -f <policy>.json
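As noted above, you can later update or remove the policy with oc replace or oc delete. For example, using the same file:

$ oc replace -f <policy>.json
$ oc delete -f <policy>.json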

Using an Egress Router to Allow External Resources to Recognize Pod Traffic

The OpenShift Container Platform egress router runs a service that redirects traffic to a specified remote server, using a private source IP address that is not used for anything else. The service allows pods to talk to servers that are set up to only allow access from whitelisted IP addresses.

The egress router is not intended for every outgoing connection. Creating large numbers of egress routers can push the limits of your network hardware. For example, creating an egress router for every project or application could exceed the number of local MAC addresses that the network interface can handle before falling back to filtering MAC addresses in software.

Currently, the egress router is not compatible with Amazon AWS, because AWS is not compatible with macvlan traffic.

Important Deployment Considerations

The egress router adds a second IP address and MAC address to the node’s primary network interface. If you are not running OpenShift Container Platform on bare metal, you may need to configure your hypervisor or cloud provider to allow the additional address.

Red Hat OpenStack Platform

If you are deploying OpenShift Container Platform on Red Hat OpenStack Platform, you need to whitelist the IP and MAC addresses on your OpenStack environment; otherwise, communication will fail:

neutron port-update $neutron_port_uuid \
  --allowed_address_pairs list=true \
  type=dict mac_address=<mac_address>,ip_address=<ip_address>

Red Hat Enterprise Virtualization

If you are using Red Hat Enterprise Virtualization, you should set EnableMACAntiSpoofingFilterRules to false.

VMware vSphere

If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host’s virtual switch from the vSphere Web Client.

Specifically, ensure that the following are enabled:

  • MAC Address Changes

  • Forged Transmits

  • Promiscuous Mode Operation

Deploying an Egress Router Pod

  1. Create a pod configuration using the following:

    Example 1. Example Pod Definition for an Egress Router
    apiVersion: v1
    kind: Pod
    metadata:
      name: egress-1
      labels:
        name: egress-1
      annotations:
        pod.network.openshift.io/assign-macvlan: "true" (1)
    spec:
      containers:
      - name: egress-router
        image: registry.access.redhat.com/openshift3/ose-egress-router
        securityContext:
          privileged: true
        env:
        - name: EGRESS_SOURCE (2)
          value: 192.168.12.99
        - name: EGRESS_GATEWAY (3)
          value: 192.168.12.1
        - name: EGRESS_DESTINATION (4)
          value: 203.0.113.25
      nodeSelector:
        site: springfield-1 (5)
    1 The pod.network.openshift.io/assign-macvlan annotation creates a Macvlan network interface on the primary network interface, and then moves it into the pod’s network namespace before starting the egress-router container. Preserve the quotation marks around "true". Omitting them will result in errors.
    2 An IP address from the physical network that the node itself is on and is reserved by the cluster administrator for use by this pod.
    3 Same value as the default gateway used by the node itself.
    4 The external server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25, with a source IP address of 192.168.12.99.
    5 The pod will only be deployed to nodes with the label site=springfield-1.
  2. Create the pod using the above definition:

    $ oc create -f <pod_name>.json

    To check that the pod has been created:

    $ oc get pod <pod_name>
  3. Ensure other pods can find the pod’s IP address by creating a service to point to the egress router:

    apiVersion: v1
    kind: Service
    metadata:
      name: egress-1
    spec:
      ports:
      - name: http
        port: 80
      - name: https
        port: 443
      type: ClusterIP
      selector:
        name: egress-1

    Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address.
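    For example, an application pod in the same project could reach the protected server on port 80 through the egress router by connecting to the service name (curl is used here purely as an illustration):

    $ curl http://egress-1.<project>.svc.cluster.local/

    The connection is delivered to 203.0.113.25:80 with a source IP address of 192.168.12.99.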

The pod contains a single container, using the openshift3/ose-egress-router image, and that container is run privileged so that it can configure the Macvlan interface and set up iptables rules.

The environment variables tell the egress-router image what addresses to use; it will configure the Macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as its gateway.

NAT rules are set up so that connections to any TCP or UDP port on the pod’s cluster IP address are redirected to the same port on EGRESS_DESTINATION.
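Conceptually, the setup described above is similar to running the following commands inside the pod’s network namespace. This is only an illustrative sketch using the example addresses; the interface name macvlan0 and the /24 mask are assumptions, and the image performs the actual configuration itself, so its exact rules may differ.

    # assign the reserved source address to the Macvlan interface
    ip addr add 192.168.12.99/24 dev macvlan0
    ip link set macvlan0 up
    # send outbound traffic through the physical network's gateway
    ip route replace default via 192.168.12.1 dev macvlan0
    # redirect connections arriving on the pod's cluster IP to the destination,
    # and rewrite their source to the reserved address on the way out
    iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 203.0.113.25
    iptables -t nat -A POSTROUTING -o macvlan0 -j SNAT --to-source 192.168.12.99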

If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector indicating which nodes are acceptable.
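For example, to pin the egress router pod to one specific node instead of using a label selector, you could set a nodeName in the pod spec (the node name below is hypothetical):

    spec:
      nodeName: node1.example.com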

Enabling Failover for Egress Router Pods

Using a replication controller, you can ensure that there is always one copy of the egress router pod in order to prevent downtime.

  1. Create a replication controller configuration file using the following:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: egress-demo-controller
    spec:
      replicas: 1 (1)
      selector:
        name: egress-demo
      template:
        metadata:
          name: egress-demo
          labels:
            name: egress-demo
          annotations:
            pod.network.openshift.io/assign-macvlan: "true"
        spec:
          containers:
          - name: egress-demo-container
            image: openshift/origin-egress-router
            env:
            - name: EGRESS_SOURCE
              value: 192.168.12.99
            - name: EGRESS_GATEWAY
              value: 192.168.12.1
            - name: EGRESS_DESTINATION
              value: 203.0.113.25
            securityContext:
              privileged: true
          nodeSelector:
            site: springfield-1
    1 Ensure replicas is set to 1, because only one pod can be using a given EGRESS_SOURCE value at any time. This means that only a single copy of the router will be running, on a node with the label site=springfield-1.
  2. Create the replication controller using the definition:

    $ oc create -f <replication_controller>.json
  3. To verify that the replication controller and its pod have been created:

    $ oc describe rc <replication_controller>
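You can also list the pod that the replication controller manages, using the label from the template above:

$ oc get pods -l name=egress-demo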

Enabling Multicast

Enabling multicast networking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the ovs-multitenant or ovs-networkpolicy plug-in, you can enable multicast on a per-project basis by setting an annotation on the project’s corresponding netnamespace object:

# oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled=true

Disable multicast by removing the annotation:

# oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled-

When using the ovs-multitenant plug-in:

  1. In an isolated project, multicast packets sent by a pod will be delivered to all other pods in the project.

  2. If you have joined networks together, you will need to enable multicast in each project’s netnamespace in order for it to take effect in any of the projects. Multicast packets sent by a pod in a joined network will be delivered to all pods in all of the joined-together networks.

  3. To enable multicast in the default project, you must also enable it in all projects that have been made global. Global projects are not "global" for purposes of multicast; multicast packets sent by a pod in a global project will only be delivered to pods in other global projects, not to all pods in all projects. Likewise, pods in global projects will only receive multicast packets sent from pods in other global projects, not from all pods in all projects.

When using the ovs-networkpolicy plug-in:

  1. Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. (Pods may be able to communicate over multicast even when they can’t communicate over unicast.)

  2. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects allowing communication between the two projects.

Enabling NetworkPolicy

Enabling the Kubernetes NetworkPolicy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Kubernetes NetworkPolicy is not currently fully supported by OpenShift Container Platform, and the ovs-subnet and ovs-multitenant plug-ins ignore NetworkPolicy objects. However, a Technology Preview of NetworkPolicy support is available by using the ovs-networkpolicy plug-in.

In a cluster configured to use the ovs-networkpolicy plug-in, network isolation is controlled entirely by NetworkPolicy objects and the associated Namespace annotation. In particular, by default, all projects are able to access pods in all other projects. The project to be isolated must first be configured to opt in to isolation by setting the proper annotation on its Namespace object, and then creating NetworkPolicy objects indicating the incoming connections to be allowed.
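For example, the following sketch first opts a project in to isolation and then allows inbound connections only from pods in the same project. It assumes the beta Namespace annotation and the extensions/v1beta1 NetworkPolicy API available with this Kubernetes version; adjust as needed for your cluster.

# oc annotate namespace <project> \
    'net.beta.kubernetes.io/network-policy={"ingress":{"isolation":"DefaultDeny"}}'

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

Save the NetworkPolicy object to a file and create it with oc create -f <policy>.yaml in the project to be isolated.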

Troubleshooting Throughput Issues

Sometimes applications deployed through OpenShift Container Platform can cause network throughput issues such as unusually high latency between specific services.

Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem:

  • Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.

    For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to/from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.

    $ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2> (1)
    1 podip is the IP address for the pod. Run the following command to get the IP address of the pods:
    # oc get pod <podname> -o wide

    tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:

    # tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789
  • Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes to attempt to locate any bottlenecks. The iperf3 tool is included as part of RHEL 7.

For information on installing and using iperf3, see this Red Hat Solution.
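For example, run iperf3 in server mode on one pod or node, then connect to it from another pod or node (the server address below is a placeholder):

    $ iperf3 -s

    $ iperf3 -c <server_ip>

To measure UDP throughput, add the -u flag on the client:

    $ iperf3 -u -c <server_ip>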