This topic describes the management of the overall cluster network, including project isolation and outbound traffic control.
Pod-level networking features, such as per-pod bandwidth limits, are discussed in Managing Pods.
When your cluster is configured to use the ovs-multitenant SDN plug-in, you can manage the separate pod overlay networks for projects using the administrator CLI. See the Configuring the SDN section for plug-in configuration steps, if necessary.
To join projects to an existing project network:
$ oc adm pod-network join-projects --to=<project1> <project2> <project3>
In the above example, all the pods and services in <project2> and <project3> can now access any pods and services in <project1>, and vice versa. Services can be accessed either by IP or by fully qualified DNS name (<service>.<pod_namespace>.svc.cluster.local). For example, to access a service named db in the project myproject, use db.myproject.svc.cluster.local.
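For example, a minimal sketch of checking access from a pod in <project2> after the join (the availability of getent and nc in the container image, and the port 5432, are assumptions for illustration only):
$ oc rsh <pod_in_project2>
$ getent hosts db.myproject.svc.cluster.local    # resolves to the ClusterIP of the db service
$ nc -z db.myproject.svc.cluster.local 5432      # placeholder port; substitute the service's real port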
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
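For example, assuming the projects to be joined carry a hypothetical label such as environment=dev, a sketch of the same operation using a selector:
$ oc adm pod-network join-projects --to=<project1> --selector='environment=dev'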
To isolate a project network from the rest of the cluster, and vice versa, run:
$ oc adm pod-network isolate-projects <project1> <project2>
In the above example, all of the pods and services in <project1> and <project2> cannot access any pods and services from other non-global projects in the cluster, and vice versa.
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
To allow projects to access all pods and services in the cluster and vice versa:
$ oc adm pod-network make-projects-global <project1> <project2>
In the above example, all the pods and services in <project1> and <project2> can now access any pods and services in the cluster, and vice versa.
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
In OKD, host name collision prevention for routes and ingress objects is enabled by default. This means that users without the cluster-admin role can set the host name in a route or ingress object only on creation and cannot change it afterwards. However, you can relax this restriction on routes and ingress objects for some or all users.
Because OKD uses the object creation timestamp to determine the oldest route or ingress object for a given host name, a route or ingress object can hijack a host name of a newer route if the older route changes its host name, or if an ingress object is introduced.
As an OKD cluster administrator, you can edit the host name in a route even after creation. You can also create a role to allow specific users to do so:
$ oc create -f - <<EOF
apiVersion: v1
kind: ClusterRole
metadata:
  name: route-editor
rules:
- apiGroups:
  - route.openshift.io
  - ""
  resources:
  - routes/custom-host
  verbs:
  - update
EOF
You can then bind the new role to a user:
$ oc adm policy add-cluster-role-to-user route-editor user
You can also disable host name collision prevention for ingress objects. Doing so lets users without the cluster-admin role edit a host name for ingress objects after creation. This is useful for OKD installations that depend on Kubernetes behavior, including allowing the host names in ingress objects to be edited.
Add the following to the master-config.yaml file:
admissionConfig:
pluginConfig:
openshift.io/IngressAdmission:
configuration:
apiVersion: v1
allowHostnameChanges: true
kind: IngressAdmissionConfig
location: ""
Restart the master services for the changes to take effect:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
As a cluster administrator, you can allocate a number of static IP addresses to a specific node at the host level. If an application developer needs a dedicated IP address for their application service, they can request one during the process they use to ask for firewall access. They can then deploy an egress router from the developer’s project, using a nodeSelector in the deployment configuration to ensure that the pod lands on the host with the pre-allocated static IP address.
The egress pod’s deployment declares one of the source IPs, the destination IP of the protected service, and a gateway IP to reach the destination. After the pod is deployed, you can create a service to access the egress router pod, then add that source IP to the corporate firewall. The developer then has access information to the egress router service that was created in their project, for example, service.project.cluster.domainname.com.
When the developer needs to access the external, firewalled service, they can call out to the egress router pod’s service (service.project.cluster.domainname.com) in their application (for example, the JDBC connection information) rather than the actual protected service URL.
As an OKD cluster administrator, you can control egress traffic in these ways:
Using an egress firewall allows you to enforce the acceptable outbound traffic policies, so that specific endpoints or IP ranges (subnets) are the only acceptable targets for the dynamic endpoints (pods within OKD) to talk to.
Using an egress router allows you to create identifiable services to send traffic to certain destinations, ensuring those external destinations treat traffic as though it were coming from a known source. This helps with security, because it allows you to secure an external database so that only specific pods in a namespace can talk to a service (the egress router), which proxies the traffic to your database.
In addition to the above OKD-internal solutions, it is also possible to create iptables rules that will be applied to outgoing traffic. These rules allow for more possibilities than the egress firewall, but cannot be limited to particular projects.
As an OKD cluster administrator, you can use egress firewall policy to limit the external addresses that some or all pods can access from within the cluster, so that:
A pod can only talk to internal hosts, and cannot initiate connections to the public Internet.
Or,
A pod can only talk to the public Internet, and cannot initiate connections to internal hosts (outside the cluster).
Or,
A pod cannot reach specified internal subnets/hosts that it should have no reason to contact.
You can configure projects to have different egress policies. For example, you can allow <project A> access to a specified IP range while denying the same access to <project B>. Or you can restrict application developers from updating from (Python) pip mirrors, forcing updates to come only from desired sources.
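As a hedged illustration of the pip-mirror case, an EgressNetworkPolicy along the following lines (the mirror host name mirror.example.com is a placeholder) would allow outbound traffic only to that mirror and deny all other external destinations; the object format is described in detail below:
{
    "kind": "EgressNetworkPolicy",
    "apiVersion": "v1",
    "metadata": {
        "name": "default"
    },
    "spec": {
        "egress": [
            {
                "type": "Allow",
                "to": {
                    "dnsName": "mirror.example.com"
                }
            },
            {
                "type": "Deny",
                "to": {
                    "cidrSelector": "0.0.0.0/0"
                }
            }
        ]
    }
}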
You must have the ovs-multitenant plug-in enabled in order to limit pod access via egress policy.
Project administrators can neither create EgressNetworkPolicy objects, nor edit the ones you create in their project. There are also several other restrictions on where EgressNetworkPolicy can be created:
The default project (and any other project that has been made global via oc adm pod-network make-projects-global) cannot have egress policy.
If you merge two projects together (via oc adm pod-network join-projects), then you cannot use egress policy in any of the joined projects.
No project may have more than one egress policy object.
Violating any of these restrictions results in broken egress policy for the project, and may cause all external network traffic to be dropped.
Use the oc command or the REST API to configure egress policy. You can use oc [create|replace|delete] to manipulate EgressNetworkPolicy objects. The api/swagger-spec/oapi-v1.json file has API-level details on how the objects actually work.
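For example, a short sketch of the typical lifecycle (it assumes the policy object is named default, as in the sample below):
$ oc get egressnetworkpolicy                # list policies in the current project
$ oc replace -f <policy>.json               # update an existing policy
$ oc delete egressnetworkpolicy default     # remove the policy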
To configure egress policy:
Navigate to the project you want to affect.
Create a JSON file with the desired policy details. For example:
{ "kind": "EgressNetworkPolicy", "apiVersion": "v1", "metadata": { "name": "default" }, "spec": { "egress": [ { "type": "Allow", "to": { "cidrSelector": "1.2.3.0/24" } }, { "type": "Allow", "to": { "dnsName": "www.foo.com" } }, { "type": "Deny", "to": { "cidrSelector": "0.0.0.0/0" } } ] } }
When the example above is added to a project, it allows traffic to the IP range 1.2.3.0/24 and to the domain name www.foo.com, but denies access to all other external IP addresses. Traffic to other pods is not affected because the policy only applies to external traffic.
The rules in an EgressNetworkPolicy are checked in order, and the first one that matches takes effect. If the three rules in the above example were reversed, then traffic would not be allowed to 1.2.3.0/24 and www.foo.com because the 0.0.0.0/0 rule would be checked first, and it would match and deny all traffic.
Domain name updates are polled based on the TTL (time to live) value of the domain returned by the local non-authoritative servers. The pod should also resolve the domain from the same local nameservers when necessary; otherwise, the IP addresses for the domain perceived by the egress network policy controller and the pod will differ, and the egress network policy may not be enforced as expected. Because the egress network policy controller and the pod poll the same local nameserver asynchronously, there could be a race condition where the pod gets the updated IP address before the egress controller does. Due to this current limitation, domain name usage in EgressNetworkPolicy is only recommended for domains with infrequent IP address changes.
The egress firewall always allows pods access to the external interface of the node the pod is on for DNS resolution. If your DNS resolution is not handled by something on the local node, then you will need to add egress firewall rules allowing access to the DNS server’s IP addresses if you are using domain names in your pods. The default installer sets up a local dnsmasq, so if you are using that setup you will not need to add extra rules.
Use the JSON file to create an EgressNetworkPolicy object:
# oc create -f <policy>.json
Exposing services by creating routes will ignore EgressNetworkPolicy.
The OKD egress router runs a service that redirects traffic to a specified remote server, using a private source IP address that is not used for anything else. The service allows pods to talk to servers that are set up to only allow access from whitelisted IP addresses.
The egress router is not intended for every outgoing connection. Creating large numbers of egress routers can push the limits of your network hardware. For example, creating an egress router for every project or application could exceed the number of local MAC addresses that the network interface can handle before falling back to filtering MAC addresses in software.
Currently, the egress router is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic.
Deployment Considerations
The Egress router adds a second IP address and MAC address to the node’s primary network interface. If you are not running OKD on bare metal, you may need to configure your hypervisor or cloud provider to allow the additional address.
If you are deploying OKD on Red Hat OpenStack Platform, you need to whitelist the IP and MAC addresses on your OpenStack environment, otherwise communication will fail:
neutron port-update $neutron_port_uuid \
  --allowed_address_pairs list=true \
  type=dict mac_address=<mac_address>,ip_address=<ip_address>
If you are using Red Hat Enterprise Virtualization, you should set EnableMACAntiSpoofingFilterRules to false.
If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches. View and change VMware vSphere default settings by selecting the host’s virtual switch from the vSphere Web Client.
Specifically, ensure that the following are enabled:
Egress Router Modes
The egress router can run in two different modes: redirect mode and HTTP proxy mode. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode.
In redirect mode, the egress router sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that want to make use of the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP.
Create a pod configuration using the following:
apiVersion: v1
kind: Pod
metadata:
name: egress-1
labels:
name: egress-1
annotations:
pod.network.openshift.io/assign-macvlan: "true" (1)
spec:
initContainers:
- name: egress-router
image: openshift/origin-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE (2)
value: 192.168.12.99
- name: EGRESS_GATEWAY (3)
value: 192.168.12.1
- name: EGRESS_DESTINATION (4)
value: 203.0.113.25
- name: EGRESS_ROUTER_MODE (5)
value: init
containers:
- name: egress-router-wait
image: openshift/origin-pod
nodeSelector:
site: springfield-1 (6)
1 | The pod.network.openshift.io/assign-macvlan annotation creates a Macvlan network interface on the primary network interface, and then moves it into the pod’s network namespace before starting the egress-router container. Preserve the quotation marks around "true". Omitting them results in errors. |
2 | An IP address from the physical network that the node is on, reserved by the cluster administrator for use by this pod. |
3 | Same value as the default gateway used by the node. |
4 | The external server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25, with a source IP address of 192.168.12.99. |
5 | This tells the egress router image that it is being deployed as an "init container". Previous versions of OKD (and the egress router image) did not support this mode and had to be run as an ordinary container. |
6 | The pod is only deployed to nodes with the label site=springfield-1. |
Create the pod using the above definition:
$ oc create -f <pod_name>.json
To check whether the pod has been created:
$ oc get pod <pod_name>
Ensure other pods can find the pod’s IP address by creating a service to point to the egress router:
apiVersion: v1
kind: Service
metadata:
name: egress-1
spec:
ports:
- name: http
port: 80
- name: https
port: 443
type: ClusterIP
selector:
name: egress-1
Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address.
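For example, from another pod in the same project (a sketch; it assumes curl is available in the image and that 203.0.113.25 actually serves HTTP and HTTPS):
$ curl http://egress-1/        # redirected to 203.0.113.25:80, with source IP 192.168.12.99
$ curl -k https://egress-1/    # redirected to 203.0.113.25:443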
The egress router setup is performed by an "init container" created from the openshift/origin-egress-router image, and that container is run privileged so that it can configure the Macvlan interface and set up iptables rules. After it finishes setting up the iptables rules, it exits and the openshift/origin-pod container will run (doing nothing) until the pod is killed.
The environment variables tell the egress-router image what addresses to use; it will configure the Macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as its gateway. NAT rules are set up so that connections to any TCP or UDP port on the pod’s cluster IP address are redirected to the same port on EGRESS_DESTINATION.
If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector indicating which nodes are acceptable.
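For example, a minimal sketch pinning the pod to one specific node by name (the node name is a placeholder; the earlier example shows the equivalent nodeSelector form):
spec:
  nodeName: node1.example.com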
In the previous example, connections to the egress pod (or its corresponding service) on any port are redirected to a single destination IP. You can also configure different destination IPs depending on the port:
apiVersion: v1
kind: Pod
metadata:
name: egress-multi
labels:
name: egress-multi
annotations:
pod.network.openshift.io/assign-macvlan: "true"
spec:
initContainers:
- name: egress-router
image: openshift/origin-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE
value: 192.168.12.99
- name: EGRESS_GATEWAY
value: 192.168.12.1
- name: EGRESS_DESTINATION
value: | (1)
80 tcp 203.0.113.25
8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443
203.0.113.27
- name: EGRESS_ROUTER_MODE
value: init
containers:
- name: egress-router-wait
image: openshift/origin-pod
1 | This uses the YAML syntax for a multi-line string; see below for details. |
Each line of EGRESS_DESTINATION can be one of three types:
<port> <protocol> <IP address> - This says that incoming connections to the given <port> should be redirected to the same port on the given <IP address>. <protocol> is either tcp or udp. In the example above, the first line redirects traffic from local port 80 to port 80 on 203.0.113.25.
<port> <protocol> <IP address> <remote port> - As above, except that the connection is redirected to a different <remote port> on <IP address>. In the example above, the second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26.
<fallback IP address> - If the last line of EGRESS_DESTINATION is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address (eg, 203.0.113.27 in the example above). If there is no fallback IP address, connections on other ports are simply rejected.
For a large or frequently-changing set of destination mappings, you can use a configmap to externally maintain the list, and have the egress router pod read it from there. This comes with the advantage of project administrators being able to edit the configmap, whereas they may not be able to edit the Pod definition directly, because it contains a privileged container.
Create a file containing the EGRESS_DESTINATION data:
$ cat my-egress-destination.txt
# Egress routes for Project "Test", version 3

80   tcp 203.0.113.25

8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443

# Fallback
203.0.113.27
Note that you can put blank lines and comments into this file.
Create a configmap object from the file:
$ oc delete configmap egress-routes --ignore-not-found
$ oc create configmap egress-routes \
    --from-file=destination=my-egress-destination.txt
Here, egress-routes is the name of the configmap object being created and my-egress-destination.txt is the name of the file the data is being read from.
Create an egress router pod definition as above, but specify the configmap for EGRESS_DESTINATION in the environment section:
...
env:
- name: EGRESS_SOURCE
value: 192.168.12.99
- name: EGRESS_GATEWAY
value: 192.168.12.1
- name: EGRESS_DESTINATION
valueFrom:
      configMapKeyRef:
name: egress-routes
key: destination
- name: EGRESS_ROUTER_MODE
value: init
...
The egress router does not automatically update when the configmap changes. Restart the pod to get updates.
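For example, a sketch of recreating a standalone egress router pod after editing the configmap (the names are placeholders taken from the earlier examples):
$ oc delete pod <pod_name>
$ oc create -f <pod_name>.json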
In HTTP proxy mode, the egress router runs as an HTTP proxy on port 8080. This only works for clients talking to HTTP- or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Programs can be told to use an HTTP proxy by setting an environment variable.
Create the pod using the following as an example:
apiVersion: v1
kind: Pod
metadata:
name: egress-http-proxy
labels:
name: egress-http-proxy
annotations:
pod.network.openshift.io/assign-macvlan: "true" (1)
spec:
initContainers:
- name: egress-router-setup
image: openshift/origin-egress-router
securityContext:
privileged: true
env:
- name: EGRESS_SOURCE (2)
value: 192.168.12.99
- name: EGRESS_GATEWAY (3)
value: 192.168.12.1
- name: EGRESS_ROUTER_MODE (4)
value: http-proxy
containers:
- name: egress-router-proxy
image: openshift/origin-egress-router-http-proxy
env:
- name: EGRESS_HTTP_PROXY_DESTINATION (5)
value: |
!*.example.com
!192.168.1.0/24
*
1 | The pod.network.openshift.io/assign-macvlan annotation creates a Macvlan network interface on the primary network interface, then moves it into the pod’s network namespace before starting the egress-router container. Preserve the quotation marks around "true". Omitting them results in errors. |
2 | An IP address from the physical network that the node itself is on and is reserved by the cluster administrator for use by this pod. |
3 | Same value as the default gateway used by the node itself. |
4 | This tells the egress router image that it is being deployed as part of an HTTP proxy, and so it should not set up iptables redirecting rules. |
5 | A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. |
You can specify any of the following for the EGRESS_HTTP_PROXY_DESTINATION value. You can also use *, meaning "allow connections to all remote destinations". Each line in the configuration specifies one group of connections to allow or deny:
An IP address (eg, 192.168.1.1) allows connections to that IP address.
A CIDR range (eg, 192.168.1.0/24) allows connections to that CIDR range.
A host name (eg, www.example.com) allows proxying to that host.
A domain name preceded by *. (eg, *.example.com) allows proxying to that domain and all of its subdomains.
A ! followed by any of the above denies connections rather than allowing them.
If the last line is *, then anything that hasn’t been denied will be allowed. Otherwise, anything that hasn’t been allowed will be denied.
Ensure other pods can find the pod’s IP address by creating a service to point to the egress router:
apiVersion: v1
kind: Service
metadata:
name: egress-1
spec:
ports:
- name: http-proxy
port: 8080 (1)
type: ClusterIP
selector:
name: egress-1
1 | Ensure the http port is always set to 8080. |
Configure the client pod (not the egress proxy pod) to use the HTTP proxy by setting the http_proxy or https_proxy variables:
...
env:
- name: http_proxy
value: http://egress-1:8080/ (1)
- name: https_proxy
value: http://egress-1:8080/
...
1 | The service created in step 2. |
You can also specify the EGRESS_HTTP_PROXY_DESTINATION using a configmap, similarly to the redirecting egress router example above.
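For example, a sketch under the assumption that the proxy rules are kept in a file named my-egress-proxy-destination.txt and that the configmap is named egress-proxy-destination (both placeholder names):
$ cat my-egress-proxy-destination.txt
!*.example.com
!192.168.1.0/24
*

$ oc create configmap egress-proxy-destination \
    --from-file=destination=my-egress-proxy-destination.txt
The EGRESS_HTTP_PROXY_DESTINATION variable in the proxy container can then reference the configmap:
env:
- name: EGRESS_HTTP_PROXY_DESTINATION
  valueFrom:
    configMapKeyRef:
      name: egress-proxy-destination
      key: destination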
Using a replication controller, you can ensure that there is always one copy of the egress router pod in order to prevent downtime.
Create a replication controller configuration file using the following:
apiVersion: v1
kind: ReplicationController
metadata:
  name: egress-demo-controller
spec:
  replicas: 1 (1)
  selector:
    name: egress-demo
  template:
    metadata:
      name: egress-demo
      labels:
        name: egress-demo
      annotations:
        pod.network.openshift.io/assign-macvlan: "true"
    spec:
      initContainers:
      - name: egress-demo-init
        image: openshift/origin-egress-router
        env:
        - name: EGRESS_SOURCE
          value: 192.168.12.99
        - name: EGRESS_GATEWAY
          value: 192.168.12.1
        - name: EGRESS_DESTINATION
          value: 203.0.113.25
        - name: EGRESS_ROUTER_MODE
          value: init
        securityContext:
          privileged: true
      containers:
      - name: egress-demo-wait
        image: openshift/origin-pod
      nodeSelector:
        site: springfield-1
1 | Ensure replicas is set to 1, because only one pod can be using a given EGRESS_SOURCE value at any time. This means that only a single copy of the router will be running, on a node with the label site=springfield-1. |
Create the pod using the definition:
$ oc create -f <replication_controller>.json
To verify, check whether the replication controller has been created:
$ oc describe rc <replication_controller>
Some cluster administrators may want to perform actions on outgoing traffic that do not fit within the model of EgressNetworkPolicy or the egress router. In some cases, this can be done by creating iptables rules directly.
For example, you could create rules that log traffic to particular destinations, or to prevent more than a certain number of outgoing connections per second.
OKD does not provide a way to add custom iptables rules automatically, but it does provide a place where such rules can be added manually by the administrator. Each node, on startup, will create an empty chain called OPENSHIFT-ADMIN-OUTPUT-RULES in the filter table (assuming that the chain does not already exist). Any rules added to that chain by an administrator will be applied to all traffic going from a pod to a destination outside the cluster (and not to any other traffic).
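For example, a minimal sketch of the kinds of rules mentioned above, run as root on each node; the subnet, the log prefix, and the rate limit are placeholders chosen for illustration. The first rule logs pod traffic to a particular external subnet; the next two cap new outgoing TCP connections at roughly ten per second and reject the excess:
# iptables -A OPENSHIFT-ADMIN-OUTPUT-RULES -d 203.0.113.0/24 -j LOG --log-prefix "pod-egress: "
# iptables -A OPENSHIFT-ADMIN-OUTPUT-RULES -p tcp -m conntrack --ctstate NEW -m limit --limit 10/second -j ACCEPT
# iptables -A OPENSHIFT-ADMIN-OUTPUT-RULES -p tcp -m conntrack --ctstate NEW -j REJECT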
There are a few things to watch out for when using this functionality:
It is up to you to ensure that rules get created on each node; OKD does not provide any way to make that happen automatically.
The rules are not applied to traffic that exits the cluster via an egress router, and they run after EgressNetworkPolicy rules are applied (and so will not see traffic that is denied by an EgressNetworkPolicy).
The handling of connections from pods to nodes or pods to the master is complicated, because nodes have both "external" IP addresses and "internal" SDN IP addresses. Thus, some pod-to-node/master traffic may pass through this chain, but other pod-to-node/master traffic may bypass it.
At this time, multicast is best used for low bandwidth coordination or service discovery and not a high-bandwidth solution.
Multicast traffic between OKD pods is disabled by default. If you are using the ovs-multitenant or ovs-networkpolicy plug-in, you can enable multicast on a per-project basis by setting an annotation on the project’s corresponding netnamespace object:
# oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled=true
Disable multicast by removing the annotation:
# oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled-
When using the ovs-multitenant plugin:
In an isolated project, multicast packets sent by a pod will be delivered to all other pods in the project.
If you have joined networks together, you will need to enable multicast in each project’s netnamespace in order for it to take effect in any of the projects. Multicast packets sent by a pod in a joined network will be delivered to all pods in all of the joined-together networks.
To enable multicast in the default project, you must also enable it in all projects that have been made global. Global projects are not "global" for purposes of multicast; multicast packets sent by a pod in a global project will only be delivered to pods in other global projects, not to all pods in all projects. Likewise, pods in global projects will only receive multicast packets sent from pods in other global projects, not from all pods in all projects.
When using the ovs-networkpolicy plugin:
Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. (Pods may be able to communicate over multicast even when they can’t communicate over unicast.)
Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects allowing communication between the two projects.
Kubernetes NetworkPolicy is not currently fully supported by OKD, and the ovs-subnet and ovs-multitenant plug-ins ignore NetworkPolicy objects. However, a Technology Preview of NetworkPolicy support is available by using the ovs-networkpolicy plug-in.
In a cluster configured to use the ovs-networkpolicy plug-in, network isolation is controlled entirely by NetworkPolicy objects. By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
Pods that do not have NetworkPolicy objects pointing to them are fully accessible, whereas pods that have one or more NetworkPolicy objects pointing to them are isolated. These isolated pods only accept connections that are accepted by at least one of their NetworkPolicy objects.
Following are a few sample NetworkPolicy object definitions supporting different scenarios:
Deny All Traffic
To make a project "deny by default" add a NetworkPolicy
object that
matches all pods but accepts no traffic.
networkConfig:
...
networkPluginName: "redhat/openshift-ovs-networkpolicy" (1)
...
1 | Set to redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in |
Only accept connections from pods within the project
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-same-namespace
spec:
  podSelector:
  ingress:
  - from:
    - podSelector: {}
Only allow HTTP and HTTPS traffic based on pod labels
To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
name: allow-http-and-https
spec:
podSelector:
matchLabels:
role: frontend
ingress:
- ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.
For example, for the NetworkPolicy objects defined in the previous samples, you can define both the allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by either policy: that is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
When using the ovs-multitenant plug-in, router traffic is automatically allowed into all namespaces, because the routers are normally in the default namespace, and all namespaces allow connections from pods in that namespace. This does not happen automatically when using the ovs-networkpolicy plug-in, so if you have a policy that isolates a namespace by default, you will need to take additional steps to allow routers access.
Create a policy for each service, allowing access from all sources:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
name: allow-to-database-service
spec:
podSelector:
matchLabels:
role: database
ingress:
- ports:
- protocol: TCP
port: 5432
This allows routers to access the service. However, this also allows pods in other users' namespaces to access it as well. In general, this should not be a problem, because those pods could normally access the service via the public router anyway.
Alternatively, you can create a policy allowing full access from the default namespace, as in the ovs-multitenant plug-in:
First, as a cluster administrator, add a label to the default namespace so it can be matched:
If you already labeled the default project with the default label, you can skip this step.
$ oc label namespace default name=default
Create policies allowing connections from that namespace.
Perform this step for each namespace you want to allow connections into. Users with the Project Administrator role can create policies.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
name: allow-from-default-namespace
spec:
podSelector:
ingress:
- from:
- namespaceSelector:
matchLabels:
name: default
Cluster administrators can modify the default project template to enable automatic creation of default NetworkPolicy objects (one or more) whenever a new project is created. To do this:
Create a custom project template and configure the master to use it, as described in Modifying the Template for New Projects.
Label the default project with the default label:
If you already labeled the default project with the default label, you can skip this step.
$ oc label namespace default name=default
Edit the template to include the desired NetworkPolicy objects:
$ oc edit template project-request -n default
Add each default policy as an element in the objects array:
objects:
...
- apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
...
Sometimes applications deployed through OKD can cause network throughput issues such as unusually high latency between specific services.
Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem:
Use a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.
For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to/from a pod. Latency can occur in OKD if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.
$ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2> (1)
1 | podip is the IP address for the pod. Run the following command to get the IP address of the pods: |
# oc get pod <podname> -o wide
tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:
# tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789
Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes to attempt to locate any bottlenecks. The iperf3 tool is included as part of RHEL 7.
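For example, a sketch of measuring TCP and UDP throughput with iperf3 (the server address is a placeholder; run the server side in one pod or on one node and the client from the other end):
# On the server side:
$ iperf3 -s

# On the client side:
$ iperf3 -c <server_ip> -t 30           # TCP streaming throughput
$ iperf3 -c <server_ip> -u -b 1G -t 30  # UDP throughput at a 1 Gbit/s target rate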