As an alternative to the default SDN, OKD also provides Ansible
playbooks for installing flannel-based networking. This is useful if running
OKD within a cloud provider platform that also relies on SDN, such
as Red Hat OpenStack Platform, and you want to avoid encapsulating packets twice
through both platforms.
Flannel uses a single IP network space for all of the containers, allocating a
contiguous subset of the space to each instance. Consequently, nothing prevents
a container from attempting to contact any IP address in the same network
space. This hinders multi-tenancy because the network cannot be used to isolate
containers in one application from another.
When deciding between OpenShift SDN and flannel for internal networks, base
the choice on whether you prefer multi-tenancy isolation (OpenShift SDN) or
performance (flannel).
|
The current version of Neutron enforces port security on ports by default. This
prevents the port from sending or receiving packets with a MAC address
different from that on the port itself. Flannel creates virtual MACs and IP
addresses and must send and receive packets on the port, so port security must
be disabled on the ports that carry flannel traffic.
|
To enable flannel within your OKD cluster:
-
Neutron port security controls must be configured to be compatible with
Flannel. The default configuration of Red Hat OpenStack Platform disables user
control of port_security. Configure Neutron to allow users to control the
port_security setting on individual ports.
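One way to do this is through the Neutron policy file. A sketch of the
relevant entries in /etc/neutron/policy.json (the exact rule names and
defaults vary by release; verify against your deployment):

```json
{
  "create_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
  "update_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner"
}
```

With rules like these, a network owner can set port_security_enabled on its
own ports without administrator intervention.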
-
On the Neutron servers, add the following to the
/etc/neutron/plugins/ml2/ml2_conf.ini file:
[ml2]
...
extension_drivers = port_security
-
Then, restart the Neutron services:
service neutron-dhcp-agent restart
service neutron-ovs-cleanup restart
service neutron-metadata-agent restart
service neutron-l3-agent restart
service neutron-plugin-openvswitch-agent restart
service neutron-vpn-agent restart
service neutron-server restart
-
When creating the OKD instances on Red Hat OpenStack Platform, disable both
port security and security groups on the ports where the flannel container
network interface will be attached:
neutron port-update $port --no-security-groups --port-security-enabled=False
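When several ports carry flannel traffic, the update can be scripted. A
minimal sketch, assuming hypothetical port IDs; it writes the commands to a
review file rather than running them directly:

```shell
# Hypothetical port IDs; substitute the ports of your own OKD instances.
ports="6ab70dcd-0000-0000-0000-000000000001
6ab70dcd-0000-0000-0000-000000000002"

# Generate one port-update command per port into a review file;
# inspect the file, then execute it with `sh` once it looks correct.
for port in $ports; do
  echo "neutron port-update $port --no-security-groups --port-security-enabled=False"
done > /tmp/flannel-port-updates.sh

cat /tmp/flannel-port-updates.sh
```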
|
Flannel gathers information from etcd to configure and assign
the subnets in the nodes. Therefore, the security group attached to the etcd
hosts should allow access from nodes to port 2379/tcp, and the nodes security
group should allow egress communication to that port on the etcd hosts.
|
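The rules above can be created with the neutron CLI. A sketch, assuming
hypothetical security group names etcd-sg and nodes-sg; it writes the command
to a review file first:

```shell
# Hypothetical security group names; substitute your own.
ETCD_SG=etcd-sg
NODES_SG=nodes-sg

# Write the command to a review file; inspect it, then run it with `sh`.
echo "neutron security-group-rule-create --direction ingress --protocol tcp \
--port-range-min 2379 --port-range-max 2379 \
--remote-group-id $NODES_SG $ETCD_SG" > /tmp/etcd-sg-rule.sh

cat /tmp/etcd-sg-rule.sh
```

A matching egress rule on the nodes security group can be created the same
way with --direction egress.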
-
Set the following variables in your Ansible inventory file before running the
installation:
openshift_use_openshift_sdn=false (1)
openshift_use_flannel=true (2)
flannel_interface=eth0
1 |
Set openshift_use_openshift_sdn to false to disable the default SDN. |
2 |
Set openshift_use_flannel to true to enable flannel in its place. |
-
Optionally, you can specify the interface to use for inter-host communication
using the flannel_interface variable. Without this variable, the
OKD installation uses the default interface.
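Taken together, the relevant fragment of the Ansible inventory file might look
like the following (the [OSEv3:vars] group is the one used by the
openshift-ansible playbooks; flannel_interface is optional):

```ini
[OSEv3:vars]
openshift_use_openshift_sdn=false
openshift_use_flannel=true
flannel_interface=eth0
```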
|
Custom networking CIDR for pods and services using flannel will be supported in a future release.
BZ#1473858
|
-
After the OKD installation, add a set of iptables rules on every OKD node:
iptables -A DOCKER -p all -j ACCEPT
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
To persist those changes in the /etc/sysconfig/iptables file, use the
following command on every node:
cp /etc/sysconfig/iptables{,.orig}
sh -c "tac /etc/sysconfig/iptables.orig | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' | awk '"\!"p && /POSTROUTING/{print \"-A POSTROUTING -o eth1 -j MASQUERADE\"; p=1} 1' | tac > /etc/sysconfig/iptables"
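The effect of that pipeline can be checked on a sample file. A minimal
illustration with hypothetical rule contents: it sets the DOCKER chain policy
to ACCEPT and appends the MASQUERADE rule after the last existing POSTROUTING
rule:

```shell
# Hypothetical /etc/sysconfig/iptables contents for illustration.
cat > /tmp/iptables.orig <<'EOF'
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 172.17.0.0/16 -j MASQUERADE
COMMIT
*filter
:DOCKER - [0:0]
-A FORWARD -j DOCKER
COMMIT
EOF

# Same pipeline, without the sh -c quoting: reverse the file so sed rewrites
# the *last* ":DOCKER -" chain line and awk inserts the MASQUERADE rule
# before the *last* POSTROUTING line, then reverse back.
tac /tmp/iptables.orig \
  | sed -e '0,/:DOCKER -/ s/:DOCKER -/:DOCKER ACCEPT/' \
  | awk '!p && /POSTROUTING/{print "-A POSTROUTING -o eth1 -j MASQUERADE"; p=1} 1' \
  | tac > /tmp/iptables.new

cat /tmp/iptables.new
```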
|
The iptables-save command saves all the current in-memory iptables rules.
However, because Docker, Kubernetes, and OKD create a high number of iptables
rules (services, and so on) that are not designed to be persisted, saving
these rules can become problematic.
|
To isolate container traffic from the rest of the OKD traffic, Red Hat
recommends creating an isolated tenant network and attaching all the nodes to it.
If you are using a different network interface (such as eth1), remember to
configure the interface to start at boot time through the
/etc/sysconfig/network-scripts/ifcfg-eth1 file:
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes
DEFROUTE=no
PEERDNS=no