As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
The OKD egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network.
For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server.
An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations.
In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project.
Egress IP addresses must not be configured in any Linux network configuration files.
Support for the egress IP address functionality on various platforms is summarized in the following table:
Platform | Supported |
---|---|
Bare metal | Yes |
VMware vSphere | Yes |
OpenStack | Yes |
Amazon Web Services (AWS) | Yes |
Google Cloud Platform (GCP) | Yes |
Microsoft Azure | Yes |
The assignment of egress IP addresses to control plane nodes with the egress IP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). (BZ#2039656)
For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity, can be described in the following formula:
IP capacity = public cloud default capacity - sum(current IP assignments)
While the egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes, you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity with this example cloud provider, you would need to allocate 7 additional nodes.
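Applied to the example above, the arithmetic is as follows; the 10-address limit is the hypothetical per-node capacity from the example, not an actual provider quota:
assignable IP addresses = 8 nodes x 10 IP addresses per node = 80
nodes required for 150 egress IP addresses = 150 / 10 = 15, that is, 7 more than the 8 existing nodes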
To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node.
The annotation value is an array with a single object with fields that provide the following information for the primary network interface:
interface
: Specifies the interface ID on AWS and Azure and the interface name on GCP.
ifaddr
: Specifies the subnet mask for one or both IP address families.
capacity
: Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses.
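For a quick check, the following is a minimal sketch of how you might print the annotation and pull out the capacity field. It assumes that your oc client accepts escaped dots in JSONPath keys and that jq is installed on your workstation; neither is required by the feature itself.
$ oc get node <node_name> -o yaml | grep egress-ipconfig
$ oc get node <node_name> -o jsonpath='{.metadata.annotations.cloud\.network\.openshift\.io/egress-ipconfig}' | jq '.[0].capacity'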
Automatic attachment and detachment of egress IP addresses across nodes is available. This allows traffic from many pods in namespaces to have a consistent source IP address at locations outside of the cluster. This capability is supported for both OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OKD 4.12.
The OpenStack egress IP address feature creates a Neutron reservation port for each egress IP address. When an OpenStack cluster administrator assigns a floating IP to that reservation port, OKD cannot delete the reservation port.
The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability.
cloud.network.openshift.io/egress-ipconfig annotation on AWS
cloud.network.openshift.io/egress-ipconfig: [
{
"interface":"eni-078d267045138e436",
"ifaddr":{"ipv4":"10.0.128.0/18"},
"capacity":{"ipv4":14,"ipv6":15}
}
]
cloud.network.openshift.io/egress-ipconfig annotation on GCP
cloud.network.openshift.io/egress-ipconfig: [
{
"interface":"nic0",
"ifaddr":{"ipv4":"10.0.128.0/18"},
"capacity":{"ip":14}
}
]
The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation.
On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type.
On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity.
The following capacity limits exist for IP aliasing assignment:
Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100.
Per VPC, the maximum number of IP aliases is unspecified, but OKD scalability testing reveals the maximum to be approximately 15,000.
For more information, see Per instance quotas and Alias IP ranges overview.
On Azure, the following capacity limits exist for IP address assignment:
Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256.
Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536.
For more information, see Networking limits.
To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied:
At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label.
An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace.
To ensure that egress IP addresses are widely distributed across nodes in the cluster, always apply the label to the nodes that you intend to host the egress IP addresses before creating any EgressIP objects.
When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label:
An egress IP address is never assigned to more than one node at a time.
Egress IP addresses are balanced equally across the available nodes that can host them.
If the spec.egressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply:
No node will ever host more than one of the specified IP addresses.
Traffic is balanced roughly equally between the specified IP addresses for a given namespace.
If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions.
When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod.
Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be used. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2, either might be used for each TCP connection or UDP conversation.
The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network.
Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses.
The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102. The traffic is balanced roughly equally between these two nodes.
The following resources from the diagram are illustrated in detail:
Namespace objects
The namespaces are defined in the following manifest:
apiVersion: v1
kind: Namespace
metadata:
name: namespace1
labels:
env: prod
---
apiVersion: v1
kind: Namespace
metadata:
name: namespace2
labels:
env: prod
EgressIP object
The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod. The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102.
EgressIP object
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: egressips-prod
spec:
egressIPs:
- 192.168.126.10
- 192.168.126.102
namespaceSelector:
matchLabels:
env: prod
status:
items:
- node: node1
egressIP: 192.168.126.10
- node: node3
egressIP: 192.168.126.102
For the configuration in the previous example, OKD assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned.
The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace.
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: <name> (1)
spec:
egressIPs: (2)
- <ip_address>
namespaceSelector: (3)
...
podSelector: (4)
...
1 | The name for the EgressIP object. |
2 | An array of one or more IP addresses. |
3 | One or more selectors for the namespaces to associate the egress IP addresses with. |
4 | Optional: One or more selectors for pods in the specified namespaces to associate egress IP addresses with. Applying these selectors allows for the selection of a subset of pods within a namespace. |
The following YAML describes the stanza for the namespace selector:
namespaceSelector: (1)
matchLabels:
<label_name>: <label_value>
1 | One or more matching rules for namespaces. If more than one match rule is provided, all matching namespaces are selected. |
The following YAML describes the optional stanza for the pod selector:
podSelector: (1)
matchLabels:
<label_name>: <label_value>
1 | Optional: One or more matching rules for pods in the namespaces that match the specified namespaceSelector rules. If specified, only pods that match are selected. Other pods in the namespace are not selected. |
In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod:
EgressIP object
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: egress-group1
spec:
egressIPs:
- 192.168.126.11
- 192.168.126.102
podSelector:
matchLabels:
app: web
namespaceSelector:
matchLabels:
env: prod
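To try this example end to end, one possible workflow is sketched below. The file name egress-group1.yaml is only an assumed name for a file that contains the preceding manifest, and oc get egressip queries the EgressIP custom resources to show where the addresses were assigned.
$ oc apply -f egress-group1.yaml
$ oc get egressip egress-group1 -o yaml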
In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development:
EgressIP object
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
name: egress-group2
spec:
egressIPs:
- 192.168.127.30
- 192.168.127.40
namespaceSelector:
matchExpressions:
- key: environment
operator: NotIn
values:
- development
As a feature of egress IP, the reachabilityTotalTimeoutSeconds parameter configures the total timeout, in seconds, for the egress IP node reachability check. If an egress IP node cannot be reached within this timeout, the node is declared down.
You can set a value for reachabilityTotalTimeoutSeconds in the configuration file for the egressIPConfig object. Setting a large value might cause the egress IP implementation to react slowly to node changes, including nodes that have an issue and are unreachable.
If you omit the reachabilityTotalTimeoutSeconds parameter from the egressIPConfig object, the platform chooses a reasonable default value, which is subject to change over time. The current default is 1 second. A value of 0 disables the reachability check for the egress IP node.
The following egressIPConfig object changes the reachabilityTotalTimeoutSeconds value from the default of 1 second to 5 seconds:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
defaultNetwork:
ovnKubernetesConfig:
egressIPConfig: (1)
reachabilityTotalTimeoutSeconds: 5 (2)
gatewayConfig:
routingViaHost: false
genevePort: 6081
1 | The egressIPConfig stanza holds the configuration options for the EgressIP object. |
2 | The value for reachabilityTotalTimeoutSeconds accepts integer values from 0 to 60. A value of 0 disables the reachability check of the egress IP node. A value from 1 to 60 sets the timeout, in seconds, for the reachability check probe to the node. |
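As one possible way to make this change without editing the manifest by hand, the following sketch patches the cluster Network operator configuration. It assumes a merge patch against the networks.operator.openshift.io/cluster object and is not the only supported method.
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"egressIPConfig":{"reachabilityTotalTimeoutSeconds":5}}}}}'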
You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OKD can assign one or more egress IP addresses to the node.
Install the OpenShift CLI (oc).
Log in to the cluster as a cluster administrator.
To label a node so that it can host one or more egress IP addresses, enter the following command:
$ oc label nodes <node_name> k8s.ovn.org/egress-assignable="" (1)
1 | The name of the node to label. |
You can alternatively apply the following YAML to add the label to a node:
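A minimal sketch of such a manifest follows; replace <node_name> with the name of the node that you want to label.
apiVersion: v1
kind: Node
metadata:
  labels:
    k8s.ovn.org/egress-assignable: ""
  name: <node_name>
After labeling, you can confirm which nodes carry the label by using a label selector, for example:
$ oc get nodes -l k8s.ovn.org/egress-assignable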