Configuring an egress firewall for a project

How an egress firewall works in a project

As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios:

  • A pod can only connect to internal hosts and cannot initiate connections to the public internet.

  • A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster.

  • A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster.

  • A pod can connect to only specific external hosts.

For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources.

Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules.

You configure an egress firewall policy by creating an EgressFirewall custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria:

  • An IP address range in CIDR format

  • A DNS name that resolves to an IP address

  • A port number

  • A protocol that is one of TCP, UDP, or SCTP

If your egress firewall includes a deny rule for 0.0.0.0/0, access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers.

The following example illustrates the order of the egress firewall rules necessary to ensure API server access:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: <namespace> (1)
spec:
  egress:
  - to:
      cidrSelector: <api_server_address_range> (2)
    type: Allow
# ...
  - to:
      cidrSelector: 0.0.0.0/0 (3)
    type: Deny
1 The namespace for the egress firewall.
2 The IP address range that includes your OpenShift Container Platform API servers.
3 A global deny rule prevents access to the OpenShift Container Platform API servers.

To find the IP address for your API servers, run oc get ep kubernetes -n default.
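For example, running the command might return output similar to the following; the endpoint addresses shown here are placeholders, and your cluster returns its own values:

$ oc get ep kubernetes -n default
Example output
NAME         ENDPOINTS                                              AGE
kubernetes   172.16.0.11:6443,172.16.0.12:6443,172.16.0.13:6443    6h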

For more information, see BZ#1988324.

Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination.

Limitations of an egress firewall

An egress firewall has the following limitations:

  • No project can have more than one EgressFirewall object.

  • A maximum of one EgressFirewall object with a maximum of 8,000 rules can be defined per project.

  • If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.

Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization.

An EgressFirewall resource can be created in the kube-node-lease, kube-public, kube-system, and openshift projects, and in any project that begins with the openshift- prefix.

Matching order for egress firewall policy rules

The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection.
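For example, the following sketch reuses the placeholder CIDR values from the earlier examples. Because the Allow rule is defined first, traffic to 1.2.3.0/24 is permitted; if the two rules were reversed, the Deny rule for 0.0.0.0/0 would match first and the Allow rule would never apply:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Evaluated first: allow traffic to the placeholder range
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  # Evaluated second: deny all remaining egress traffic
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0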

How Domain Name Server (DNS) resolution works

If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions:

  • Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 minutes. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL and the TTL is less than 30 minutes, the controller sets the duration for that DNS name to the returned value. Each DNS name is queried after the TTL for the DNS record expires.

  • The pod must resolve the domain from the same local name servers when necessary. Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently.

  • Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressFirewall objects is only recommended for domains with infrequent IP address changes.

Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS.

However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server.
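The following sketch illustrates that requirement with placeholder values. It assumes the pods resolve names through an external DNS server at 203.0.113.20, so an additional Allow rule permits DNS queries to that address on port 53:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow egress traffic to the addresses that the domain name resolves to
  - type: Allow
    to:
      dnsName: www.example.com
  # Allow DNS queries to the assumed external DNS server (placeholder address)
  - type: Allow
    to:
      cidrSelector: 203.0.113.20/32
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # Deny all other egress traffic
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0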

Improved DNS resolution and resolving wildcard domain names

There might be situations where the IP addresses associated with a DNS record change frequently, or you might want to specify wildcard domain names in your egress firewall policy rules.

In this situation, the OVN-Kubernetes cluster manager creates a DNSNameResolver custom resource object for each unique DNS name used in your egress firewall policy rules. This custom resource stores the DNS name resolution information shown in the following example CR definition.

Improved dns resolution for egress firewall rules is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Example DNSNameResolver CR definition
apiVersion: networking.openshift.io/v1alpha1
kind: DNSNameResolver
spec:
  name: www.example.com. (1)
status:
  resolvedNames:
  - dnsName: www.example.com. (2)
    resolvedAddress:
    - ip: "1.2.3.4" (3)
      ttlSeconds: 60 (4)
      lastLookupTime: "2023-08-08T15:07:04Z" (5)
1 The DNS name. This can be either a standard DNS name or a wildcard DNS name. For a wildcard DNS name, the DNS name resolution information contains all of the DNS names that match the wildcard DNS name.
2 The resolved DNS name matching the spec.name field. If the spec.name field contains a wildcard DNS name, then multiple dnsName entries are created that contain the standard DNS names that match the wildcard DNS name when resolved. If the wildcard DNS name can also be successfully resolved, then this field also stores the wildcard DNS name.
3 The current IP addresses associated with the DNS name.
4 The last time-to-live (TTL) duration.
5 The last lookup time.

If during DNS resolution the DNS name in the query matches any name defined in a DNSNameResolver CR, then the previous information is updated accordingly in the CR status field. For unsuccessful DNS wildcard name lookups, the request is retried after a default TTL of 30 minutes.

The OVN-Kubernetes cluster manager watches for updates to an EgressFirewall custom resource object, and creates, modifies, or deletes DNSNameResolver CRs associated with those egress firewall policies when that update occurs.

Do not modify DNSNameResolver custom resources directly. This can lead to unwanted behavior of your egress firewall.
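As a sketch only, and assuming the Technology Preview improved DNS resolution feature is enabled on the cluster, an egress firewall rule might reference a wildcard domain name in the dnsName field, for example:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow egress traffic to DNS names that match the wildcard (placeholder domain)
  - type: Allow
    to:
      dnsName: "*.example.com"
  # Deny all other egress traffic
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0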

EgressFirewall custom resource (CR) object

You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to.

The following YAML describes an EgressFirewall CR object:

EgressFirewall object
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: <name> (1)
spec:
  egress: (2)
    ...
1 The name for the object must be default.
2 A collection of one or more egress network policy rules as described in the following section.

EgressFirewall rules

The following YAML describes an egress firewall rule object. You can select an IP address range in CIDR format, a domain name, or a node selector to allow or deny egress traffic. The egress stanza expects an array of one or more objects.

Egress policy rule stanza
egress:
- type: <type> (1)
  to: (2)
    cidrSelector: <cidr> (3)
    dnsName: <dns_name> (4)
    nodeSelector: <label_name>: <label_value> (5)
  ports: (6)
      ...
1 The type of rule. The value must be either Allow or Deny.
2 A stanza describing an egress traffic match rule that specifies the cidrSelector field or the dnsName field. You cannot use both fields in the same rule.
3 An IP address range in CIDR format.
4 A DNS domain name.
5 Labels are key/value pairs that the user defines. Labels are attached to objects, such as nodes. The nodeSelector selects one or more nodes by their labels, and the rule allows or denies egress traffic to the nodes that match.
6 Optional: A stanza describing a collection of network ports and protocols for the rule.
Ports stanza
ports:
- port: <port> (1)
  protocol: <protocol> (2)
1 A network port, such as 80 or 443. If you specify a value for this field, you must also specify a value for protocol.
2 A network protocol. The value must be TCP, UDP, or SCTP.

Example EgressFirewall CR objects

The following example defines several egress firewall policy rules:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress: (1)
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
1 A collection of egress firewall policy rule objects.

The following example defines a policy rule that denies traffic to the host at the 172.16.1.1/32 IP address, if the traffic uses either the TCP protocol and destination port 80, or any protocol and destination port 443.

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 172.16.1.1/32
    ports:
    - port: 80
      protocol: TCP
    - port: 443

Example nodeSelector for EgressFirewall

As a cluster administrator, you can allow or deny egress traffic to nodes in your cluster by specifying a label using nodeSelector. Labels can be applied to one or more nodes. The following is an example with the region=east label:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  - to:
      nodeSelector:
        matchLabels:
          region: east
    type: Allow

Instead of adding manual rules per node IP address, use node selectors to create a label that allows pods behind an egress firewall to access host network pods.
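For example, you might apply the label that the rule selects to one or more nodes with a command similar to the following, where <node_name> is a placeholder:

$ oc label node <node_name> region=east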

Creating an egress firewall policy object

As a cluster administrator, you can create an egress firewall policy object for a project.

If the project already has an EgressFirewall object defined, you must edit the existing policy to make changes to the egress firewall rules.
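For example, one way to update the existing policy is to edit the EgressFirewall object named default directly; the project name is a placeholder:

$ oc edit egressfirewall default -n <project>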

Prerequisites
  • A cluster that uses the OVN-Kubernetes network plugin.

  • Install the OpenShift CLI (oc).

  • You must log in to the cluster as a cluster administrator.

Procedure
  1. Create a policy rule:

    1. Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules.

    2. In the file you created, define an egress policy object, as shown in the sketch after this procedure.

  2. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to.

    $ oc create -f <policy_name>.yaml -n <project>

    In the following example, a new EgressFirewall object is created in a project named project1:

    $ oc create -f default.yaml -n project1
    Example output
    egressfirewall.k8s.ovn.org/default created
  3. Optional: Save the <policy_name>.yaml file so that you can make changes later.
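A minimal sketch of the default.yaml file used in the preceding example, reusing the placeholder CIDR values from the earlier EgressFirewall examples:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
spec:
  egress:
  # Allow egress traffic to the placeholder address range
  - type: Allow
    to:
      cidrSelector: 1.2.3.0/24
  # Deny all other egress traffic
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0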