Configuring an egress service

As a cluster administrator, you can configure egress traffic for pods behind a load balancer service by using an egress service.

Egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can use the EgressService custom resource (CR) to manage egress traffic in the following ways:

  • Assign a load balancer service IP address as the source IP address for egress traffic for pods behind the load balancer service.

    Assigning the load balancer IP address as the source IP address in this context is useful to present a single point of egress and ingress. For example, an external system that communicates with an application behind a load balancer service might expect egress traffic from the application to use the same IP address that the external system uses to reach the application.

    When you assign the load balancer service IP address to egress traffic for pods behind the service, OVN-Kubernetes restricts the ingress and egress point to a single node. This limits the load balancing of traffic that MetalLB typically provides.

  • Assign the egress traffic for pods behind a load balancer to a different network than the default node network.

    This is useful to assign the egress traffic for applications behind a load balancer to a different network than the default network. Typically, the different network is implemented by using a VRF instance associated with a network interface.

Egress service custom resource

Define the configuration for an egress service in an EgressService custom resource. The following YAML describes the fields for the configuration of an egress service:

apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: <egress_service_name> (1)
  namespace: <namespace> (2)
spec:
  sourceIPBy: <egress_traffic_ip> (3)
  nodeSelector: (4)
    matchLabels:
      node-role.kubernetes.io/<role>: ""
  network: <egress_traffic_network> (5)
1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify.
2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
3 Specify the source IP address of egress traffic for pods behind a service. Valid values are LoadBalancerIP or Network. Use the LoadBalancerIP value to assign the LoadBalancer service ingress IP address as the source IP address for egress traffic. Specify Network to assign the network interface IP address as the source IP address for egress traffic.
4 Optional: If you use the LoadBalancerIP value for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. Use the nodeSelector field to limit which node can be assigned this task. When a node is selected to handle the service traffic, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "". When the nodeSelector field is not specified, any node can manage the LoadBalancer service traffic.
5 Optional: Specify the routing table ID for egress traffic. If you do not include the network specification, the egress service uses the default host network.
Example egress service specification
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: test-egress-service
  namespace: test-namespace
spec:
  sourceIPBy: "LoadBalancerIP"
  nodeSelector:
    matchLabels:
      vrf: "true"
  network: "2"
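
In the preceding example, the network: "2" value refers to a host routing table ID, which is typically provided by a VRF instance on the selected nodes. The following NodeNetworkConfigurationPolicy is a minimal sketch of how such a VRF might be defined with the Kubernetes NMState Operator; the policy name, the node selector, and the ens4 interface name are hypothetical values that you must adapt to your environment.

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: egress-vrf-policy # Hypothetical policy name
spec:
  nodeSelector:
    vrf: "true" # Apply the policy to the same nodes that the egress service selects
  desiredState:
    interfaces:
    - name: vrf-2
      type: vrf
      state: up
      vrf:
        port:
        - ens4 # Hypothetical network interface that is added to the VRF
        route-table-id: 2 # Matches the network: "2" value in the EgressService spec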

Deploying an egress service

You can deploy an egress service to manage egress traffic for pods behind a LoadBalancer service.

The following example configures the egress traffic to have the same source IP address as the ingress IP address of the LoadBalancer service.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • You configured MetalLB BGPPeer resources.
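
    If you have not configured a BGP peer yet, the following BGPPeer resource is a minimal sketch; the peer name, peer address, and autonomous system numbers are hypothetical values that must match your BGP infrastructure.

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: example-peer # Hypothetical peer name
      namespace: metallb-system
    spec:
      peerAddress: 10.0.0.1 # Hypothetical IP address of the BGP router
      peerASN: 64501 # Hypothetical autonomous system number of the BGP router
      myASN: 64500 # Hypothetical autonomous system number that MetalLB uses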

Procedure
  1. Create an IPAddressPool CR with the desired IP for the service:

    1. Create a file, such as ip-addr-pool.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: example-pool
        namespace: metallb-system
      spec:
        addresses:
        - 172.19.0.100/32
    2. Apply the configuration for the IP address pool by running the following command:

      $ oc apply -f ip-addr-pool.yaml
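
      Optionally, you can confirm that the example-pool IP address pool exists by running a command such as the following:

      $ oc get ipaddresspools -n metallb-system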
  2. Create Service and EgressService CRs:

    1. Create a file, such as service-egress-service.yaml, with content like the following example:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service
        namespace: example-namespace
        annotations:
          metallb.universe.tf/address-pool: example-pool (1)
      spec:
        selector:
          app: example
        ports:
          - name: http
            protocol: TCP
            port: 8080
            targetPort: 8080
        type: LoadBalancer
      ---
      apiVersion: k8s.ovn.org/v1
      kind: EgressService
      metadata:
        name: example-service
        namespace: example-namespace
      spec:
        sourceIPBy: "LoadBalancerIP" (2)
        nodeSelector: (3)
          matchLabels:
            node-role.kubernetes.io/worker: ""
      1 The LoadBalancer service uses the IP address assigned by MetalLB from the example-pool IP address pool.
      2 This example uses the LoadBalancerIP value to assign the ingress IP address of the LoadBalancer service as the source IP address of egress traffic.
      3 When you specify the LoadBalancerIP value, a single node handles the LoadBalancer service’s traffic. In this example, only nodes with the worker role label can be selected to handle the traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc-namespace>-<svc-name>: "".

      If you use the sourceIPBy: "LoadBalancerIP" setting, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR).

    2. Apply the configuration for the service and egress service by running the following command:

      $ oc apply -f service-egress-service.yaml
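
      Optionally, you can check which node OVN-Kubernetes selected to handle the service traffic by listing nodes that have the label described in the previous callout, for example:

      $ oc get nodes -l egress-service.k8s.ovn.org/example-namespace-example-service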
  3. Create a BGPAdvertisement CR to advertise the service:

    1. Create a file, such as service-bgp-advertisement.yaml, with content like the following example:

      apiVersion: metallb.io/v1beta1
      kind: BGPAdvertisement
      metadata:
        name: example-bgp-adv
        namespace: metallb-system
      spec:
        ipAddressPools:
        - example-pool
        nodeSelectors:
        - matchLabels:
            egress-service.k8s.ovn.org/example-namespace-example-service: "" (1)
      1 In this example, the EgressService CR configures the egress traffic to use the load-balancer service IP address as the source IP address. Therefore, you must restrict the advertisement to the node that handles the load-balancer traffic so that return traffic uses the same path as the egress traffic that originates from the pod.
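
    2. Apply the configuration for the BGP advertisement by running the following command:

      $ oc apply -f service-bgp-advertisement.yaml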
Verification
  1. Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command:

    $ curl <external_ip_address>:<port_number> (1)
    1 Update the external IP address and port number to suit your application endpoint.
  2. If you assigned the LoadBalancer service’s ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client.
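
    For example, on the external client you might run a packet capture similar to the following, where <interface_name> is the interface that receives traffic from the cluster and <load_balancer_ip_address> is the ingress IP address of the LoadBalancer service:

    $ tcpdump -nn -i <interface_name> host <load_balancer_ip_address>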