Using the F5 Router Plug-in - Setting up a Router | Installation and Configuration | OpenShift Container Platform 3.9

Overview

The F5 router plug-in is available starting in OpenShift Container Platform 3.0.2.

The F5 router plug-in is provided as a container image and runs as a pod, just like the default HAProxy router.

Support relationships between F5 and Red Hat provide a full scope of support for F5 integration. F5 provides support for the F5 BIG-IP® product, and both F5 and Red Hat jointly support the integration with Red Hat OpenShift. While Red Hat helps with bug fixes and feature enhancements, all requests are communicated to F5 Networks, where they are managed as part of their development cycles.

Prerequisites and Supportability

When deploying the F5 router plug-in, ensure you meet the following requirements:

  • An F5 host IP with:

    • Credentials for API access

    • SSH access via a private key

  • An F5 user with Advanced Shell access

  • A virtual server for HTTP routes:

    • HTTP profile must be http.

  • A virtual server for HTTPS routes:

    • HTTP profile must be http

    • SSL Profile (client) must be clientssl

    • SSL Profile (server) must be serverssl

  • For edge integration (not recommended):

    • A working ramp node

    • A working tunnel to the ramp node

  • For native integration:

    • A host-internal IP capable of communicating with all nodes on port 4789/UDP

    • The sdn-services add-on license installed on the F5 host.
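The access requirements above can be checked up front. The following sketch is illustrative only: the host address, credentials, and key path are placeholders, and it assumes the BIG-IP® iControl REST API is reachable over HTTPS.

```shell
# Hypothetical pre-flight checks for the prerequisites above.
# BIGIP_HOST, the credentials, and the key path are placeholders.
BIGIP_HOST="10.0.0.2"

# API access with the supplied credentials (iControl REST):
curl -sk -u "admin:mypassword" \
    "https://${BIGIP_HOST}/mgmt/tm/sys/version" >/dev/null \
    && echo "API access OK"

# SSH access via the private key:
ssh -i /path/to/key -o BatchMode=yes "admin@${BIGIP_HOST}" true \
    && echo "SSH access OK"

# For native integration, the F5 host must be reachable on 4789/UDP:
nc -zvu "${BIGIP_HOST}" 4789
```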

OpenShift Container Platform supports only the following F5 BIG-IP® versions:

  • 11.x

  • 12.x

The following features are not supported with F5 BIG-IP®:

  • Wildcard routes together with re-encrypt routes - you must supply a certificate and a key in the route. If you provide a certificate, a key, and a certificate authority (CA), the CA is never used.

  • A pool is created for all services, even for the ones with no associated route.

  • Idling applications

  • Unencrypted HTTP traffic in redirect mode, with edge TLS termination. (insecureEdgeTerminationPolicy: Redirect)

  • Sharding, that is, having multiple vservers on the F5.

  • SSL cipher (ROUTER_CIPHERS=modern/old)

  • Customizing the endpoint health checks for time-intervals and the type of checks.

  • Serving F5 metrics by using a metrics server.

  • Specifying a different target port (PreferPort/TargetPort) rather than the ones specified in the service.

  • Customizing the source IP whitelists, that is, allowing traffic for a route only from specific IP addresses.

  • Customizing timeout values, such as max connect time, or tcp FIN timeout.

  • HA mode for the F5 BIG-IP®.

Configuring the Virtual Servers

As a prerequisite to working with the openshift-F5 integrated router, two virtual servers (one for HTTP and one for HTTPS) must be set up in the F5 BIG-IP® appliance.

To set up a virtual server in the F5 BIG-IP® appliance, follow the instructions from F5.

While creating the virtual server, ensure the following settings are in place:

  • For the HTTP server, set the ServicePort to 'http'/80.

  • For the HTTPS server, set the ServicePort to 'https'/443.

  • In the basic configuration, set the HTTP profile to /Common/http for both of the virtual servers.

  • For the HTTPS server, create a default client-ssl profile and select it for the SSL Profile (Client).

    • To create the default client SSL profile, follow the instructions from F5, in particular the Configuring the fallback (default) client SSL profile section, which explains that this certificate/key pair is served by default whenever a custom certificate is not provided for a route or server name.
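For reference, virtual servers matching the settings above could also be created from the BIG-IP® Advanced Shell with tmsh. This is a minimal sketch, not the authoritative procedure: the vserver names and the 10.0.0.2 front-end address are assumptions, and the exact options may vary by BIG-IP® version.

```shell
# Minimal sketch (assumed names and placeholder address 10.0.0.2):
# HTTP virtual server on port 80 with the http profile.
tmsh create ltm virtual ose-vserver \
    destination 10.0.0.2:80 ip-protocol tcp \
    profiles add { http }

# HTTPS virtual server on port 443 with http, clientssl, and serverssl.
tmsh create ltm virtual https-ose-vserver \
    destination 10.0.0.2:443 ip-protocol tcp \
    profiles add { http clientssl serverssl }
```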

Deploying the F5 Router

The F5 router must be run in privileged mode, because route certificates are copied using the scp command:

$ oc adm policy remove-scc-from-user hostnetwork -z router
$ oc adm policy add-scc-to-user privileged -z router

Deploy the F5 router with the oc adm router command, but provide additional flags (or environment variables) specifying the following parameters for the F5 BIG-IP® host:


--type=f5-router

Specifies that an F5 router should be launched (the default --type is haproxy-router).

--external-host

Specifies the F5 BIG-IP® host’s management interface’s host name or IP address.

--external-host-username

Specifies the F5 BIG-IP® user name (typically admin). The F5 BIG-IP user account must have access to the Advanced Shell (Bash) on the F5 BIG-IP system.

--external-host-password

Specifies the F5 BIG-IP® password.

--external-host-http-vserver

Specifies the name of the F5 virtual server for HTTP connections. This must be configured by the user prior to launching the router pod.

--external-host-https-vserver

Specifies the name of the F5 virtual server for HTTPS connections. This must be configured by the user prior to launching the router pod.

--external-host-private-key

Specifies the path to the SSH private key file for the F5 BIG-IP® host. Required to upload and delete key and certificate files for routes.

--external-host-insecure

A Boolean flag that indicates that the F5 router should skip strict certificate verification with the F5 BIG-IP® host.

--external-host-partition-path

Specifies the F5 BIG-IP® partition path (the default is /Common).

For example:

$ oc adm router \
    --type=f5-router \
    --external-host=10.0.0.2 \
    --external-host-username=admin \
    --external-host-password=mypassword \
    --external-host-http-vserver=ose-vserver \
    --external-host-https-vserver=https-ose-vserver \
    --external-host-private-key=/path/to/key \
    --host-network=false \
    --service-account=router

As with the HAProxy router, the oc adm router command creates the service and deployment configuration objects, and thus the replication controllers and pod(s) in which the F5 router itself runs. The replication controller restarts the F5 router in case of crashes. Because the F5 router is watching routes, endpoints, and nodes and configuring F5 BIG-IP® accordingly, running the F5 router in this way, along with an appropriately configured F5 BIG-IP® deployment, should satisfy high-availability requirements.
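Because the oc adm router command records these settings on the router's deployment configuration, they can be inspected or adjusted after deployment as environment variables. The ROUTER_EXTERNAL_HOST_* names below are an assumption about the router image's variable naming; confirm them with --list before relying on them.

```shell
# Sketch: list the router's external-host settings after deployment.
oc set env dc/router --list | grep EXTERNAL_HOST

# Example (assumed variable name): relax certificate verification on an
# existing router, equivalent to the --external-host-insecure flag.
oc set env dc/router ROUTER_EXTERNAL_HOST_INSECURE=true
```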

F5 Router Partition Paths

Partition paths allow you to store your OpenShift Container Platform routing configuration in a custom F5 BIG-IP® administrative partition, instead of the default /Common partition. You can use custom administrative partitions to secure F5 BIG-IP® environments. This means that OpenShift Container Platform-specific configuration stored in F5 BIG-IP® system objects resides within a logical container, allowing administrators to define access control policies on that specific administrative partition.

See the F5 BIG-IP® documentation for more information about administrative partitions.

To configure your OpenShift Container Platform for partition paths:

  1. Optionally, perform some cleaning steps:

    1. Ensure F5 is configured to be able to switch to the /Common and /Custom paths.

    2. Delete the static FDB of vxlan5000. See the F5 BIG-IP® documentation for more information.

  2. Configure a virtual server for the custom partition.

  3. Deploy the F5 router using the --external-host-partition-path flag to specify a partition path:

    $ oc adm router --external-host-partition-path=/OpenShift/zone1 ...
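The custom partition must exist on the BIG-IP® system before the router can use it. As a sketch, it could be created from the Advanced Shell; the OpenShift partition and zone1 folder names below are assumptions matching the example path:

```shell
# Hypothetical: create the administrative partition and sub-folder
# backing the example path /OpenShift/zone1.
tmsh create auth partition OpenShift
tmsh create sys folder /OpenShift/zone1
```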

Setting Up F5 Native Integration

This section reviews how to set up F5 native integration with OpenShift Container Platform. The concepts of F5 appliance and OpenShift Container Platform connection and data flow of F5 native integration are discussed in the F5 Native Integration section of the Routes topic.

Only F5 BIG-IP® appliance version 12.x and above works with the native integration presented in this section. You also need the sdn-services add-on license for the integration to work properly. For version 11.x, follow the instructions to set up a ramp node.

As of OpenShift Container Platform version 3.4, using native integration of F5 with OpenShift Container Platform does not require configuring a ramp node for F5 to be able to reach the pods on the overlay network as created by OpenShift SDN.

The F5 controller pod needs to be launched with enough information so that it can successfully directly connect to pods.

  1. Create a ghost hostsubnet on the OpenShift Container Platform cluster:

    $ cat > f5-hostsubnet.yaml << EOF
    {
        "kind": "HostSubnet",
        "apiVersion": "v1",
        "metadata": {
            "name": "openshift-f5-node",
            "annotations": {
                "pod.network.openshift.io/assign-subnet": "true",
                "pod.network.openshift.io/fixed-vnid-host": "0"  (1)
            }
        },
        "host": "openshift-f5-node",
        "hostIP": "10.3.89.213"  (2)
    }
    EOF
    $ oc create -f f5-hostsubnet.yaml
    1 Make F5 global.
    2 The internal IP of the F5 appliance.
  2. Determine the subnet allocated for the ghost hostsubnet just created:

    $ oc get hostsubnets
    NAME                    HOST                    HOST IP       SUBNET
    openshift-f5-node       openshift-f5-node       10.3.89.213   10.131.0.0/23
    openshift-master-node   openshift-master-node   172.17.0.2    10.129.0.0/23
    openshift-node-1        openshift-node-1        172.17.0.3    10.128.0.0/23
    openshift-node-2        openshift-node-2        172.17.0.4    10.130.0.0/23
  3. Check the SUBNET for the newly created hostsubnet. In this example, 10.131.0.0/23.

  4. Get the entire pod network’s CIDR:

    $ oc get clusternetwork

    This value will be something like 10.128.0.0/14, noting the mask (14 in this example).

  5. To construct the gateway address, pick any IP address from the hostsubnet (for example, 10.131.0.5). Use the mask of the pod network (14). The gateway address becomes: 10.131.0.5/14.

  6. Launch the F5 controller pod, following these instructions. Additionally, allow access to the 'node' cluster resource for the service account, and use the two new options for VXLAN native integration.

    $ # Add policy to allow router to access nodes using the sdn-reader role
    $ oc adm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router
    $ # Launch the router pod with vxlan-gw and F5's internal IP as extra arguments
    $ #--external-host-internal-ip=10.3.89.213
    $ #--external-host-vxlan-gw=10.131.0.5/14
    $ oc adm router \
        --type=f5-router \
        --external-host=10.3.89.90 \
        --external-host-username=admin \
        --external-host-password=mypassword \
        --external-host-http-vserver=ose-vserver \
        --external-host-https-vserver=https-ose-vserver \
        --external-host-private-key=/path/to/key \
        --service-account=router \
        --host-network=false \
        --external-host-internal-ip=10.3.89.213 \
        --external-host-vxlan-gw=10.131.0.5/14

    The external-host-username is an F5 BIG-IP® user account with access to the Advanced Shell (Bash) on the F5 BIG-IP® system.
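Steps 2 through 5 above amount to simple string handling. The following sketch derives the --external-host-vxlan-gw value from the example values shown earlier; with a live cluster, the two inputs would come from oc get hostsubnets and oc get clusternetwork instead of being hard-coded.

```shell
# Derive the VXLAN gateway argument from the ghost hostsubnet's SUBNET
# and the cluster network's mask (values taken from the example above).
HOSTSUBNET="10.131.0.0/23"   # SUBNET of openshift-f5-node
CLUSTER_MASK="14"            # mask of the cluster network 10.128.0.0/14

BASE="${HOSTSUBNET%/*}"              # drop the hostsubnet's own mask: 10.131.0.0
GW_IP="${BASE%.*}.5"                 # pick any host address in the subnet: 10.131.0.5
VXLAN_GW="${GW_IP}/${CLUSTER_MASK}"  # gateway address: 10.131.0.5/14

echo "--external-host-vxlan-gw=${VXLAN_GW}"
```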

The F5 setup is now ready, without the need to set up the ramp node.