Using the Default HAProxy Router

Overview

The oadm router command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. If you followed the quick installation, then a default router was automatically created for you. The oadm router command creates the service and deployment configuration objects. Just about every form of communication between OpenShift Container Platform components is secured by TLS and uses various certificates and authentication methods. Use the --service-account option to specify the service account the router will use to contact the master.

Routers directly attach to ports 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80 and 443 are available and not being consumed by another service, using node selectors and the scheduler configuration. For example, you can achieve this by dedicating infrastructure nodes to run services such as routers.

It is recommended to use a separate, distinct openshift-router service account with your router. This can be provided using the --service-account flag to the oadm router command.

$ oadm router --dry-run --service-account=router (1)
1 --service-account is the name of a service account for the openshift-router.

Router pods created using oadm router have default resource requests that a node must satisfy for the router pod to be deployed. In an effort to increase the reliability of infrastructure components, the default resource requests are used to increase the QoS tier of the router pods above pods without resource requests. The default values represent the observed minimum resources required for a basic router to be deployed. You can edit them in the router's deployment configuration, and you may want to increase them based on the load of the router.
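For example, you can adjust the requests in the container spec of the router's deployment configuration with oc edit dc/router. A minimal sketch of the relevant section (the values shown are illustrative, not the documented defaults):

spec:
  template:
    spec:
      containers:
      - name: router
        resources:
          requests:
            cpu: 100m      # illustrative; raise for heavily loaded routers
            memory: 256Mi  # illustrative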

Creating a Router

The quick installation process automatically creates a default router. If the router does not exist, run the following to create a router:

$ oadm router <router_name> --replicas=<number> --service-account=router

You can also use router shards to ensure that the router is filtered to specific namespaces or routes, or set any environment variables after router creation.

Other Basic Router Commands

Checking the Default Router

The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists:

$ oadm router --dry-run --service-account=router
Viewing the Default Router

To see what the default router would look like if created:

$ oadm router -o yaml --service-account=router
Deploying the Router to a Labeled Node

To deploy the router to any node(s) that match a specified node label:

$ oadm router <router_name> --replicas=<number> --selector=<label> \
    --service-account=router

For example, if you want to create a router named router and have it placed on a node labeled with region=infra:

$ oadm router router --replicas=1 --selector='region=infra' \
  --service-account=router

During advanced installation, the openshift_hosted_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default. The default router and registry will only be automatically deployed if a node exists that matches the region=infra label.
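For reference, a minimal sketch of the corresponding advanced installation inventory entries (the exact inventory layout depends on your environment):

[OSEv3:vars]
openshift_hosted_router_selector='region=infra'
openshift_registry_selector='region=infra'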

Multiple instances are created on different hosts according to the scheduler policy.

Using a Different Router Image

To use a different router image and view the router configuration that would be used:

$ oadm router <router_name> -o <format> --images=<image> \
    --service-account=router

For example:

$ oadm router region-west -o yaml --images=myrepo/somerouter:mytag \
    --service-account=router

Filtering Routes to Specific Routers

Using the ROUTE_LABELS environment variable, you can filter routes so that they are used only by specific routers.

For example, if you have multiple routers, and 100 routes, you can attach labels to the routes so that a portion of them are handled by one router, whereas the rest are handled by another.

  1. After creating a router, use the ROUTE_LABELS environment variable to tag the router:

    $ oc env dc/<router_name> ROUTE_LABELS="key=value"
  2. Add the label to the desired routes:

    $ oc label route <route_name> key=value
  3. To verify that the label has been attached to the route, check the route configuration:

    $ oc describe route/<route_name>

Highly-Available Routers

You can set up a highly-available router on your OpenShift Container Platform cluster using IP failover.

Customizing the Router Service Ports

You can customize the service ports that a template router binds to by setting the environment variables ROUTER_SERVICE_HTTP_PORT and ROUTER_SERVICE_HTTPS_PORT. This can be done by creating a template router, then editing its deployment configuration.

The following example creates a router deployment with 0 replicas and customizes the router service HTTP and HTTPS ports, then scales it appropriately (to 1 replica).

$ oadm router --replicas=0 --ports='10080:10080,10443:10443' (1)
$ oc set env dc/router ROUTER_SERVICE_HTTP_PORT=10080  \
                   ROUTER_SERVICE_HTTPS_PORT=10443
$ oc scale dc/router --replicas=1
1 Ensures exposed ports are appropriately set for routers that use the container networking mode --host-network=false.

If you do customize the template router service ports, you will also need to ensure that the nodes where the router pods run have those custom ports opened in the firewall (via Ansible, iptables, firewall-cmd, or any other method you use).

The following is an example using iptables to open the custom router service ports.

$ iptables -A INPUT -p tcp --dport 10080 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 10443 -j ACCEPT
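Alternatively, a sketch of the equivalent using firewall-cmd, assuming firewalld manages the node firewall:

$ firewall-cmd --permanent --add-port=10080/tcp
$ firewall-cmd --permanent --add-port=10443/tcp
$ firewall-cmd --reload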

Working With Multiple Routers

An administrator can create multiple routers with the same definition to serve the same set of routes. By creating different groups of routers with different namespace or route selectors, an administrator can vary the routes that each router serves.

Multiple routers can be grouped to distribute routing load in the cluster and separate tenants to different routers or shards. Each router or shard in the group handles routes based on the selectors in the router. An administrator can create shards over the whole cluster using ROUTE_LABELS. A user can create shards over a namespace (project) by using NAMESPACE_LABELS.

Adding a Node Selector to a Deployment Configuration

Making specific routers deploy on specific nodes requires two steps:

  1. Add a label to the desired node:

    $ oc label node 10.254.254.28 "router=first"
  2. Add a node selector to the router deployment configuration:

    $ oc edit dc <deploymentConfigName>

    Add the template.spec.nodeSelector field with a key and value corresponding to the label:

    ...
      template:
        metadata:
          creationTimestamp: null
          labels:
            router: router1
        spec:
          nodeSelector:      (1)
            router: "first"
    ...
    1 The key and value are router and first, respectively, corresponding to the router=first label.

Using Router Shards

The access controls are based on the service account that the router is run with.

Using NAMESPACE_LABELS and/or ROUTE_LABELS, a router can filter out the namespaces and/or routes that it should service. This enables you to partition routes amongst multiple router deployments effectively distributing the set of routes.

Figure 1. Router Sharding Based on Namespace Labels

Example: A router deployment finops-router is run with route selector NAMESPACE_LABELS="name in (finance, ops)" and a router deployment dev-router is run with route selector NAMESPACE_LABELS="name=dev".

If all routes are in the three namespaces finance, ops or dev, then this could effectively distribute your routes across two router deployments.
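A sketch of how the two deployments above might be created and given their selectors (the --replicas value is illustrative):

$ oadm router finops-router --replicas=1 --service-account=router
$ oc set env dc/finops-router NAMESPACE_LABELS="name in (finance, ops)"
$ oadm router dev-router --replicas=1 --service-account=router
$ oc set env dc/dev-router NAMESPACE_LABELS="name=dev"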

In the above scenario, sharding becomes a special case of partitioning with no overlapping sets. Routes are divided amongst multiple router shards.

The criteria for route selection governs how the routes are distributed. It is possible to have routes that overlap across multiple router deployments.

Example: In addition to the finops-router and dev-router in the example above, you also have devops-router, which is run with a route selector NAMESPACE_LABELS="name in (dev, ops)".

The routes in namespaces dev or ops now are serviced by two different router deployments. This becomes a case in which you have partitioned the routes with an overlapping set.

In addition, this enables you to create more complex routing rules, allowing the diversion of high priority traffic to the dedicated finops-router, but sending the lower priority ones to the devops-router.

NAMESPACE_LABELS allows filtering of the projects to service and selecting all the routes from those projects, but you may want to partition routes based on other criteria in the routes themselves. The ROUTE_LABELS selector allows you to slice-and-dice the routes themselves.

Example: A router deployment prod-router is run with route selector ROUTE_LABELS="mydeployment=prod" and a router deployment devtest-router is run with route selector ROUTE_LABELS="mydeployment in (dev, test)".

The example assumes you have all the routes you want to be serviced tagged with a label "mydeployment=<tag>".
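For instance, tagging a route so that the prod-router serves it might look like this (the route name myapp is hypothetical):

$ oc label route myapp mydeployment=prod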

Figure 2. Router Sharding Based on Namespace Names

Creating Router Shards

Router sharding lets you select how routes are distributed among a set of routers.

Router sharding is based on labels; you set labels on the routes in the pool, and express the desired subset of those routes for the router to serve with a selection expression via the oc set env command.

First, ensure that the service account associated with the router has the cluster-reader permission.
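For example, assuming the router runs in the default project with the router service account, the permission can be granted with:

$ oadm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:default:router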

The rest of this section describes an extended example. Suppose there are 26 routes, named a — z, in the pool, with various labels:

Possible labels on routes in the pool
sla=high       geo=east     hw=modest     dept=finance
sla=medium     geo=west     hw=strong     dept=dev
sla=low                                   dept=ops

These labels express the concepts: service level agreement, geographical location, hardware requirements, and department. The routes in the pool can have at most one label from each column. Some routes may have other labels entirely, or none at all.

Name(s)   SLA      Geo    HW      Dept     Other Labels
a         high     east   modest  finance  type=static
b                  west   strong           type=dynamic
c, d, e   low             modest           type=static
g — k     medium          strong  dev
l — s     high            modest  ops
t — z              west                    type=dynamic

Here is a convenience script mkshard that illustrates how oadm router, oc set env, and oc scale work together to make a router shard.

#!/bin/bash
# Usage: mkshard ID SELECTION-EXPRESSION
id=$1
sel="$2"
router=router-shard-$id           (1)
oadm router $router --replicas=0  (2)
dc=dc/router-shard-$id            (3)
oc set env   $dc ROUTE_LABELS="$sel"  (4)
oc scale $dc --replicas=3         (5)
1 The created router has name router-shard-<id>.
2 Specify no scaling for now.
3 The deployment configuration for the router.
4 Set the selection expression using oc set env. The selection expression is the value of the ROUTE_LABELS environment variable.
5 Scale it up.

Running mkshard several times creates several routers:

Router          Selection Expression  Routes

router-shard-1  sla=high              a, l — s
router-shard-2  geo=west              b, t — z
router-shard-3  dept=dev              g — k

Modifying Router Shards

Because a router shard is a construct based on labels, you can modify either the labels (via oc label) or the selection expression.

This section extends the example started in the Creating Router Shards section, demonstrating how to change the selection expression.

Here is a convenience script modshard that modifies an existing router to use a new selection expression:

#!/bin/bash
# Usage: modshard ID SELECTION-EXPRESSION...
id=$1
shift
router=router-shard-$id       (1)
dc=dc/$router                 (2)
oc scale $dc --replicas=0     (3)
oc set env   $dc "$@"             (4)
oc scale $dc --replicas=3     (5)
1 The modified router has name router-shard-<id>.
2 The deployment configuration where the modifications occur.
3 Scale it down.
4 Set the new selection expression using oc set env. Unlike mkshard from the Creating Router Shards section, the selection expression specified as the non-ID arguments to modshard must include the environment variable name as well as its value.
5 Scale it back up.

In modshard, the oc scale commands are not necessary if the deployment strategy for router-shard-<id> is Rolling.
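One way to check a shard's deployment strategy (using router-shard-3 as an illustration):

$ oc get dc router-shard-3 -o jsonpath='{.spec.strategy.type}'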

For example, to expand the department for router-shard-3 to include ops as well as dev:

$ modshard 3 ROUTE_LABELS='dept in (dev, ops)'

The result is that router-shard-3 now selects routes g — s (the combined sets of g — k and l — s).

This example takes advantage of the fact that there are only three departments in this scenario, and specifies a department to exclude from the shard, thus achieving the same result as the preceding command:

$ modshard 3 ROUTE_LABELS='dept != finance'

This example shows three comma-separated qualities, and results in only route b being selected:

$ modshard 3 ROUTE_LABELS='hw=strong,type=dynamic,geo=west'

Similar to ROUTE_LABELS, which matches on a route’s own labels, you can select routes based on the labels of the route’s namespace with the NAMESPACE_LABELS environment variable. This example modifies router-shard-3 to serve routes whose namespace has the label frequency=weekly:

$ modshard 3 NAMESPACE_LABELS='frequency=weekly'

The last example combines ROUTE_LABELS and NAMESPACE_LABELS to select routes with label sla=low and whose namespace has the label frequency=weekly:

$ modshard 3 \
    NAMESPACE_LABELS='frequency=weekly' \
    ROUTE_LABELS='sla=low'

Using Namespace Router Shards

The routes for a project can be handled by a selected router by using NAMESPACE_LABELS. The router is given a selector for a NAMESPACE_LABELS label and the project that wants to use the router applies the NAMESPACE_LABELS label to its namespace.

First, ensure that the service account associated with the router has the cluster-reader permission. This permits the router to read the labels that are applied to the namespaces.

Now create and label the router:

$ oadm router ...  --service-account=router
$ oc set env dc/router NAMESPACE_LABELS="router=r1"

Because the router has a selector for a namespace label, it only handles routes in namespaces that carry a matching label. Apply the label to a namespace, for example:

$ oc label namespace default "router=r1"

Now create a route in the default namespace, and the route becomes available in the router:

$ oc create -f route1.yaml

Now create a new project (namespace) and create a route, route2.

$ oc new-project p1
$ oc create -f route2.yaml

Notice that the route is not available in your router. Now label namespace p1 with "router=r1":

$ oc label namespace p1 "router=r1"

This makes the route available to the router.

Note that removing the label from the namespace does not take effect immediately (the running router does not pick up the update). To see the effect of removing the label, redeploy or start a new router pod:

$ oc scale dc/router --replicas=0 && oc scale dc/router --replicas=1

Customizing the Default Routing Subdomain

You can customize the suffix used as the default routing subdomain for your environment by modifying the master configuration file (the /etc/origin/master/master-config.yaml file by default). Routes that do not specify a host name would have one generated using this default routing subdomain.

The following example shows how you can set the configured suffix to v3.openshift.test:

routingConfig:
  subdomain: v3.openshift.test

This change requires a restart of the master if it is running.
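For example, on a single-master RPM-based installation, the master service can typically be restarted with:

# systemctl restart atomic-openshift-master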

With the OpenShift Container Platform master(s) running the above configuration, the generated host name for a route named no-route-hostname, created without a host name in the namespace mynamespace, would be:

no-route-hostname-mynamespace.v3.openshift.test

Forcing Route Host Names to a Custom Routing Subdomain

If an administrator wants to restrict all routes to a specific routing subdomain, they can pass the --force-subdomain option to the oadm router command. This forces the router to override any host names specified in a route and generate one based on the template provided to the --force-subdomain option.

The following example runs a router, which overrides the route host names using a custom subdomain template ${name}-${namespace}.apps.example.com.

$ oadm router --force-subdomain='${name}-${namespace}.apps.example.com'

Using Wildcard Certificates

A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift Container Platform CA to create the certificate. For example:

$ CA=/etc/origin/master
$ oadm ca create-server-cert --signer-cert=$CA/ca.crt \
      --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
      --hostnames='*.cloudapps.example.com' \
      --cert=cloudapps.crt --key=cloudapps.key

The router expects the certificate and key to be in PEM format in a single file:

$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem

From there you can use the --default-cert flag:

$ oadm router --default-cert=cloudapps.router.pem --service-account=router

Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.

Manually Redeploy Certificates

To manually redeploy the router certificates:

  1. Check to see if a secret containing the default router certificate was added to the router:

    $ oc volumes dc/router
    
    deploymentconfigs/router
      secret/router-certs as server-certificate
        mounted at /etc/pki/tls/private

    If the certificate is added, skip the following step and overwrite the secret.

  2. Make sure that you have a default certificate directory set in the DEFAULT_CERTIFICATE_DIR environment variable:

    $ oc env dc/router --list
    
    DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private

    If it is not set, set the variable using the following command:

    $ oc env dc/router DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
  3. Export the certificate to PEM format:

    $ cat custom-router.key custom-router.crt custom-ca.crt > custom-router.pem
  4. Overwrite or create a router certificate secret:

    If the certificate secret was added to the router, overwrite the secret. If not, create a new secret.

    To overwrite the secret, run the following command:

    $ oc secrets new router-certs tls.crt=custom-router.pem tls.key=custom-router.key -o json --type='kubernetes.io/tls' --confirm | oc replace -f -

    To create a new secret, run the following commands:

    $ oc secrets new router-certs tls.crt=custom-router.pem tls.key=custom-router.key --type='kubernetes.io/tls' --confirm
    
    $ oc volume dc/router --add --mount-path=/etc/pki/tls/private --secret-name='router-certs' --name router-certs
  5. Deploy the router.

    $ oc deploy router --latest

Using Secured Routes

Currently, password protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a keyfile, you can run:

# openssl rsa -in <passwordProtectedKey.key> -out <new.key>

Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.

First, start up a router instance:

# oadm router --replicas=1 --service-account=router

Next, create a private key, CSR, and certificate for our edge secured route. The instructions on how to do that are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test, see the example shown below:

# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr  \
  -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr  \
      -signkey example-test.key -out example-test.crt

Generate a route using the above certificate and key.

$ oc create route edge --service=my-service \
    --hostname=www.example.test \
    --key=example-test.key --cert=example-test.crt
route "my-service" created

Look at its definition.

$ oc get route/my-service -o yaml
apiVersion: v1
kind: Route
metadata:
  name:  my-service
spec:
  host: www.example.test
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: |
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----

Make sure your DNS entry for www.example.test points to your router instance(s) so that the route to your domain is available. The example below uses curl along with a local resolver to simulate the DNS lookup:

# routerip="4.1.1.1"  #  replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/

Using the Container Network Stack

The OpenShift Container Platform router runs inside a container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.

Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.

This host network behavior is controlled by the --host-network router command line option, and the default behavior is the equivalent of using --host-network=true. If you wish to run the router with the container network stack, use the --host-network=false option when creating the router. For example:

$ oadm router --service-account=router --host-network=false

Internally, this means the router container must publish the 80 and 443 ports in order for the external network to communicate with the router.

Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.

On OpenShift Container Platform clusters using multi-tenant network isolation, routers on a non-default namespace with the --host-network=false option will load all routes in the cluster, but routes across the namespaces will not be reachable due to network isolation. With the --host-network=true option, the router bypasses the container network and can access any pod in the cluster. If isolation is needed in this case, then do not add routes across the namespaces.

Exposing Router Metrics

Using the --metrics-image and --expose-metrics options, you can configure the OpenShift Container Platform router to run a sidecar container that exposes or publishes router metrics for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd).

Depending on your router implementation, the image is appropriately set up and the metrics sidecar container is started when the router is deployed. For example, the HAProxy-based router implementation defaults to using the prom/haproxy-exporter image to run as a sidecar container, which can then be used as a metrics datasource by the Prometheus server.

The --metrics-image option overrides the defaults for HAProxy-based router implementations and, in the case of custom implementations, specifies the image to use for a custom metrics exporter or publisher.

  1. Grab the HAProxy Prometheus exporter image from the Docker registry:

    $ sudo docker pull prom/haproxy-exporter
  2. Create the OpenShift Container Platform router:

    $ oadm router --service-account=router --expose-metrics

    Or, optionally, use the --metrics-image option to override the HAProxy defaults:

    $ oadm router --service-account=router --expose-metrics \
        --metrics-image=prom/haproxy-exporter
  3. Once the haproxy-exporter containers (and your HAProxy router) have started, point Prometheus to the sidecar container on port 9101 on the node where the haproxy-exporter container is running:

    $ haproxy_exporter_ip="<enter-ip-address-or-hostname>"
    $ cat > haproxy-scraper.yml  <<CFGEOF
    ---
    global:
      scrape_interval: "60s"
      scrape_timeout:  "10s"
      # external_labels:
        # source: openshift-router
    
    scrape_configs:
      - job_name:  "haproxy"
        target_groups:
          - targets:
            - "${haproxy_exporter_ip}:9101"
    CFGEOF
    
    $ #  And start prometheus as you would normally using the above config file.
    $ echo "  - Example:  prometheus -config.file=haproxy-scraper.yml "
    $ echo "              or you can start it as a container on {product-title}!!
    
    $ echo "  - Once the prometheus server is up, view the {product-title} HAProxy "
    $ echo "    router metrics at: http://<ip>:9090/consoles/haproxy.html "

Preventing Connection Failures During Restarts

If you connect to the router while the proxy is reloading, there is a small chance that your connection will end up in the wrong network queue and be dropped. The issue is being addressed. In the meantime, it is possible to work around the problem by installing iptables rules to prevent connections during the reload window. However, doing so means that the router needs to run with elevated privilege so that it can manipulate iptables on the host. It also means that connections that happen during the reload are temporarily ignored and must retransmit their connection start, lengthening the time it takes to connect, but preventing connection failure.

To prevent this, configure the router to use iptables by changing the service account, and setting an environment variable on the router.

Use a Privileged SCC

When creating the router, allow it to use the privileged SCC. This gives the router user the ability to create containers with root privileges on the nodes:

$ oadm policy add-scc-to-user privileged -z router

Patch the Router Deployment Configuration to Create a Privileged Container

You can now create privileged containers. Next, configure the router deployment configuration to use this privilege so that the router can set the iptables rules it needs. This patch changes the router deployment configuration so that the container that is created runs as privileged (and therefore gets the correct capabilities) and runs as root:

$ oc patch dc router -p '{"spec":{"template":{"spec":{"containers":[{"name":"router","securityContext":{"privileged":true}}],"securityContext":{"runAsUser": 0}}}}}'

Configure the Router to Use iptables

Set the option on the router deployment configuration:

$ oc set env dc/router -c router DROP_SYN_DURING_RESTART=true

If you used a non-default name for the router, you must change dc/router accordingly.

Protecting Against DDoS Attacks

Add timeout http-request to the default HAProxy router image to protect the deployment against distributed denial-of-service (DDoS) attacks (for example, slowloris):

# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s (1)
  timeout server 10s
  timeout client 30s
1 timeout http-request is set to 5 seconds. HAProxy gives a client 5 seconds to send its whole HTTP request. Otherwise, HAProxy shuts down the connection with an error.

Also, when the environment variable ROUTER_SLOWLORIS_TIMEOUT is set, it limits the amount of time a client has to send the whole HTTP request. Otherwise, HAProxy will shut down the connection.

Setting the environment variable allows information to be captured as part of the router’s deployment configuration and does not require manual modification of the template, whereas manually adding the HAProxy setting requires you to rebuild the router pod and maintain your router template file.
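For example, a sketch of setting the variable on the router deployment configuration (the 5s value is illustrative):

$ oc set env dc/router ROUTER_SLOWLORIS_TIMEOUT=5s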

Using annotations implements basic DDoS protections in the HAProxy template router, including the ability to limit the:

  • number of concurrent TCP connections

  • rate at which a client can request TCP connections

  • rate at which HTTP requests can be made

These are enabled on a per route basis because applications can have extremely different traffic patterns.

Table 1. HAProxy Template Router Settings

haproxy.router.openshift.io/rate-limit-connections
    Enables the settings to be configured (when set to true, for example).

haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp
    The number of concurrent TCP connections that can be made by the same IP address on this route.

haproxy.router.openshift.io/rate-limit-connections.rate-tcp
    The number of TCP connections that can be opened by a client IP.

haproxy.router.openshift.io/rate-limit-connections.rate-http
    The number of HTTP requests that a client IP can make in a 3-second period.
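For example, a sketch of applying these annotations to a single route (the route name myroute and the limit values are illustrative):

$ oc annotate route myroute \
    haproxy.router.openshift.io/rate-limit-connections=true \
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-http=100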