Routes | Developer Guide | OpenShift Container Platform 3.4

Overview

An OpenShift Container Platform route exposes a service at a host name, like www.example.com, so that external clients can reach it by name.

DNS resolution for a host name is handled separately from routing. Your administrator may have configured a cloud domain that always resolves correctly to the OpenShift Container Platform router, or, if you are using an unrelated host name, you may need to modify its DNS records independently so that it resolves to the router.
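
For example, once the DNS records are in place, you can check that a host name resolves to the router before creating a route for it (the router address shown below is hypothetical):

$ dig +short www.example.com
192.0.2.10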

Creating Routes

You can create unsecured and secured routes using the web console or the CLI.

Using the web console, you can navigate to the Browse → Routes page, then click Create Route to define and create a route in your project:

Figure 1. Creating a Route Using the Web Console

Using the CLI, the following example creates an unsecured route:

$ oc expose svc/frontend --hostname=www.example.com

The new route inherits the name from the service unless you specify one using the --name option.
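
For example, to give the route a different name (the name below is hypothetical), you could run:

$ oc expose svc/frontend --name=frontend-test --hostname=www.example.com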

YAML Definition of the Unsecured Route Created Above
apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  path: "/test" (1)
  to:
    kind: Service
    name: frontend
1 For path-based routing, specify a path component that can be compared against a URL.

For information on configuring routes using the CLI, see Route Types.

Unsecured routes are the default configuration, and are therefore the simplest to set up. However, secured routes keep connection traffic encrypted so that it remains private. To create a secured HTTPS route encrypted with a key and certificate (PEM-format files which you must generate and sign separately), use the oc create route command and optionally provide certificates and a key.

TLS is the replacement for SSL for HTTPS and other encrypted protocols.

$ oc create route edge --service=frontend \
    --cert=${MASTER_CONFIG_DIR}/ca.crt \
    --key=${MASTER_CONFIG_DIR}/ca.key \
    --ca-cert=${MASTER_CONFIG_DIR}/ca.crt \
    --hostname=www.example.com
YAML Definition of the Secured Route Created Above
apiVersion: v1
kind: Route
metadata:
  name: frontend
spec:
  host: www.example.com
  to:
    kind: Service
    name: frontend
  tls:
    termination: edge
    key: |-
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----

Currently, password-protected key files are not supported. HAProxy prompts for the password on startup and does not provide a way to automate this process. To remove a passphrase from a key file, you can run:

# openssl rsa -in <passwordProtectedKey.key> -out <new.key>

You can create a secured route without specifying a key and certificate, in which case the router’s default certificate will be used for TLS termination.
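
For example, the following creates an edge-terminated route that falls back to the router's default certificate because no key or certificate is supplied:

$ oc create route edge --service=frontend --hostname=www.example.com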

TLS termination in OpenShift Container Platform relies on SNI for serving custom certificates. Any non-SNI traffic received on port 443 is handled with TLS termination and a default certificate, which may not match the requested host name, resulting in validation errors.

Further information on all types of TLS termination, as well as path-based routing, is available in the Architecture section.

Load Balancing for A/B Testing

You can run two versions of an application and, entirely within OpenShift Container Platform, control the percentage of traffic sent to each version for A/B testing. A/B testing is a method of comparing two versions of an application against each other to determine which one performs better.

Previously, A/B testing only worked by adding or removing pods of each kind (A or B). However, this did not scale well, because achieving a low percentage for B required running a large number of pods. Starting in 3.3, the HAProxy router supports splitting the traffic coming to a route across multiple back end services by weight.

The web console allows users to set the weights and shows the balance between them:

Visualization of Alternate Back Ends in the Web Console

If you have two or more deployments, such as A and B, create a corresponding service for the pods in each deployment, selecting them by label, as in the sketch below.
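
A minimal sketch, assuming two deployment configurations with the hypothetical names ab-example-a and ab-example-b; oc expose creates a service for each one, selecting its pods by the deployment's labels:

$ oc expose dc/ab-example-a
$ oc expose dc/ab-example-b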

The Route resource now has an alternateBackends field, which you can use to specify additional back end services. Use the to and alternateBackends fields to supply the route with all of the back end deployments grouped as services. Use the weight sub-field to specify a relative weight as an integer ranging from 0 to 256; this value defaults to 100. The combined value of all the weights sets the relative proportions of traffic.

When you deploy the route, the router will balance the traffic according to the weights specified for the services.
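
For example, the following route definition (a sketch using the hypothetical service names above) sends 75% of the traffic to ab-example-a and 25% to ab-example-b:

YAML Definition of a Route with a Weighted Alternate Back End
apiVersion: v1
kind: Route
metadata:
  name: ab-example
spec:
  host: ab.example.com
  to:
    kind: Service
    name: ab-example-a
    weight: 75
  alternateBackends:
  - kind: Service
    name: ab-example-b
    weight: 25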

To edit the route, run:

$ oc edit route <route-name>

Then, update the weights of the services in the to and alternateBackends fields.