Using a Load Balancer - Getting Traffic into a Cluster | OpenShift Container Platform 3.10

Overview

If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster.

A load balancer service allocates a unique IP from a configured pool. The load balancer has a single edge router IP (which can be a virtual IP (VIP), but is still a single machine for initial load balancing).

This process involves the following:

  1. The administrator performs the prerequisites.

  2. The developer creates a project and service, if the service to be exposed does not exist.

  3. The developer exposes the service to create a route.

  4. The developer creates the load balancer service.

  5. The administrator configures networking to the service, and optionally configures IP failover.

Administrator Prerequisites

Before starting this procedure, the administrator must:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows the users to set up routes within the cluster without further administrator attention.

  • Make sure that the local firewall on each node permits requests to reach the IP address; a spot-check sketch for this and the DNS prerequisite follows this list.

  • Configure the OpenShift Container Platform cluster to use an identity provider that allows appropriate user access.

  • Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command:

    $ oc adm policy add-cluster-role-to-user cluster-admin <username>
  • Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic.
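
The firewall and DNS prerequisites can be spot-checked from a shell on a node. The following is a minimal sketch: the wildcard domain apps.example.com and port 3306 are illustrative placeholders, so substitute the values used in your environment.

    $ dig +short test.apps.example.com    # any name under the wildcard should resolve to the cluster
    # iptables -I INPUT -p tcp --dport 3306 -j ACCEPT    # open the service port (not persistent across reboots)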

Defining the Public IP Range

The first step in allowing access to a service is to define an external IP address range in the master configuration file:

  1. Log into OpenShift Container Platform as a user with the cluster admin role.

    $ oc login
    Authentication required (openshift)
    Username: admin
    Password:
    Login successful.
    
    You have access to the following projects and can switch between them with 'oc project <projectname>':
      * default
    Using project "default".
  2. Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

    networkConfig:
      externalIPNetworkCIDRs:
      - <ip_address>/<cidr>

    For example:

    networkConfig:
      externalIPNetworkCIDRs:
      - 192.168.120.0/24
  3. Restart the OpenShift Container Platform master services to apply the changes; a quick check of the applied range follows this procedure.

    # master-restart api
    # master-restart controllers

The IP address pool must terminate at one or more nodes in the cluster.
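
To confirm the range was applied after the restart, you can re-read the configuration. This is a simple sanity check, not an exhaustive validation:

    $ grep -A1 externalIPNetworkCIDRs /etc/origin/master/master-config.yaml
      externalIPNetworkCIDRs:
      - 192.168.120.0/24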

Create a Project and Service

If the project and service that you want to expose do not exist, first create the project, then the service.

If the project and service already exist, go to the next step: Expose the Service to Create a Route.

  1. Log into OpenShift Container Platform.

  2. Create a new project for your service:

    $ oc new-project <project_name>

    For example:

    $ oc new-project external-ip
  3. Use the oc new-app command to create a service:

    For example:

    $ oc new-app \
        -e MYSQL_USER=admin \
        -e MYSQL_PASSWORD=redhat \
        -e MYSQL_DATABASE=mysqldb \
        registry.access.redhat.com/openshift3/mysql-55-rhel7
  4. Run the following command to see that the new service is created:

    $ oc get svc
    NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    mysql-55-rhel7     172.30.131.89   <none>        3306/TCP   13m

    By default, the new service does not have an external IP address.
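
Before exposing the service, you can confirm that the pods behind it are running. The pod name in the output below is illustrative; yours will differ:

    $ oc get pods
    NAME                     READY     STATUS    RESTARTS   AGE
    mysql-55-rhel7-1-8tg4x   1/1       Running   0          2m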

Expose the Service to Create a Route

You must expose the service as a route using the oc expose command.

To expose the service:

  1. Log into OpenShift Container Platform.

  2. Switch to the project where the service you want to expose is located.

    $ oc project project1
  3. Run the following command to expose the route:

    $ oc expose service <service-name>

    For example:

    $ oc expose service mysql-55-rhel7
    route "mysql-55-rhel7" exposed
  4. On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

    $ curl <cluster-ip>:<port>

    For example:

    $ curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

    If you have a MySQL client, log in with the standard CLI command:

    $ mysql -h 172.30.131.89 -u admin -p
    Enter password:
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    
    MySQL [(none)]>
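
To review the route that oc expose created, query it directly. The host name in the output is illustrative; yours is generated from the service name, the project, and the router's default subdomain:

    $ oc get route mysql-55-rhel7
    NAME             HOST/PORT                             PATH      SERVICES         PORT       TERMINATION   WILDCARD
    mysql-55-rhel7   mysql-55-rhel7-project1.example.com             mysql-55-rhel7   3306-tcp                 None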

Then, perform the following tasks:

  • Create the load balancer service.

  • Configure networking so that other hosts can reach the service.

  • Optionally, configure IP failover using VIPs.

Create the Load Balancer Service

To create a load balancer service:

  1. Log into OpenShift Container Platform.

  2. Switch to the project where the service you want to expose is located. If the project or service does not exist, see Create a Project and Service.

    $ oc project project1
  3. Open a text file on the master node and paste the following text, editing the file as needed:

    Example 1. Sample load balancer configuration file
    apiVersion: v1
    kind: Service
    metadata:
      name: egress-2 (1)
    spec:
      ports:
      - name: db
        port: 3306 (2)
      loadBalancerIP:
      type: LoadBalancer (3)
      selector:
        name: mysql (4)
    1 Enter a descriptive name for the load balancer service.
    2 Enter the same port that the service you want to expose is listening on.
    3 Enter LoadBalancer as the type.
    4 Enter the selector that matches the pods backing the service you want to expose.
  4. Save and exit the file.

  5. Run the following command to create the service:

    $ oc create -f <file-name>

    For example:

    $ oc create -f mysql-lb.yaml
  6. Execute the following command to view the new service:

    $ oc get svc
    NAME              CLUSTER-IP       EXTERNAL-IP                   PORT(S)                   AGE
    egress-2          172.30.236.167   172.29.121.74,172.29.121.74   3306/TCP                  6s

    Note that the service has an external IP address automatically assigned.

  7. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 172.29.121.74:3306

    The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

    If you have a MySQL client, log in with the standard CLI command:

    $ mysql -h 172.29.121.74 -u admin -p
    Enter password:
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    
    MySQL [(none)]>
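
If you need only the assigned address, for example in a script, oc can extract it with a JSONPath template. This sketch assumes the address is reported in the standard status.loadBalancer.ingress field:

    $ oc get svc egress-2 -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    172.29.121.74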

Configuring Networking

The following steps are general guidelines for configuring the networking required to access the exposed service from other nodes. As network environments vary, consult your network administrator for specific configurations that need to be made within your environment.

These steps assume that all of the systems are on the same subnet.

On the Node:

  1. Restart the network to make sure the network is up.

    $ service network restart
    Restarting network (via systemctl):  [  OK  ]

    If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.

  2. Add a route between the IP address of the exposed service and the IP address of the master host. If the route requires a netmask, use the netmask option with the netmask to use (an iproute2 equivalent is shown after this list):

    $ route add -net 172.29.121.74 netmask 255.255.0.0 gw 10.16.41.22 dev eth0
  3. Use a tool, such as cURL, to make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 172.29.121.74:3306

    If you get a string of characters with the Got packets out of order message, your service is accessible from the node.
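
On hosts that ship iproute2 instead of the legacy net-tools route command, an equivalent of the route in step 2 would be the following, assuming the same example /16 network, gateway, and device:

    $ ip route add 172.29.0.0/16 via 10.16.41.22 dev eth0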

On the system that is not in the cluster:

  1. Restart the network to make sure the network is up.

    $ service network restart
    Restarting network (via systemctl):  [  OK  ]

    If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.

  2. Add a route between the IP address of the exposed service and the IP address of the master host. If the route requires a netmask, use the netmask option with the netmask to use (a sketch for making the route persistent follows this list):

    $ route add -net 172.29.121.74 netmask 255.255.0.0 gw 10.16.41.22 dev eth0
  3. Make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 172.29.121.74:3306

    If you get a string of characters with the Got packets out of order message, your service is accessible outside the cluster.
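
Routes added with route or ip route do not survive a reboot. On RHEL-family systems, one way to persist the example route is an interface route file; the file name and contents below assume the eth0 device and the example addresses above:

    # cat /etc/sysconfig/network-scripts/route-eth0
    172.29.0.0/16 via 10.16.41.22 dev eth0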

Configure IP Failover using VIPs

Optionally, an administrator can configure IP failover.

IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as a single node is available, the VIPs will be served. There is no way to explicitly distribute the VIPs over the nodes. As such, there may be nodes with no VIPs and other nodes with multiple VIPs. If there is only one node, all VIPs will be on it.

The VIPs must be routable from outside the cluster.

To configure IP failover:

  1. On the master, make sure the ipfailover service account has sufficient security privileges:

    $ oc adm policy add-scc-to-user privileged -z ipfailover
  2. Run the following command to create the IP failover:

    $ oc adm ipfailover --virtual-ips=<exposed-ip-address> --watch-port=<exposed-port> --replicas=<number-of-pods> --create

    For example:

    oc adm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create
    --> Creating IP failover ipfailover ...
        serviceaccount "ipfailover" created
        deploymentconfig "ipfailover" created
    --> Success
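
To verify the result, check that the deployment configuration scaled to the requested number of replicas and that a VIP answers. The output is illustrative, and the ping assumes ICMP is permitted to the VIP:

    $ oc get dc ipfailover
    NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
    ipfailover   1          4         4         config
    $ ping -c 1 172.30.233.169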