High Availability

Overview

This topic describes how to set up highly-available services on your OpenShift Enterprise cluster.

The Kubernetes replication controller ensures that the deployment requirements, in particular the number of replicas, are satisfied when the appropriate resources are available. When run with two or more replicas, the router can be resilient to failures, providing a highly-available service. Depending on how the router instances are discovered (via a service, DNS entry, or IP addresses), this could impose operational requirements to handle failure cases when one or more router instances are "unreachable".
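The router created by the oadm router command is backed by a deployment configuration, so one way to run it with two or more replicas is to scale that deployment configuration. A minimal sketch, assuming the default deployment configuration name router:

$ oc scale dc/router --replicas=2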

For some IP-based traffic services, virtual IP addresses (VIPs) should always be serviced for as long as a single instance is available. This simplifies the operational overhead and handles failure cases gracefully.

Even though a service is highly available, performance can still be affected.

Use cases for high-availability include:

  • I want my cluster to be assigned a resource set and I want the cluster to automatically manage those resources.

  • I want my cluster to be assigned a set of VIPs that the cluster manages and migrates (with zero or minimal downtime) on failure conditions, and I should not be required to perform any manual interactions to update the upstream "discovery" sources (e.g., DNS). The cluster should service all the assigned VIPs when at least a single node is available, even if the currently available resources are not sufficient to reach the desired state.

You can configure a highly-available router or network setup by running multiple instances of the pod and fronting them with a balancing tier. This can be something as simple as DNS round robin, or as complex as multiple load-balancing layers.
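For example, DNS round robin can be as simple as publishing one name with an A record for each node that runs a router instance, so that clients rotate across the instances. A hypothetical BIND-style zone fragment (the name and addresses are placeholders, not values from this topic):

router.example.com.    300  IN  A  192.0.2.101
router.example.com.    300  IN  A  192.0.2.102
router.example.com.    300  IN  A  192.0.2.103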

Configuring IP Failover

Using IP failover involves switching IP addresses to a redundant or stand-by set of nodes on failure conditions.

At the time of writing, ipfailover is not compatible with cloud infrastructures. In the case of AWS, an Elastic Load Balancer (ELB) can be used to make OpenShift Enterprise highly available, using the AWS console.

The oadm ipfailover command helps set up the VIP failover configuration. As an administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the labeled selector. If you are running in production, match the labeled selector with at least two nodes to ensure you have failover protection and provide a --replicas=<n> value that matches the number of nodes for the given labeled selector:

$ oadm ipfailover [<Ip_failover_config_name>] <options> --replicas=<n>

The oadm ipfailover command ensures that a failover pod runs on each of the nodes matching the constraints or label used. This pod uses VRRP (Virtual Router Redundancy Protocol) with Keepalived to ensure that the service on the watched port is available, and, if needed, Keepalived will automatically float the VIPs if the service is not available.
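Keepalived adds each VIP it is currently serving as an additional address on the node's interface, so one way to see which node holds a given VIP is to check the addresses on each labeled node. A minimal sketch (the interface name and VIP below are assumptions taken from the examples later in this topic):

$ ip addr show eth0 | grep 10.245.2.101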

Virtual IP Addresses

Keepalived manages a set of virtual IP addresses. The administrator must make sure that all these addresses:

  • Are accessible on the configured hosts from outside the cluster.

  • Are not used for any other purpose within the cluster.

Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node will serve the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled.

Each VIP in the set may end up being served by a different node.
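One way to check these requirements is to probe a candidate VIP before it is assigned and again after the failover pods are running. A minimal sketch (the interface name and VIP are assumptions taken from the examples later in this topic):

$ # Before assigning the VIP, confirm nothing on the segment already answers for it:
$ arping -D -I eth0 -c 3 10.245.2.101
$ # Once ipfailover is running, the VIP should be reachable from outside the cluster:
$ ping -c 3 10.245.2.101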

Option          Variable Name               Notes
--virtual-ips   OPENSHIFT_HA_VIRTUAL_IPS    The list of IP address ranges to replicate. This must be provided. (For example, 1.2.3.4-6,1.2.3.9.)
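The --virtual-ips option supplied to oadm ipfailover ends up as the OPENSHIFT_HA_VIRTUAL_IPS environment variable on the deployment configuration it creates, so you can inspect what a running configuration is using by listing its environment. A sketch using the deployment configuration name from the examples later in this topic:

$ oc set env dc/ipf-ha-router-us-west --list | grep OPENSHIFT_HA_VIRTUAL_IPS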

Configuring a Highly-available Routing Service

The following steps describe how to set up a highly-available router environment with IP failover:

  1. Label the nodes for the service. This step can be optional if you run the service on any of the nodes in your Kubernetes cluster and use VIPs that can float within those nodes. This process may already exist within a complex cluster, in that nodes may be filtered by any constraints or requirements specified (e.g., nodes with SSD drives, or higher CPU, memory, or disk requirements, etc.).

    The following example defines a label, ha-router=geo-us-west, for router instances that are servicing traffic in the US west geography:

    $ oc label nodes openshift-node-{5,6,7,8,9} "ha-router=geo-us-west"
  2. OpenShift Enterprise’s ipfailover internally uses Keepalived, so ensure that multicast is enabled on the nodes labeled above and that the nodes can accept network traffic for 224.0.0.18 (the VRRP multicast IP address). Depending on your environment’s multicast configuration, you may need to add an iptables rule to each of the above labeled nodes. If you do need to add the iptables rules, please also ensure that the rules persist after a system restart:

    $ for node in openshift-node-{5,6,7,8,9}; do   ssh $node <<EOF
    
    export interface=${interface:-"eth0"}
    echo "Check multicast enabled ... ";
    ip addr show $interface | grep -i MULTICAST
    
    echo "Check multicast groups ... "
    ip maddr show $interface | grep 224.0.0 | grep $interface
    
    echo "Optionally, add accept rule and persist it ... "
    sudo /sbin/iptables -I INPUT -i $interface -d 224.0.0.18/32 -j ACCEPT
    
    echo "Please ensure the above rule is added on system restarts."
    
    EOF
    done;
  3. Depending on your environment policies, you can either reuse the router service account created previously or create a new ipfailover service account.

    Either ensure that the router service account exists as described in Deploying a Router, or create a new ipfailover service account. The example below creates a new service account with the name ipfailover in the default namespace:

    $ oc create serviceaccount ipfailover -n default
  4. Add the ipfailover service account in the default namespace to the privileged SCC:

    $ oadm policy add-scc-to-user privileged system:serviceaccount:default:ipfailover
  5. Start the router with at least two replicas on nodes matching the labels used in the first step. The following example runs three instances using the ipfailover service account:

    $ oadm router ha-router-us-west --replicas=3 \
        --selector="ha-router=geo-us-west" \
        --labels="ha-router=geo-us-west" \
        --service-account=ipfailover

    The above command runs fewer router replicas than available nodes, so that, in the chance of node failures, Kubernetes can still ensure three available instances until the number of available nodes labeled ha-router=geo-us-west is below three. Additionally, the router uses the host network as well as ports 80 and 443, so fewer replicas are run to ensure a higher Service Level Availability (SLA). If there are no constraints on the service being set up for failover, it is possible to target the service to run on one or more, or even all, of the labeled nodes.

  6. Finally, configure the VIPs and failover for the nodes labeled with ha-router=geo-us-west in the first step. Ensure that the number of replicas matches the number of nodes and that they satisfy the label set up in the first step. The name of the ipfailover configuration (ipf-ha-router-us-west in the example below) should be different from the name of the router configuration (ha-router-us-west), as both the router and ipfailover create deployment configurations with those names. Specify the VIP addresses and the port number that ipfailover should monitor on the desired instances:

    $ oadm ipfailover ipf-ha-router-us-west \
        --replicas=5 --watch-port=80 \
        --selector="ha-router=geo-us-west" \
        --virtual-ips="10.245.2.101-105" \
        --iptables-chain="INPUT" \
        --service-account=ipfailover --create

For details on how to dynamically update the virtual IP addresses for high availability, see Dynamically Updating Virtual IPs for a Highly-available Service.
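Configuring a Custom Check Script

Instead of relying on the default check against the watched port, the ipfailover pods can run a check script of your own. Point the OPENSHIFT_HA_CHECK_SCRIPT environment variable at a script inside the pod; the script should exit 0 while the service is to be considered available. For example, save a script such as the following as mycheckscript.sh:

#!/bin/bash
# Whatever tests are needed
# e.g., send request and verify response
exit 0

  1. Create the ConfigMap:

    $ oc create configmap mycustomcheck --from-file=mycheckscript.sh

  2. There are two approaches to adding the script to the pod: use oc commands or edit the deployment configuration. The example below assumes an ipfailover deployment configuration named ipf-ha-router.

    1. Using oc commands:

      $ oc set env dc/ipf-ha-router \
          OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh
      $ oc volume dc/ipf-ha-router --add --overwrite \
          --name=config-volume \
          --mount-path=/etc/keepalive \
          --source='{"configMap": { "name": "mycustomcheck"}}'

    2. Editing the ipf-ha-router deployment configuration:

      1. Use oc edit dc ipf-ha-router to edit the deployment configuration with a text editor and make the equivalent changes: set the OPENSHIFT_HA_CHECK_SCRIPT environment variable and mount the mycustomcheck ConfigMap at /etc/keepalive, as in the oc commands above.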

Configuring a Highly-available Network Service

The following steps describe how to set up a highly-available IP-based network service with IP failover:

  1. Label the nodes for the service. This step can be optional if you run the service on any of the nodes in your Kubernetes cluster and use VIPs that can float within those nodes. This process may already exist within a complex cluster, in that the nodes may be filtered by any constraints or requirements specified (e.g., nodes with SSD drives, or higher CPU, memory, or disk requirements, etc.).

    The following example labels a highly-available cache service that is listening on port 9736 as ha-cache=geo:

    $ oc label nodes openshift-node-{6,3,7,9} "ha-cache=geo"
  2. OpenShift Enterprise’s ipfailover internally uses Keepalived, so ensure that multicast is enabled on the nodes labeled above and that the nodes can accept network traffic for 224.0.0.18 (the VRRP multicast IP address). Depending on your environment’s multicast configuration, you may need to add an iptables rule to each of the above labeled nodes. If you do need to add the iptables rules, please also ensure that the rules persist after a system restart:

    $ for node in openshift-node-{6,3,7,9}; do   ssh $node <<EOF
    export interface=${interface:-"eth0"}
    echo "Check multicast enabled ... ";
    ip addr show $interface | grep -i MULTICAST
    
    echo "Check multicast groups ... "
    ip maddr show $interface | grep 224.0.0 | grep $interface
    
    echo "Optionally, add accept rule and persist it ... "
    sudo /sbin/iptables -I INPUT -i $interface -d 224.0.0.18/32 -j ACCEPT
    
    echo "Please ensure the above rule is added on system restarts."
    
    EOF
    done;
  3. Create a new ipfailover service account in the default namespace:

    $ oc create serviceaccount ipfailover -n default
  4. Add the ipfailover service account in the default namespace to the privileged SCC:

    $ oadm policy add-scc-to-user privileged system:serviceaccount:default:ipfailover
  5. Run a geo-cache service with two or more replicas. An example configuration for running a geo-cache service is provided here.

    Be sure to replace the myimages/geo-cache container image referenced in the file with your intended image. Also, change the number of replicas to the desired amount and ensure the label matches the one used in the first step.

    $ oc create -n <namespace> -f ./examples/geo-cache.json
  6. Finally, configure the VIPs and failover for the nodes labeled with ha-cache=geo in the first step. Ensure that the number of replicas matches the number of nodes and that they satisfy the label set up in the first step. Specify the VIP addresses and the port number that ipfailover should monitor for the desired instances:

    $ oadm ipfailover ipf-ha-geo-cache \
        --replicas=4 --watch-port=9736 \
        --selector="ha-cache=geo" \
        --virtual-ips=10.245.2.101-104 \
        --vrrp-id-offset=10 \
        --service-account=ipfailover --create

Using the above example, you can now use the VIPs 10.245.2.101 through 10.245.2.104 to send traffic to the geo-cache service. If a particular geo-cache instance is "unreachable", perhaps due to a node failure, Keepalived ensures that the VIPs automatically float amongst the group of nodes labeled "ha-cache=geo" and the service is still reachable via the virtual IP addresses.
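Since ipfailover is watching port 9736, a quick reachability test against one of the VIPs can be run from any host that can route to it. A minimal sketch, assuming the geo-cache service accepts plain TCP connections:

$ timeout 3 bash -c '</dev/tcp/10.245.2.101/9736' && echo "geo-cache VIP is reachable"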

Dynamically Updating Virtual IPs for a Highly-available Service

The default deployment strategy for the IP failover service is to recreate the deployment. In order to dynamically update the virtual IPs for a highly available routing service with minimal or no downtime, you must:

  • update the IP failover service deployment configuration to use a rolling update strategy, and

  • update the OPENSHIFT_HA_VIRTUAL_IPS environment variable with the updated list or set of virtual IP addresses.

The following example shows how to dynamically update the deployment strategy and the virtual IP addresses:

  1. Consider an IP failover configuration that was created using the following:

    $ oadm ipfailover ipf-ha-router-us-west \
        --replicas=5 --watch-port=80 \
        --selector="ha-router=geo-us-west" \
        --virtual-ips="10.245.2.101-105" \
        --service-account=ipfailover --create
  2. Edit the deployment configuration:

    $ oc edit dc/ipf-ha-router-us-west
  3. Update the spec.strategy.type field from Recreate to Rolling:

    spec:
      replicas: 5
      selector:
        ha-router: geo-us-west
      strategy:
        recreateParams:
          timeoutSeconds: 600
        resources: {}
        type: Rolling (1)
    1 Set to Rolling.
  4. Update the OPENSHIFT_HA_VIRTUAL_IPS environment variable to contain the additional virtual IP addresses:

    - name: OPENSHIFT_HA_VIRTUAL_IPS
      value: 10.245.2.101-105,10.245.2.110,10.245.2.201-205 (1)
    1 10.245.2.110,10.245.2.201-205 have been added to the list.
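The OPENSHIFT_HA_VIRTUAL_IPS variable can also be updated without opening an editor by using the oc set env command. A sketch with the values shown above:

$ oc set env dc/ipf-ha-router-us-west \
    OPENSHIFT_HA_VIRTUAL_IPS="10.245.2.101-105,10.245.2.110,10.245.2.201-205"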

Multiple Highly Available Services In a Network

The IPFailover service uses VRRP (Virtual Router Redundancy Protocol) to communicate with its peers. By default, the generated Keepalived configuration uses a VRRP ID offset starting from 0 (and sequentially increasing) to denote the peers in a network. If you wish to run multiple highly available services in the same network (have multiple IP Failover deployments), you need to ensure that there is no overlap of the VRRP IDs by using a different starting offset for your IPFailover deployment using the --vrrp-id-offset=<n> parameter.

$ oadm ipfailover ipf-ha-router-us-west \
    --replicas=5 --watch-port=80 \
    --selector="ha-router=geo-us-west" \
    --virtual-ips="10.245.2.101-105" \
    --credentials=/etc/origin/master/openshift-router.kubeconfig \
    --service-account=ipfailover --create

$ # Second IPFailover service with VRRP ids starting at 10.
$ oadm ipfailover ipf-service-redux \
    --replicas=2 --watch-port=6379  --vrrp-id-offset=10 \
    --selector="ha-service=redux" \
    --virtual-ips="10.245.2.199" \
    --credentials=/etc/origin/master/openshift-router.kubeconfig \
    --service-account=ipfailover --create