Advanced Deployment Strategies

Blue-Green Deployment

Blue-green deployments involve running two versions of an application at the same time and moving production traffic from the old version to the new version. There are several ways to implement a blue-green deployment in OpenShift Container Platform.

When to Use a Blue-Green Deployment

Use a blue-green deployment when you want to test a new version of your application in a production environment before moving traffic to it.

Blue-green deployments make switching between two different versions of your application easy. However, since many applications depend on persistent data, you will need to have an application that supports N-1 compatibility if you share a database, or implement a live data migration between your database, store, or disk if you choose to create two copies of your data layer.

Blue-Green Deployment Example

In order to maintain control over two distinct groups of instances (old and new versions of the code), the blue-green deployment is best represented with multiple deployment configurations.

Using a Route and Two Services

A route points to a service, and can be changed to point to a different service at any time. As a developer, test the new version of your code by connecting to the new service before your production traffic is routed to it. Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications.

  1. Create two copies of the example application:

    $ oc new-app openshift/deployment-example:v1 --name=example-green
    $ oc new-app openshift/deployment-example:v2 --name=example-blue

    This will create two independent application components: one running the v1 image under the example-green service, and one using the v2 image under the example-blue service.

  2. Create a route that points to the old service:

    $ oc expose svc/example-green --name=bluegreen-example
  3. Browse to the application at bluegreen-example.<project>.<router_domain> to verify you see the v1 image.

    On versions of OpenShift Container Platform older than v3.0.1, this command will generate a route at example-green.<project>.<router_domain>, not the above location.

  4. Edit the route and change the service name to example-blue (a sketch of the resulting route follows this procedure):

    $ oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-blue"}}}'
  5. In your browser, refresh the page until you see the v2 image.
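
For reference, a minimal sketch of what the route might look like after the patch in step 4; only the fields relevant to the switch are shown, and the host value is illustrative:

    apiVersion: v1
    kind: Route
    metadata:
      name: bluegreen-example
    spec:
      host: bluegreen-example.<project>.<router_domain>
      to:
        kind: Service
        name: example-blue   # was example-green; changing this field moves production traffic

Switching back to the green version is the same one-field change in the opposite direction.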

A/B Deployment

A/B deployments generally imply running two (or more) versions of the application code or application configuration at the same time for testing or experimentation purposes.

The simplest form of an A/B deployment is to divide production traffic between two or more distinct shards, where each shard is a single group of instances with homogeneous configuration and code.

More complicated A/B deployments may involve a specialized proxy or load balancer that assigns traffic to specific shards based on information about the user or application (for example, all "test" users are sent to the B shard, while regular users are sent to the A shard).

A/B deployments can be considered similar to A/B testing, although an A/B deployment implies multiple versions of code and configuration, whereas A/B testing often uses one code base with application-specific checks.

When to Use an A/B Deployment

  • When you want to test multiple versions of code or configuration, but are not planning to roll one out in preference to the other.

  • When you want to have different configuration in different regions.

An A/B deployment groups different configuration and code — multiple shards — together under a single logical endpoint. Generally, these deployments, if they access persistent data, should properly deal with N-1 compatibility (the more shards you have, the more possible versions you have running). Use this pattern when you need separate internal configuration and code, but end users should not be aware of the changes.

A/B Deployment Example

All A/B deployments are composite deployment types consisting of multiple deployment configurations.

One Service, Multiple Deployment Configurations

OpenShift Container Platform, through labels and deployment configurations, supports multiple simultaneous shards being exposed through the same service. To the consuming user, the shards are invisible. An example of the simplest possible sharding is described below:

  1. Create the first shard of the application based on the example deployment images:

    $ oc new-app openshift/deployment-example --name=ab-example-a --labels=ab-example=true SUBTITLE="shard A"
  2. Edit the newly created shard to set a label ab-example=true that will be common to all shards:

    $ oc edit dc/ab-example-a

    In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-a label. Save and exit the editor.

  3. Trigger a re-deployment of the first shard to pick up the new labels:

    $ oc deploy ab-example-a --latest
  4. Create a service that uses the common label:

    $ oc expose dc/ab-example-a --name=ab-example --selector=ab-example=true

    If you have the router installed, make the application available via a route (or use the service IP directly):

    $ oc expose svc/ab-example

    Browse to the application at ab-example.<project>.<router_domain> to verify you see the v1 image.

  5. Create a second shard based on the same source image as the first shard, but with a different tagged version, and set a unique value:

    $ oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
  6. Edit the newly created shard to set a label ab-example=true that will be common to all shards:

    $ oc edit dc/ab-example-b

    In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.

  7. Trigger a re-deployment of the second shard to pick up the new labels:

    $ oc deploy ab-example-b --latest
  8. At this point, both sets of pods are being served under the route. However, since both browsers (by leaving a connection open) and the router (by default, through a cookie) will attempt to preserve your connection to a back-end server, you may not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:

    $ oc scale dc/ab-example-a --replicas=0

    Refreshing your browser should show v2 and shard B (in red).

    $ oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0

    Refreshing your browser should show v1 and shard A (in blue).

    If you trigger a deployment on either shard, only the pods in that shard will be affected. You can easily trigger a deployment by changing the SUBTITLE environment variable in either deployment configuration (oc edit dc/ab-example-a or oc edit dc/ab-example-b). You can add additional shards by repeating steps 5-7.

    These steps will be simplified in future versions of OpenShift Container Platform.
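
For reference, a rough sketch of how the resources from this procedure fit together: the service created in step 4 selects on the shared label, and each shard's deployment configuration carries that label after the edits in steps 2 and 6. Only the selector and label fields are shown here; other fields are omitted:

    # Service from step 4: selects pods from every shard via the shared label
    apiVersion: v1
    kind: Service
    metadata:
      name: ab-example
    spec:
      selector:
        ab-example: "true"
    ---
    # Shard A's deployment configuration after the label edit in step 2;
    # shard B carries the same ab-example label next to deploymentconfig=ab-example-b
    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: ab-example-a
    spec:
      selector:
        deploymentconfig: ab-example-a
        ab-example: "true"
      template:
        metadata:
          labels:
            deploymentconfig: ab-example-a
            ab-example: "true"

Because the service selects only on ab-example=true, any new shard that carries the label is picked up automatically.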

Proxy Shard / Traffic Splitter

In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage-based traffic. That combines well with a proxy shard, which forwards or splits the traffic it receives to a separate service or application running elsewhere.

In the simplest configuration, the proxy would forward requests unchanged. In more complex setups, you can duplicate the incoming requests and send them both to a separate cluster and to a local instance of the application, then compare the results. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes.

While an implementation is beyond the scope of this example, any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities.
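
As a rough illustration of percentage-based traffic through relative scale, reusing the shard names from the A/B example above (the replica counts are arbitrary): with three replicas of shard A and one of shard B, roughly 75% of requests land on shard A, because the service balances across all pods that match its selector.

    # Approximate 75/25 split: 3 of the 4 pods behind the shared service belong to shard A
    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: ab-example-a
    spec:
      replicas: 3
    ---
    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: ab-example-b
    spec:
      replicas: 1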

N-1 Compatibility

Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read by the old code. This is sometimes called schema evolution and is a complex problem.

For some applications, the period of time that old code and new code are running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional.

One way to validate N-1 compatibility is to use an A/B deployment. Run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment.

Graceful Termination

OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit.

On shutdown, OpenShift Container Platform will send a TERM signal to the processes in the container. Application code, on receiving SIGTERM, should stop accepting new connections. This will ensure that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed (or gracefully terminate individual connections at the next opportunity) before exiting.

After the graceful termination period expires, a process that has not exited will be sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and may be customized per application as necessary.
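
For reference, a minimal sketch of where this setting lives in a deployment configuration's pod template; the 60-second value and the names are only illustrative:

    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: example
    spec:
      template:
        spec:
          # Time the platform waits after sending SIGTERM before sending SIGKILL
          terminationGracePeriodSeconds: 60
          containers:
          - name: example
            image: openshift/deployment-example:v1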