Deployment Strategies - Deployments | Developer Guide | OpenShift Container Platform 3.4

What Are Deployment Strategies?

A deployment strategy determines the deployment process, and is defined by the deployment configuration. Each application has different requirements for availability (and other considerations) during deployments. OpenShift Container Platform provides strategies to support a variety of deployment scenarios.

A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment configuration retries running the pod until it times out. The default timeout is 10m, a value set in TimeoutSeconds in dc.spec.strategy.*params.
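
A readiness check is defined as a readiness probe on a container in the pod template. The following is a minimal sketch, assuming the application exposes a health endpoint at /healthz on port 8080 (both hypothetical):

containers:
  - name: myapp
    image: organization/myapp
    readinessProbe:            # the pod is not marked ready until this succeeds
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5   # give the application time to start
      timeoutSeconds: 1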

The Rolling strategy is the default strategy used if no strategy is specified on a deployment configuration.
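
To check which strategy a deployment configuration uses, you can inspect its spec; for example (the deployment configuration name here is hypothetical):

$ oc get dc/frontend -o jsonpath='{.spec.strategy.type}'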

Rolling Strategy

A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.

Canary Deployments

All rolling deployments in OpenShift Container Platform are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the deployment configuration will be automatically rolled back. The readiness check is part of the application code, and may be as sophisticated as necessary to ensure the new instance is ready to be used. If you need to implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy.

When to Use a Rolling Deployment

  • When you want no downtime during an application update.

  • When your application supports having old code and new code running at the same time.

A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility: data stored by the new version can be read and handled (or gracefully ignored) by the old version of the code. This can take many forms: data stored on disk, in a database, in a temporary cache, or as part of a user’s browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it.

The following is an example of the Rolling strategy:

strategy:
  type: Rolling
  rollingParams:
    updatePeriodSeconds: 1 (1)
    intervalSeconds: 1 (2)
    timeoutSeconds: 120 (3)
    maxSurge: "20%" (4)
    maxUnavailable: "10%" (5)
    pre: {} (6)
    post: {}
1 The time to wait between individual pod updates. If unspecified, this value defaults to 1.
2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1.
3 The time to wait for a scaling event before giving up. Optional; the default is 600. Here, giving up means automatically rolling back to the previous complete deployment.
4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure.
5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure.
6 pre and post are both lifecycle hooks.

The Rolling strategy will:

  1. Execute any pre lifecycle hook.

  2. Scale up the new replication controller based on the surge count.

  3. Scale down the old replication controller based on the max unavailable count.

  4. Repeat this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero.

  5. Execute any post lifecycle hook.

When scaling down, the Rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure.

The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.

These parameters allow the deployment to be tuned for availability and speed. For example:

  • maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.

  • maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).

  • maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.

Generally, if you want fast rollouts, use maxSurge. If you need to take into account resource quota and can accept partial unavailability, use maxUnavailable.
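
Both parameters can be adjusted on an existing deployment configuration with oc patch; a sketch, assuming a deployment configuration named frontend:

$ oc patch dc/frontend \
    -p '{"spec":{"strategy":{"rollingParams":{"maxUnavailable":0,"maxSurge":"20%"}}}}'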

Rolling example

Rolling deployments are the default in OpenShift Container Platform. To see a rolling update, follow these steps:

  1. Create an application based on the example deployment images found in Docker Hub:

    $ oc new-app openshift/deployment-example

    If you have the router installed, make the application available via a route (or use the service IP directly):

    $ oc expose svc/deployment-example

    Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.

  2. Scale the deployment configuration up to three replicas:

    $ oc scale dc/deployment-example --replicas=3
  3. Trigger a new deployment automatically by tagging a new version of the example as the latest tag:

    $ oc tag deployment-example:v2 deployment-example:latest
  4. In your browser, refresh the page until you see the v2 image.

  5. If you are using the CLI, the following command will show you how many pods are on version 1 and how many are on version 2. In the web console, you should see the pods slowly being added to v2 and removed from v1.

    $ oc describe dc deployment-example

During the deployment process, the new replication controller is incrementally scaled up. Once the new pods are marked as ready (by passing their readiness check), the deployment process will continue. If the pods do not become ready, the process will abort, and the deployment configuration will be rolled back to its previous version.
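
The rollback described above happens automatically during a failed deployment. If a completed deployment later proves bad, you can also roll back manually; for example:

$ oc rollback deployment-example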

Recreate Strategy

The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.

The following is an example of the Recreate strategy:

strategy:
  type: Recreate
  recreateParams: (1)
    pre: {} (2)
    mid: {}
    post: {}
1 recreateParams are optional.
2 pre, mid, and post are lifecycle hooks.

The Recreate strategy will:

  1. Execute any pre lifecycle hook.

  2. Scale down the previous deployment to zero.

  3. Execute any mid lifecycle hook.

  4. Scale up the new deployment.

  5. Execute any post lifecycle hook.

During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.

When to Use a Recreate Deployment

  • When you must run migrations or other data transformations before your new code starts (a mid lifecycle hook can run these; see the sketch after this list).

  • When you do not support having new and old versions of your application code running at the same time.

  • When you want to use a RWO volume, which cannot be shared between multiple replicas.

A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.
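
Because the mid lifecycle hook runs after the old deployment is scaled down to zero and before the new deployment is scaled up, it is a natural place for migrations. A minimal sketch, using a hypothetical migration command available in the application image:

strategy:
  type: Recreate
  recreateParams:
    mid:
      failurePolicy: Abort                  # fail the deployment if the migration fails
      execNewPod:
        containerName: myapp                # must match a container name in the pod template
        command: [ "/usr/bin/migrate-db" ]  # hypothetical migration command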

Custom Strategy

The Custom strategy allows you to provide your own deployment behavior.

The following is an example of the Custom strategy:

strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
      - name: ENV_1
        value: VALUE_1

In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.

Additionally, OpenShift Container Platform provides the following environment variables to the deployment process:

Environment Variable             Description

OPENSHIFT_DEPLOYMENT_NAME        The name of the new deployment (a replication controller).

OPENSHIFT_DEPLOYMENT_NAMESPACE   The namespace of the new deployment.

The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
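
For example, a custom deployer image might use these variables to locate and scale the new replication controller. A minimal shell sketch, not a complete deployer, assuming the oc client is available in the strategy image:

#!/bin/sh
set -e
# Scale the new replication controller up using the variables
# OpenShift Container Platform provides to the strategy process.
oc scale rc/$OPENSHIFT_DEPLOYMENT_NAME --replicas=2 \
  -n $OPENSHIFT_DEPLOYMENT_NAMESPACE      # replica count shown is only an example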

Learn more about advanced deployment strategies.

Alternatively, use customParams to inject custom deployment logic into the existing deployment strategies. Provide custom shell script logic and call the openshift-deploy binary. You do not have to supply a custom deployer container image; the default OpenShift Container Platform deployer image is used instead:

strategy:
  type: Rolling
  customParams:
    command:
    - /bin/sh
    - -c
    - |
      set -e
      openshift-deploy --until=50%
      echo Halfway there
      openshift-deploy
      echo Complete

This will result in the following deployment:

Started deployment #2
--> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-2 up to 1
--> Reached 50% (currently 50%)
Halfway there
--> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
    Scaling custom-deployment-1 down to 1
    Scaling custom-deployment-2 up to 2
    Scaling custom-deployment-1 down to 0
--> Success
Complete

If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.
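
For example, a strategy script can read the mounted service account token and present it as a bearer token; a minimal sketch using the standard Kubernetes service account mount paths and service environment variables:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version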

Lifecycle Hooks

The Recreate and Rolling strategies support lifecycle hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy.

The following is an example of a pre lifecycle hook:

pre:
  failurePolicy: Abort
  execNewPod: {} (1)
1 execNewPod is a pod-based lifecycle hook.

Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:

Abort

The deployment process will be considered a failure if the hook fails.

Retry

The hook execution should be retried until it succeeds.

Ignore

Any hook failure should be ignored and the deployment should proceed.

Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field.

Pod-based Lifecycle Hook

Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a deployment configuration.

The following simplified example deployment configuration uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: helloworld
          image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld (1)
          command: [ "/usr/bin/command", "arg1", "arg2" ] (2)
          env: (3)
            - name: CUSTOM_VAR1
              value: custom_value1
          volumes:
            - data (4)
1 The helloworld name refers to spec.template.spec.containers[0].name.
2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
3 env is an optional set of environment variables for the hook container.
4 volumes is an optional set of volume references for the hook container.

In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod will have the following properties:

  • The hook command will be /usr/bin/command arg1 arg2.

  • The hook container will have the CUSTOM_VAR1=custom_value1 environment variable.

  • The hook failure policy is Abort, meaning the deployment process will fail if the hook fails.

  • The hook pod will inherit the data volume from the deployment configuration pod.

Using the Command Line

The oc set deployment-hook command can be used to set the deployment hook for a deployment configuration. For the example above, you can set the pre-deployment hook with the following command:

$ oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
  -v data --failure-policy=abort -- /usr/bin/command arg1 arg2
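
You can then confirm that the hook was recorded in the deployment configuration’s strategy, for example with:

$ oc describe dc frontend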