This is a cache of https://docs.openshift.com/container-platform/4.2/applications/deployments/what-deployments-are.html. It is a snapshot of the page at 2024-11-23T02:28:33.892+0000.
Understanding Deployments and DeploymentConfigs - Deployments | Applications | OpenShift Container Platform 4.2

Deployments and DeploymentConfigs in OpenShift Container Platform are API objects that provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:

  • A DeploymentConfig or a Deployment, either of which describes the desired state of a particular component of the application as a Pod template.

  • DeploymentConfigs involve one or more ReplicationControllers, which contain a point-in-time record of the state of a DeploymentConfig as a Pod template. Similarly, Deployments involve one or more ReplicaSets, a successor of ReplicationControllers.

  • One or more Pods, which represent an instance of a particular version of an application.

Building blocks of a deployment

DeploymentConfigs and Deployments are enabled by the use of native Kubernetes API objects ReplicationControllers and ReplicaSets, respectively, as their building blocks.

Users do not have to manipulate ReplicationControllers, ReplicaSets, or Pods owned by DeploymentConfigs or Deployments. The deployment systems ensure that changes are propagated appropriately.

If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a Custom deployment strategy.

The following sections provide further details on these objects.

ReplicationControllers

A ReplicationController ensures that a specified number of replicas of a Pod are running at all times. If Pods exit or are deleted, the ReplicationController acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount.

A ReplicationController configuration consists of:

  • The number of replicas desired (which can be adjusted at runtime).

  • A Pod definition to use when creating a replicated Pod.

  • A selector for identifying managed Pods.

A selector is a set of labels assigned to the Pods that are managed by the ReplicationController. These labels are included in the Pod definition that the ReplicationController instantiates. The ReplicationController uses the selector to determine how many instances of the Pod are already running in order to adjust as needed.

The ReplicationController does not perform auto-scaling based on load or traffic, as it does not track either. Instead, its replica count must be adjusted by an external auto-scaler.
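For example, an administrator or an external auto-scaler can adjust the replica count directly with the oc CLI. The following commands are a sketch; the frontend-1 name matches the example ReplicationController definition later in this section:

$ # Manually set the desired replica count on the ReplicationController
$ oc scale rc frontend-1 --replicas=3

$ # Verify that the ReplicationController converged on the new count
$ oc get rc frontend-1

The ReplicationController then creates or deletes Pods as needed to match the new count.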

The following is an example definition of a ReplicationController:

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1  (1)
  selector:    (2)
    name: frontend
  template:    (3)
    metadata:
      labels:  (4)
        name: frontend (5)
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1 The number of copies of the Pod to run.
2 The label selector of the Pod to run.
3 A template for the Pod the controller creates.
4 Labels on the Pod should include those from the label selector.
5 The maximum name length after expanding any parameters is 63 characters.

ReplicaSets

Similar to a ReplicationController, a ReplicaSet is a native Kubernetes API object that ensures a specified number of Pod replicas are running at any given time. The difference between a ReplicaSet and a ReplicationController is that a ReplicaSet supports set-based selector requirements, whereas a ReplicationController only supports equality-based selector requirements.

Only use ReplicaSets if you require custom update orchestration or do not require updates at all. Otherwise, use Deployments. ReplicaSets can be used independently, but are used by Deployments to orchestrate Pod creation, deletion, and updates. Deployments manage their ReplicaSets automatically and provide declarative updates to Pods, so you do not have to manually manage the ReplicaSets that they create.

The following is an example ReplicaSet definition:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector: (1)
    matchLabels: (2)
      tier: frontend
    matchExpressions: (3)
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1 A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined (ANDed).
2 Equality-based selector to specify resources with labels that match the selector.
3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend.
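The practical difference shows up in label queries. Equality-based selectors can only match exact key=value pairs, while set-based selectors can match a key against a set of values. The following commands are illustrative, using the tier label from the example above (the staging value is hypothetical):

$ # Equality-based: match Pods whose tier label equals frontend
$ oc get pods -l tier=frontend

$ # Set-based: match Pods whose tier label is any of the listed values
$ oc get pods -l 'tier in (frontend, staging)'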

DeploymentConfigs

Building on ReplicationControllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfigs. In the simplest case, a DeploymentConfig creates a new ReplicationController and lets it start up Pods.

However, OpenShift Container Platform deployments from DeploymentConfigs also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the ReplicationController.

The DeploymentConfig deployment system provides the following capabilities:

  • A DeploymentConfig, which is a template for running applications.

  • Triggers that drive automated deployments in response to events.

  • User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a Pod commonly referred to as the deployment process.

  • A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment.

  • Versioning of your application in order to support rollbacks either manually or automatically in case of deployment failure.

  • Manual replication scaling and autoscaling.

When you create a DeploymentConfig, a ReplicationController is created representing the DeploymentConfig’s Pod template. If the DeploymentConfig changes, a new ReplicationController is created with the latest Pod template, and a deployment process runs to scale down the old ReplicationController and scale up the new one.

Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.
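Graceful shutdown depends on the application handling the TERM signal and on the Pod being given enough time to drain existing connections. As a sketch, the grace period can be extended in the Pod template; terminationGracePeriodSeconds is a standard Kubernetes Pod spec field (the 60-second value shown is illustrative; the default is 30 seconds):

spec:
  # Time between the TERM signal and a forced KILL of the container
  terminationGracePeriodSeconds: 60
  containers:
  - image: openshift/hello-openshift
    name: helloworld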

The OpenShift Container Platform DeploymentConfig object defines the following details:

  1. The elements of a ReplicationController definition.

  2. Triggers for creating a new deployment automatically.

  3. The strategy for transitioning between deployments.

  4. Lifecycle hooks.

Each time a deployment is triggered, whether manually or automatically, a deployer Pod manages the deployment (including scaling down the old ReplicationController, scaling up the new one, and running hooks). The deployer Pod remains for an indefinite amount of time after it completes the deployment in order to retain its logs of the deployment. When a deployment is superseded by another, the previous ReplicationController is retained to enable easy rollback if needed.
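Because the deployer Pod's logs are retained and the previous ReplicationController is kept, a rollout can be inspected and reverted from the CLI. As a sketch, using the frontend DeploymentConfig defined below:

$ # Follow the logs of the latest deployment process for the DeploymentConfig
$ oc logs -f dc/frontend

$ # Roll back to the last successfully deployed ReplicationController
$ oc rollout undo dc/frontend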

Example DeploymentConfig definition
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    name: frontend
  template: { ... }
  triggers:
  - type: ConfigChange (1)
  - imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: hello-openshift:latest
    type: ImageChange  (2)
  strategy:
    type: Rolling      (3)
1 A ConfigChange trigger causes a new deployment to be created any time the ReplicationController template changes.
2 An ImageChange trigger causes a new deployment to be created each time a new version of the backing image is available in the named imagestream.
3 The default Rolling strategy makes a downtime-free transition between deployments.
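In addition to the ConfigChange and ImageChange triggers shown above, a deployment process can be started and observed manually; for example, for the frontend DeploymentConfig:

$ # Start a new deployment process for the DeploymentConfig
$ oc rollout latest dc/frontend

$ # Watch the progress of the latest rollout until it completes
$ oc rollout status dc/frontend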

Deployments

Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployments. Deployments serve as a descendant of the OpenShift Container Platform-specific DeploymentConfigs.

Like DeploymentConfigs, Deployments describe the desired state of a particular component of an application as a Pod template. Deployments create ReplicaSets, which orchestrate Pod lifecycles.

For example, the following Deployment definition creates a ReplicaSet to bring up one hello-openshift Pod:

Deployment definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
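A minimal sketch of creating this Deployment and inspecting the objects it owns, assuming the definition above is saved as hello-openshift.yaml:

$ # Create the Deployment from the definition above
$ oc apply -f hello-openshift.yaml

$ # The Deployment creates a ReplicaSet, which in turn creates the Pod
$ oc get deployment,replicaset,pods -l app=hello-openshift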

Comparing Deployments and DeploymentConfigs

Both Kubernetes Deployments and OpenShift Container Platform-provided DeploymentConfigs are supported in OpenShift Container Platform; however, it is recommended to use Deployments unless you need a specific feature or behavior provided by DeploymentConfigs.

The following sections go into more detail on the differences between the two object types to further help you decide which type to use.

Design

One important difference between Deployments and DeploymentConfigs is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfigs prefer consistency, whereas Deployments take availability over consistency.

For DeploymentConfigs, if a node running a deployer Pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding Pod. This means that you cannot delete the Pod to unstick the rollout, as the kubelet is responsible for deleting the associated Pod.

However, Deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same Deployment at the same time, but this issue will be reconciled shortly after the failure occurs.

DeploymentConfig-specific features

Automatic rollbacks

Currently, Deployments do not support automatically rolling back to the last successfully deployed ReplicaSet in case of a failure.
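A rollback can still be performed manually; for example, for the hello-openshift Deployment used earlier in this section:

$ # Manually roll back to the previous ReplicaSet
$ oc rollout undo deployment/hello-openshift

$ # Optionally roll back to a specific revision
$ oc rollout undo deployment/hello-openshift --to-revision=1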

Triggers

Deployments have an implicit ConfigChange trigger in that every change in the Pod template of a Deployment automatically triggers a new rollout. If you do not want new rollouts on Pod template changes, pause the Deployment:

$ oc rollout pause deployments/<name>
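While paused, any number of Pod template changes can be batched without triggering a rollout. To start rollouts again once your changes are in place, resume the Deployment:

$ oc rollout resume deployments/<name>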
Lifecycle hooks

Deployments do not yet support any lifecycle hooks.

Custom strategies

Deployments do not support user-specified custom deployment strategies yet.

Deployment-specific features

Rollover

The deployment process for Deployments is driven by a controller loop, in contrast to DeploymentConfigs, which use deployer Pods for every new rollout. This means that a Deployment can have as many active ReplicaSets as needed, and eventually the deployment controller will scale down all old ReplicaSets and scale up the newest one.

DeploymentConfigs can have at most one deployer Pod running; otherwise, multiple deployers would conflict while trying to scale up what they each think should be the newest ReplicationController. Because of this, only two ReplicationControllers can be active at any point in time. Ultimately, this translates to faster rollouts for Deployments.

Proportional scaling

Because the deployment controller is the sole source of truth for the sizes of new and old ReplicaSets owned by a Deployment, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each ReplicaSet.
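For example (the numbers here are illustrative), if a Deployment is scaled from 10 to 15 replicas while a rollout is in progress and its old and new ReplicaSets currently hold 8 and 2 replicas, the 5 additional replicas are split roughly 4 and 1, proportional to the existing sizes. Scaling mid-rollout is just a normal scale operation:

$ # Scale the Deployment while a rollout is still in progress
$ oc scale deployment/hello-openshift --replicas=15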

DeploymentConfigs cannot be scaled when a rollout is ongoing, because the DeploymentConfig controller would conflict with the deployer process over the size of the new ReplicationController.

Pausing mid-rollout

Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer Pods currently, so if you try to pause a DeploymentConfig in the middle of a rollout, the deployer process will not be affected and will continue until it finishes.