You can start a new deployment process manually using the web console, or from the CLI:
$ oc rollout latest dc/<name>
If a deployment process is already in progress, the command will display a message and a new replication controller will not be deployed.
To get basic information about all the available revisions of your application:
$ oc rollout history dc/<name>
This will show details about all recently created replication controllers for the provided deployment configuration, including any currently running deployment process.
You can view details specific to a revision by using the --revision flag:
$ oc rollout history dc/<name> --revision=1
For more detailed information about a deployment configuration and its latest revision:
$ oc describe dc <name>
The web console shows deployments in the Browse tab.
If the current revision of your deployment configuration failed to deploy, you can restart the deployment process with:
$ oc rollout retry dc/<name>
If the latest revision of the deployment configuration was deployed successfully, the command will display a message and the deployment process will not be retried.
Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller will have the same configuration it had when it failed.
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
To roll back to the last successfully deployed revision of your configuration:
$ oc rollout undo dc/<name>
The deployment configuration’s template will be reverted to match the deployment revision specified in the undo command, and a new replication controller will be started. If no revision is specified with --to-revision, then the last successfully deployed revision will be used.
Image change triggers on the deployment configuration are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers:
$ oc set triggers dc/<name> --auto
Deployment configurations also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system and it is up to users to fix their configurations.
You can add a command to a container, which modifies the container’s startup behavior by overriding the image’s ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
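For comparison, a lifecycle hook is declared under the deployment strategy rather than the container spec. The following is a minimal sketch of a Recreate pre hook; the helloworld container name and the /bin/true command are placeholders:

```yaml
strategy:
  type: Recreate
  recreateParams:
    pre:
      failurePolicy: Abort              # abort the deployment if the hook fails
      execNewPod:
        containerName: helloworld       # placeholder: container whose image runs the hook
        command: ["/bin/true"]          # placeholder hook command
```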
Add the command parameters to the spec field of the deployment configuration. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).
...
spec:
  containers:
    - name: <container_name>
      image: 'image'
      command:
        - '<command>'
      args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
...
For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:
...
spec:
  containers:
    - name: example-spring-boot
      image: 'image'
      command:
        - java
      args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
...
To stream the logs of the latest revision for a given deployment configuration:
$ oc logs -f dc/<name>
If the latest revision is running or failed, oc logs will return the logs of the process that is responsible for deploying your pods. If it is successful, oc logs will return the logs from a pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
For more options on retrieving logs see:
$ oc logs --help
A deployment configuration can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.
If no triggers are defined on a deployment configuration, a ConfigChange trigger is added by default.
The ConfigChange trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration.
If a ConfigChange trigger is defined on a deployment configuration, the first replication controller is automatically created soon after the deployment configuration itself is created, and it is not paused.
triggers:
- type: "ConfigChange"
The ImageChange trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).
triggers:
- type: "ImageChange"
imageChangeParams:
automatic: true (1)
from:
kind: "ImageStreamTag"
name: "origin-ruby-sample:latest"
namespace: "myproject"
containerNames:
- "helloworld"
(1) If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the deployment configuration’s helloworld container, a new replication controller is created using the new image for the helloworld container.
If an ImageChange trigger is defined on the deployment configuration and the image stream tag it points to does not exist yet, the initial deployment process automatically starts as soon as an image is imported or pushed by a build to that tag.
The oc set triggers command can be used to set a deployment trigger for a deployment configuration. For the example above, you can set the ImageChange trigger by using the following command:
$ oc set triggers dc/frontend --from-image=myproject/origin-ruby-sample:latest -c helloworld
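Assuming the command succeeds, the deployment configuration should end up with a trigger equivalent to the YAML shown earlier; a sketch of the expected result:

```yaml
triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true
      containerNames:
        - "helloworld"
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
        namespace: "myproject"
```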
For more information, see:
$ oc set triggers --help
A deployment is completed by a deployment pod. By default, a deployment pod consumes unbounded node resources on the compute node where it is scheduled. In most cases, the unbound resource consumption does not cause a problem, because deployment pods consume low resources and run for a short period of time. If a project specifies default container limits, the resources used by a deployment pod, along with any other pods, count against those limits.
You can limit the resources used by a deployment pod through the deployment strategy in the deployment configuration. Resource limits for a deployment pod can be used with the Recreate, Rolling, or Custom deployment strategy.
The ability to limit ephemeral storage is available only if an administrator enables the ephemeral storage technology preview. This feature is disabled by default.
In the following example, each of resources, cpu, memory, and ephemeral-storage is optional:
type: "Recreate"
resources:
limits:
cpu: "100m" (1)
memory: "256Mi" (2)
ephemeral-storage: "1Gi" (3)
(1) cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
(2) memory is in bytes: 256Mi represents 268435456 bytes (256 * 2^20).
(3) ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2^30). The ephemeral-storage parameter is available only if an administrator enables the ephemeral storage technology preview.
However, if a quota has been defined for your project, one of the following two items is required:
A resources section set with an explicit requests:
type: "Recreate"
resources:
requests: (1)
cpu: "100m"
memory: "256Mi"
ephemeral-storage: "1Gi"
(1) The requests object contains the list of resources that correspond to the list of resources in the quota.
See Quotas and Limit Ranges to learn more about compute resources and the differences between requests and limits.
A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process.
Otherwise, deployment pod creation will fail, citing a failure to satisfy quota.
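For the second option, the following is a minimal LimitRange sketch; the object name and default values are illustrative only. Its defaultRequest values are applied to pods, including deployment pods, that do not specify their own requests:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits              # illustrative name
spec:
  limits:
    - type: Container
      defaultRequest:       # applied when a container omits explicit requests
        cpu: "100m"
        memory: "256Mi"
```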
In addition to rollbacks, you can exercise fine-grained control over the number of replicas from the web console, or by using the oc scale command. For example, the following command sets the replicas in the deployment configuration frontend to 3.
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the deployment configuration frontend.
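The same setting can also be made declaratively; a sketch of the relevant portion of the frontend deployment configuration:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 3    # equivalent to: oc scale dc frontend --replicas=3
```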
Pods can also be autoscaled using the oc autoscale command.
You can run a pod with a service account other than the default:
Edit the deployment configuration:
$ oc edit dc/<deployment_config>
Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:
spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
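Note that these parameters belong to the pod template spec, not the top-level deployment configuration spec; a sketch of where the fields nest:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
spec:
  template:
    spec:                                    # pod template spec
      securityContext: {}
      serviceAccount: <service_account>
      serviceAccountName: <service_account>
```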