Learn how to create your own container images, based on pre-built images that are ready to help you. The process includes learning best practices for writing images, defining metadata for images, testing images, and using a custom builder workflow to create images to use with OpenShift Container Platform. After you create an image, you can push it to the internal registry.
When creating container images to run on OpenShift Container Platform there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift Container Platform.
The following guidelines apply when creating a container image in general, and are independent of whether the images are used on OpenShift Container Platform.
Wherever possible, we recommend that you base your image on an appropriate upstream image using the FROM statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly.

In addition, use tags in the FROM instruction (for example, rhel:rhel7) to make it clear to users exactly which version of an image your image is based on. Using a tag other than latest ensures your image is not subjected to breaking changes that might go into the latest version of an upstream image.
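For example, the first instruction of a Dockerfile that follows this guidance could pin the tag mentioned above instead of relying on latest:

FROM rhel:rhel7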
When tagging your own images, we recommend that you try to maintain backwards compatibility within a tag. For example, if you provide an image named foo and it currently includes version 1.0, you might provide a tag of foo:v1. When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image foo:v1, and downstream consumers of this tag will be able to get updates without being broken.
If you later release an incompatible update, then you should switch to a new tag, for example foo:v2. This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using foo:latest takes on the risk of any incompatible changes being introduced.
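For example, after building a compatible update of the hypothetical foo image, the existing tag can be moved to the new build with a command such as:

$ podman tag foo:1.0.1 foo:v1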
We recommend that you do not start multiple services, such as a database and SSHD, inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift Container Platform allows you to easily colocate and co-manage related images by grouping them into a single pod.
This colocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not have to manage routing signals to spawned processes.
Use exec in wrapper scripts

Many images use wrapper scripts to do some setup before starting a process for the software being run. If your image uses such a script, that script should use exec so that the script’s process is replaced by your software. If you do not use exec, then signals sent by your container runtime will go to your wrapper script instead of your software’s process. This is not what you want, as illustrated here:
Say you have a wrapper script that starts a process for some server. You start your container (for example, using podman run -i), which runs the wrapper script, which in turn starts your process. Now say that you want to kill your container with CTRL+C. If your wrapper script used exec to start the server process, podman will send SIGINT to the server process, and everything will work as you expect. If you did not use exec in your wrapper script, podman will send SIGINT to the process for the wrapper script and your process will keep running like nothing happened.
Also note that your process runs as PID 1 when running in a container. This means that if your main process terminates, the entire container is stopped, killing any child processes you may have launched from your PID 1 process.
See the "Docker and the PID 1 zombie reaping problem" blog article for additional implications. Also see the "Demystifying the init system (PID 1)" blog article for a deep dive on PID 1 and init systems.
All temporary files you create during the build process should be removed. This also includes any files added with the ADD command. For example, we strongly recommend that you run the yum clean command after performing yum install operations.

You can prevent the yum cache from ending up in an image layer by creating your RUN statement as follows:
RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y
Note that if you instead write:
RUN yum -y install mypackage
RUN yum -y install myotherpackage && yum clean all -y
Then the first yum invocation leaves extra files in that layer, and these files cannot be removed when the yum clean operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers.

The current container build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an rm command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the yum clean example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer.
In addition, performing multiple commands in a single RUN statement reduces the number of layers in your image, which improves download and extraction time.
The container builder reads the Dockerfile and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. It is very important to place instructions that will rarely change at the top of your Dockerfile. Doing so ensures the next builds of the same image are very fast because the cache is not invalidated by upper layer changes.

For example, if you are working on a Dockerfile that contains an ADD command to install a file you are iterating on, and a RUN command to yum install a package, it is best to put the ADD command last:
FROM foo
RUN yum -y install mypackage && yum clean all -y
ADD myfile /test/myfile
This way each time you edit myfile and rerun podman build or docker build, the system reuses the cached layer for the yum command and only generates the new layer for the ADD operation.

If instead you wrote the Dockerfile as:
FROM foo
ADD myfile /test/myfile
RUN yum -y install mypackage && yum clean all -y
Then each time you changed myfile and reran podman build or docker build, the ADD operation would invalidate the RUN layer cache, so the yum operation must be rerun as well.
The EXPOSE instruction makes a port in the container available to the host system and other containers. While it is possible to specify that a port should be exposed with a podman run invocation, using the EXPOSE instruction in a Dockerfile makes it easier for both humans and software to use your image by explicitly declaring the ports your software needs to run:

Exposed ports will show up under podman ps associated with containers created from your image.
Exposed ports will also be present in the metadata for your image returned by podman inspect.
Exposed ports will be linked when you link one container to another.
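For example, a hypothetical web application listening on port 8080 would declare that port in its Dockerfile as follows:

EXPOSE 8080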
It is good practice to set environment variables with the ENV instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the Dockerfile. Another example is advertising a path on the system that could be used by another process, such as JAVA_HOME.
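For example, the following instructions record a project version and advertise the Java installation path; the values shown are illustrative assumptions, not requirements:

ENV MYAPP_VERSION=1.2.3
ENV JAVA_HOME=/usr/lib/jvm/jre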
It is best to avoid setting default passwords. Many people will extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords should be configurable using an environment variable instead.
If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set.
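As a sketch, a startup script could print such a warning when a hypothetical ADMIN_PASSWORD variable is still set to its default value:

# Warn if the hypothetical ADMIN_PASSWORD variable was left at its default.
if [ "${ADMIN_PASSWORD}" = "changeme" ]; then
  echo "WARNING: the default admin password 'changeme' is in use." >&2
  echo "Set the ADMIN_PASSWORD environment variable to change it." >&2
fi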
It is best to avoid running sshd in your image. You can use the podman exec or docker exec command to access containers that are running on the local host. Alternatively, you can use the oc exec command or the oc rsh command to access containers that are running on the OpenShift Container Platform cluster.

Installing and running sshd in your image opens up additional vectors for attack and requirements for security patching.
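For example, a shell in a running container can be opened with commands such as the following, where the container ID and pod name are placeholders:

$ podman exec -it <container_id> /bin/bash
$ oc rsh <pod_name>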
Images should use a volume for persistent data. This way OpenShift Container Platform mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved. If your image writes data to arbitrary locations within the container, that content might not be preserved.
All data that needs to be preserved even after the container is destroyed must be written to a volume. Container engines support a readonly flag for containers, which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now will make it easier to take advantage of it later.
Furthermore, explicitly defining volumes in your Dockerfile makes it easy for consumers of the image to understand what volumes they must define when running your image.
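For example, a hypothetical image that keeps its persistent data under /var/lib/myapp could declare that location as follows:

VOLUME ["/var/lib/myapp"]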
See the Kubernetes documentation for more information on how volumes are used in OpenShift Container Platform.
Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster.
Docker documentation - Best practices for writing Dockerfiles
Project Atomic documentation - Guidance for Container Image Authors
The following are guidelines that apply when creating container images specifically for use on OpenShift Container Platform.
For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the Source-to-Image (s2i) build tool. s2i is a framework which makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.
For example, this Python image defines s2i scripts for building various versions of Python applications.
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write these files.
Care must be taken when altering the directories and file permissions of sensitive areas of a container (no different than on a normal system). If applied to sensitive areas, such as
In addition, the processes running in the container must not listen on privileged ports (ports below 1024), since they are not running as a privileged user.
If your s2i image does not include a USER declaration with a numeric user, your builds will fail by default. In order to allow images that use either named users or the root (0) user to build in OpenShift Container Platform, you can add the project’s builder service account (system:serviceaccount:<your-project>:builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.
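For example, a Dockerfile that follows both of these recommendations might end with instructions such as the following; the port and numeric user ID shown are illustrative:

EXPOSE 8080
USER 1001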
For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image should consume an OpenShift Container Platform service. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests.
For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met.
Users of your image should be able to configure it without having to create a downstream image based on your image. This means that the runtime configuration should be handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file.
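As a minimal sketch of that startup processing, assuming a hypothetical template file at /opt/app/config.template and the envsubst utility from gettext being available in the image:

#!/bin/bash
# Substitute environment variable values into the template, then hand off to the server process.
envsubst < /opt/app/config.template > /opt/app/config.conf
exec /opt/app/server --config /opt/app/config.conf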
It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry.
Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image.
For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present.
This topic is related to the Using Services for Inter-image Communication topic in that configuration like datasources should be defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift Container Platform environment without modifying the application image.
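For example, for a service named database, OpenShift Container Platform injects DATABASE_SERVICE_HOST and DATABASE_SERVICE_PORT environment variables into the pod, and a startup script could assemble a hypothetical datasource URL from them:

DATASOURCE_URL="jdbc:postgresql://${DATABASE_SERVICE_HOST}:${DATABASE_SERVICE_PORT}/mydb"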
In addition, tuning should be done by inspecting the cgroups settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. For example, Java-based images should tune their heap based on the cgroup maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error.
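As a sketch of this kind of self-tuning, assuming a node using cgroup v1, a Java image's startup script could derive its heap size from the container memory limit; the one-half ratio is an illustrative policy, not a recommendation:

#!/bin/bash
# Read the container memory limit in bytes (cgroup v1 path; cgroup v2 exposes /sys/fs/cgroup/memory.max instead).
LIMIT_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
# Give the JVM roughly half of the limit and leave the rest for non-heap memory.
exec java -Xmx$((LIMIT_BYTES / 2 / 1024 / 1024))m -jar /opt/app/app.jar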
See the following references for more on how to manage cgroup quotas in containers:
Blog article - Resource management in Docker
Docker documentation - Runtime Metrics
Blog article - Memory inside Linux containers
Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed.
You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks must share information in order to perform leader election or to fail over state, such as in session replication.
Consider how your instances accomplish this communication when running in OpenShift Container Platform. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic.
It is best to send all logging to standard out. OpenShift Container Platform collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you must separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages.
If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file.
Document example liveness and readiness probes that can be used with your image. These probes will allow users to deploy your image with confidence that traffic will not be routed to the container until it is prepared to handle it, and that the container will be restarted if the process gets into an unhealthy state.
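For example, probes can be added to an existing deployment with commands such as the following; the resource name, port, and endpoint path are placeholders:

$ oc set probe deployment/<name> --readiness --get-url=http://:8080/healthz
$ oc set probe deployment/<name> --liveness --get-url=http://:8080/healthz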
Consider providing an example template with your image. A template will give users an easy way to quickly get your image deployed with a working configuration. Your template should include the liveness and readiness probes you documented with the image, for completeness.
Defining image metadata helps OpenShift Container Platform better consume your container images, allowing OpenShift Container Platform to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed.
This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future.
You can use the LABEL instruction in a Dockerfile to define image metadata. Labels are similar to environment variables in that they are key-value pairs attached to an image or a container. Labels are different from environment variables in that they are not visible to the running application and they can also be used for fast look-up of images and containers.
See the Docker documentation for more information on the LABEL instruction.
The label names should typically be namespaced. The namespace should be set accordingly to reflect the project that is going to pick up the labels and use them. For OpenShift Container Platform the namespace should be set to io.openshift and for Kubernetes the namespace is io.k8s.
See the Docker custom metadata documentation for details about the format.
Variable | Description
---|---
io.openshift.tags | This label contains a list of tags represented as comma-separated string values. The tags are the way to categorize the container images into broad areas of functionality. Tags help UI and generation tools to suggest relevant container images during the application creation process. For example: LABEL io.openshift.tags mongodb,mongodb24,nosql
io.openshift.wants | Specifies a list of tags that the generation tools and the UI might use to provide relevant suggestions if you do not have the container images with specified tags already. For example: LABEL io.openshift.wants mongodb,redis
io.k8s.description | This label can be used to give the container image consumers more detailed information about the service or functionality this image provides. The UI can then use this description together with the container image name to provide more human friendly information to end users. For example: LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support
io.openshift.non-scalable | An image might use this variable to suggest that it does not support scaling. The UI will then communicate this to consumers of that image. For example: LABEL io.openshift.non-scalable true
io.openshift.min-memory and io.openshift.min-cpu | These labels suggest how many resources the container image might need in order to work properly. The UI might warn the user that deploying this container image may exceed their user quota. The values must be compatible with the Kubernetes quantity format. For example: LABEL io.openshift.min-memory 8Gi and LABEL io.openshift.min-cpu 4
As a Source-to-Image (s2i) builder image author, you can test your s2i image locally and use the OpenShift Container Platform build system for automated testing and continuous integration.

s2i requires the assemble and run scripts to be present in order to successfully run the s2i build. Providing the save-artifacts script reuses the build artifacts, and providing the usage script ensures that usage information is printed to the console when someone runs the container image outside of s2i.
The goal of testing an s2i image is to make sure that all of these described commands work properly, even if the base container image has changed or the tooling used by the commands was updated.
The standard location for the test script is test/run. This script is invoked by the OpenShift Container Platform s2i image builder and it could be a simple Bash script or a static Go binary.
The test/run script performs the s2i build, so you must have the s2i binary available in your $PATH. If required, follow the installation instructions in the s2i README.
s2i combines the application source code and builder image, so in order to test it you need a sample application source to verify that the source successfully transforms into a runnable container image. The sample application should be simple, but it should exercise the crucial steps of the assemble and run scripts.
The s2i tooling comes with powerful generation tools to speed up the process of creating a new s2i image. The s2i create command produces all the necessary s2i scripts and testing tools along with the Makefile:
$ s2i create <image name> <destination directory>
The generated test/run script must be adjusted to be useful, but it provides a good starting point to begin developing.
The test/run script produced by the
The easiest way to run the s2i image tests locally is to use the generated Makefile.
If you did not use the s2i create command, you can copy the following Makefile template and replace the IMAGE_NAME parameter with your image name.
IMAGE_NAME = openshift/ruby-20-centos7

# Prefer podman if it is installed, otherwise fall back to docker.
CONTAINER_ENGINE := $(shell command -v podman 2> /dev/null || echo docker)

build:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME) .

.PHONY: test
test:
	${CONTAINER_ENGINE} build -t $(IMAGE_NAME)-candidate .
	IMAGE_NAME=$(IMAGE_NAME)-candidate test/run
The test script assumes you have already built the image you want to test. If required, first build the s2i image. Run one of the following commands:
If you use Podman, run the following command:
$ podman build -t <BUILDER_IMAGE_NAME> .
If you use Docker, run the following command:
$ docker build -t <BUILDER_IMAGE_NAME> .
The following steps describe the default workflow to test s2i image builders:
Verify the usage script is working:
If you use Podman, run the following command:
$ podman run <BUILDER_IMAGE_NAME>
If you use Docker, run the following command:
$ docker run <BUILDER_IMAGE_NAME>
Build the image:
$ s2i build file:///path-to-sample-app <BUILDER_IMAGE_NAME> <OUTPUT_APPLICATION_IMAGE_NAME>
Optional: if you support save-artifacts, run step 2 once again to verify that saving and restoring artifacts works properly.
Run the container:
If you use Podman, run the following command:
$ podman run <OUTPUT_APPLICATION_IMAGE_NAME>
If you use Docker, run the following command:
$ docker run <OUTPUT_APPLICATION_IMAGE_NAME>
Verify the container is running and the application is responding.
Running these steps is generally enough to tell if the builder image is working as expected.
Once you have a Dockerfile and the other artifacts that make up your new s2i builder image, you can put them in a git repository and use OpenShift Container Platform to build and push the image. Simply define a Docker build that points to your repository.
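For example, assuming the builder image's Dockerfile is at the root of a Git repository, such a build could be created with a command like the following; the repository URL and build name are placeholders:

$ oc new-build --strategy=docker --name=my-s2i-builder <git_repository_url>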
If your OpenShift Container Platform instance is hosted on a public IP address, the build can be triggered each time you push into your s2i builder image GitHub repository.
You can also use the ImageChangeTrigger to trigger a rebuild of your applications that are based on the s2i builder image you updated.