Builds

Overview

A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform source code into a runnable image.

Build configurations are characterized by a strategy and one or more sources. The strategy determines the aforementioned process, while the sources provide its input.

There are three build strategies:

  • Docker

  • Source-to-Image (S2I)

  • Custom

And there are four types of build source:

  • Git

  • Dockerfile

  • Image

  • Binary

It is up to each build strategy to consider or ignore a certain type of source, as well as to determine how it is to be used.

Binary and Git are mutually exclusive source types. Dockerfile and Image can be used by themselves, with each other, or together with either Git or Binary. Also, the Binary build source type is unique from the other options in how it is specified to the system.

Defining a BuildConfig

A build configuration describes a single build definition and a set of triggers for when a new build should be created.

A build configuration is defined by a BuildConfig, which is a REST object that can be used in a POST to the API server to create a new instance. The following example BuildConfig results in a new build every time a container image tag or the source code changes:

Example 1. BuildConfig Object Definition
kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "ruby-sample-build" (1)
spec:
  triggers: (2)
    - type: "GitHub"
      github:
        secret: "secret101"
    - type: "Generic"
      generic:
        secret: "secret101"
    - type: "ImageChange"
  source: (3)
    type: "Git"
    git:
      uri: "https://github.com/openshift/ruby-hello-world"
    dockerfile: "FROM openshift/ruby-22-centos7\nUSER example"
  strategy: (4)
    type: "Source"
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "ruby-20-centos7:latest"
  output: (5)
    to:
      kind: "ImageStreamTag"
      name: "origin-ruby-sample:latest"
  postCommit: (6)
      script: "bundle exec rake test"
1 This specification will create a new BuildConfig named ruby-sample-build.
2 You can specify a list of triggers, which cause a new build to be created.
3 The source section defines the source of the build. The type determines the primary source of input, and can be either Git, to point to a code repository location; Dockerfile, to build from an inline Dockerfile; or Binary, to accept binary payloads. Using multiple sources at once is possible. Refer to the documentation for each source type for details.
4 The strategy section describes the build strategy used to execute the build. You can specify the Source, Docker, or Custom strategy here. The above example uses the ruby-20-centos7 container image that Source-to-Image (S2I) will use for the application build.
5 After the container image is successfully built, it will be pushed into the repository described in the output section.
6 The postCommit section defines an optional build hook.

Source-to-Image Strategy Options

The following options are specific to the S2I build strategy.

Force Pull

By default, if the builder image specified in the build configuration is available locally on the node, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:

strategy:
  type: "Source"
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest" (1)
    forcePull: true (2)
1 The builder image being used, where the local version on the node may not be up to date with the version in the registry to which the image stream points.
2 This flag causes the local builder image to be ignored and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.

Incremental Builds

S2I can perform incremental builds, which means it reuses artifacts from previously-built images. To create an incremental build, create a BuildConfig with the following modification to the strategy definition:

strategy:
  type: "Source"
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "incremental-image:latest" (1)
    incremental: true (2)
1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior.
2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script.

See the S2I Requirements topic for information on how to create a builder image supporting incremental builds.

Overriding Builder Image Scripts

You can override the assemble, run, and save-artifacts S2I scripts provided by the builder image in one of two ways. Either:

  1. Provide an assemble, run, and/or save-artifacts script in the .s2i/bin directory of your application source repository, or

  2. Provide a URL of a directory containing the scripts as part of the strategy definition. For example:

strategy:
  type: "Source"
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"
    scripts: "http://somehost.com/scripts_directory" (1)
1 This path will have run, assemble, and save-artifacts appended to it. If any or all scripts are found they will be used in place of the same named script(s) provided in the image.

Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. See the S2I Requirements topic and the S2I documentation for information on how S2I scripts are used.

Environment Variables

There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values.

Environment Files

Source build enables you to set environment values (one per line) inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the final container image. The complete list of supported environment variables is available in the documentation for each image.

If you provide a .s2i/environment file in your source repository, S2I reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables.

For example, if you want to disable assets compilation for your Rails application, you can add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file to cause assets compilation to be skipped during the build.

In addition to builds, the specified environment variables are also available in the running application itself. For example, you can add RAILS_ENV=development to the .s2i/environment file to cause the Rails application to start in development mode instead of production.
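
For example, a .s2i/environment file that combines both of the settings described above would contain:

DISABLE_ASSET_COMPILATION=true
RAILS_ENV=development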

BuildConfig Environment

You can add environment variables to the sourceStrategy definition of the BuildConfig. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code.

For example, to disable assets compilation for your Rails application:

sourceStrategy:
...
  env:
    - name: "DISABLE_ASSET_COMPILATION"
      value: "true"

Docker Strategy Options

The following options are specific to the Docker build strategy.

FROM Image

The FROM instruction of the Dockerfile will be replaced by the from specified in the BuildConfig:

strategy:
  type: Docker
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"

Dockerfile Path

By default, Docker builds use a Dockerfile (named Dockerfile) located at the root of the context specified in the BuildConfig.spec.source.contextDir field.

The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can simply be a file name other than the default Dockerfile (for example, MyDockerfile), or a path to a Dockerfile in a subdirectory (for example, dockerfiles/app1/Dockerfile):

strategy:
  type: Docker
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile

No Cache

Docker builds normally reuse cached layers found on the host performing the build. Setting the noCache option to true forces the build to ignore cached layers and rerun all steps of the Dockerfile:

strategy:
  type: "Docker"
  dockerStrategy:
    noCache: true

Force Pull

By default, if the builder image specified in the build configuration is available locally on the node, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:

strategy:
  type: "Docker"
  dockerStrategy:
    forcePull: true (1)
1 This flag causes the local builder image to be ignored, and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.

Environment Variables

To make environment variables available to the Docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the BuildConfig.

The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that they can be referenced later within the Dockerfile.

The variables are defined during the build and persist in the output image, so they will also be present in any container that runs that image.

For example, defining a custom HTTP proxy to be used during build and runtime:

dockerStrategy:
...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"

Cluster administrators can also configure global build settings using Ansible.

Custom Strategy Options

The following options are specific to the Custom build strategy.

FROM Image

Use the customStrategy.from section to indicate the image to use for the custom build:

strategy:
  type: "Custom"
  customStrategy:
    from:
      kind: "DockerImage"
      name: "openshift/sti-image-builder"

Exposing the Docker Socket

To allow Docker commands to run and container images to be built from inside the container, the build container must be bound to an accessible socket. To do so, set the exposeDockerSocket option to true:

strategy:
  type: "Custom"
  customStrategy:
    exposeDockerSocket: true

secrets

In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod.

Each secret can be mounted at a specific location:

strategy:
  type: "Custom"
  customStrategy:
    secrets:
      - secretSource: (1)
          name: "secret1"
        mountPath: "/tmp/secret1" (2)
      - secretSource:
          name: "secret2"
        mountPath: "/tmp/secret2"
1 secretSource is a reference to a secret in the same namespace as the build.
2 mountPath is the path inside the custom builder where the secret should be mounted.

Force Pull

By default, when setting up the build pod, the build controller checks if the image specified in the build configuration is available locally on the node. If so, that image will be used. However, to override the local image and refresh it from the registry to which the image stream points, create a BuildConfig with the forcePull flag set to true:

strategy:
  type: "Custom"
  customStrategy:
    forcePull: true (1)
1 This flag causes the local builder image to be ignored, and a fresh version to be pulled from the registry to which the image stream points. Setting forcePull to false results in the default behavior of honoring the image stored locally.

Environment Variables

To make environment variables available to the Custom build process, you can add environment variables to the customStrategy definition of the BuildConfig.

The environment variables defined there are passed to the pod that runs the custom build.

For example, defining a custom HTTP proxy to be used during build:

customStrategy:
...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"

Cluster administrators can also configure global build settings using Ansible.

Build Inputs

There are several ways to provide content for builds to operate on. In order of precedence:

  • Inline Dockerfile definitions

  • Content extracted from existing images

  • Git repositories

  • Binary inputs

These can be combined into a single build. As the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary input and Git repository are mutually exclusive inputs.

When the build is run, a working directory is constructed and all input content is placed in that directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. Next, the build process changes into the contextDir, if one is defined. Then the inline Dockerfile, if any, is written to the current directory. Finally, the content of the current directory is provided to the build process for reference by the Dockerfile, assemble script, or custom builder logic. This means any input content that resides outside the contextDir is ignored by the build.

Here is an example of a source definition that includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type.

source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git (1)
  images:
  - from:
      kind: ImageStreamTag
      name: myinputimage:latest
      namespace: mynamespace
    paths:
    - destinationDir: app/dir/injected/dir (2)
      sourcePath: /usr/lib/somefile.jar
  contextDir: "app/dir" (3)
  dockerfile: "FROM centos:7\nRUN yum install -y httpd" (4)
1 The repository to be cloned into the working directory for the build
2 /usr/lib/somefile.jar from myinputimage will be stored in <workingdir>/app/dir/injected/dir
3 The working dir for the build will become <original_workingdir>/app/dir
4 A Dockerfile with this content will be created in <original_workingdir>/app/dir, overwriting any existing file with that name

Git Repository Source Options

When the BuildConfig.spec.source.type is Git, a Git repository is required, and an inline Dockerfile is optional.

The source code is fetched from the location specified and, if the BuildConfig.spec.source.dockerfile field is specified, the inline Dockerfile replaces the one in the contextDir of the Git repository.

The source definition is part of the spec section in the BuildConfig:

source:
  type: "Git"
  git: (1)
    uri: "https://github.com/openshift/ruby-hello-world"
    ref: "master"
  contextDir: "app/dir" (2)
  dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" (3)
1 The git field contains the URI to the remote Git repository of the source code. Optionally, specify the ref field to check out a specific Git reference. A valid ref can be a SHA-1 commit, a tag, or a branch name.
2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field.
3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository.

When using the Git repository as a source without specifying the ref field, OpenShift Enterprise performs a shallow clone (--depth=1 clone). That means only the HEAD (usually the master branch) is downloaded. This results in repositories downloading faster, but without the full commit history.

A shallow clone is also used when the ref field is specified and set to an existing remote branch name. However, if you set the ref field to a specific commit, the system falls back to a regular Git clone operation and checks out the commit, because the --depth=1 option only works with named branch refs.

To perform a full Git clone of the master for the specified repository, set the ref to master.
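
For example, a sketch of a source definition that pins the build to a specific commit (replace <commit_sha> with a full commit hash), which results in a regular Git clone rather than a shallow clone:

source:
  type: "Git"
  git:
    uri: "https://github.com/openshift/ruby-hello-world"
    ref: "<commit_sha>"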

Using a Proxy for Git Cloning

If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the BuildConfig. You can configure both an HTTP and an HTTPS proxy. Both fields are optional.

Your source URI must use the HTTP or HTTPS protocol for this to work.

source:
  type: Git
  git:
    uri: "https://github.com/openshift/ruby-hello-world"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com

Using Private Repositories for Builds

Supply valid credentials to build an application from a private repository.

Currently two types of authentication are supported: basic username-password and SSH key based authentication.

Basic Authentication

Basic authentication requires either a combination of username and password, or a token to authenticate against the SCM server. A CA certificate file, or a .gitconfig file can be attached.

A secret is used to store your keys.

  1. Create the secret first before using the username and password to access the private repository:

    $ oc secrets new-basicauth basicsecret --username=USERNAME --password=PASSWORD
    1. To create a Basic Authentication secret with a token:

      $ oc secrets new-basicauth basicsecret --password=TOKEN
    2. To create a Basic Authentication secret with a CA certificate file:

      $ oc secrets new-basicauth basicsecret --username=USERNAME --password=PASSWORD --ca-cert=FILENAME
    3. To create a Basic Authentication secret with a .gitconfig file:

      $ oc secrets new-basicauth basicsecret --username=USERNAME --password=PASSWORD --gitconfig=FILENAME
  2. Add the secret to the builder service account. Each build is run with the serviceaccount/builder role, so you need to give it access to your secret with the following command:

    $ oc secrets add serviceaccount/builder secrets/basicsecret
  3. Add a sourceSecret field to the source section inside the BuildConfig and set it to the name of the secret that you created, in this case basicsecret:

    apiVersion: "v1"
    kind: "BuildConfig"
    metadata:
      name: "sample-build"
    spec:
      output:
        to:
          kind: "ImageStreamTag"
          name: "sample-image:latest"
      source:
        git:
          uri: "https://github.com/user/app.git" (1)
        sourceSecret:
          name: "basicsecret"
        type: "Git"
      strategy:
        sourceStrategy:
          from:
            kind: "ImageStreamTag"
            name: "python-33-centos7:latest"
        type: "Source"
    1 The URL of the private repository, accessed by basic authentication, is usually in the http or https form.

SSH Key Based Authentication

SSH Key Based Authentication requires a private SSH key. A .gitconfig file can also be attached.

The repository keys are usually located in the $HOME/.ssh/ directory, and are named id_dsa.pub, id_ecdsa.pub, id_ed25519.pub, or id_rsa.pub by default. Generate SSH key credentials with the following command:

$ ssh-keygen -t rsa -C "your_email@example.com"

Creating a passphrase for the SSH key prevents OpenShift Enterprise from building. When prompted for a passphrase, leave it blank.

Two files are created: the public key and a corresponding private key (one of id_dsa, id_ecdsa, id_ed25519, or id_rsa). With both of these in place, consult your source control management (SCM) system’s manual on how to upload the public key. The private key will be used to access your private repository.

A secret is used to store your keys.

  1. Create the secret first before using the SSH key to access the private repository:

    $ oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa
    1. To create an SSH Based Authentication secret with a .gitconfig file:

      $ oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa --gitconfig=FILENAME
  2. Add the secret to the builder service account. Each build is run with the serviceaccount/builder role, so you need to give it access to your secret with the following command:

    $ oc secrets add serviceaccount/builder secrets/sshsecret
  3. Add a sourceSecret field into the source section inside the BuildConfig and set it to the name of the secret that you created, in this case sshsecret:

    apiVersion: "v1"
    kind: "BuildConfig"
    metadata:
      name: "sample-build"
    spec:
      output:
        to:
          kind: "ImageStreamTag"
          name: "sample-image:latest"
      source:
        git:
          uri: "git@repository.com:user/app.git" (1)
        sourceSecret:
          name: "sshsecret"
        type: "Git"
      strategy:
        sourceStrategy:
          from:
            kind: "ImageStreamTag"
            name: "python-33-centos7:latest"
        type: "Source"
    1 The URL of the private repository, accessed by a private SSH key, is usually in the form git@example.com:<username>/<repository>.git.

Other

If cloning your application depends on a CA certificate, a .gitconfig file, or both, you can create a secret that contains them, add it to the builder service account, and then add it to your BuildConfig.

  1. Create desired type of secret:

    1. To create a secret from a .gitconfig:

      $ oc secrets new mysecret .gitconfig=path/to/.gitconfig
    2. To create a secret from a CA certificate:

      $ oc secrets new mysecret ca.crt=path/to/certificate
    3. To create a secret from a CA certificate and .gitconfig:

      $ oc secrets new mysecret ca.crt=path/to/certificate .gitconfig=path/to/.gitconfig

      SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file:

      [http]
              sslVerify=false
  2. Add the secret to the builder service account. Each build is run with the serviceaccount/builder role, so you need to give it access to your secret with the following command:

    $ oc secrets add serviceaccount/builder secrets/mysecret
  3. Add the secret to the BuildConfig:

    source:
      git:
        uri: "https://github.com/sclorg/nodejs-ex.git"
      sourceSecret:
        name: "mysecret"

Defining secrets in the BuildConfig provides more information on this topic.

Dockerfile Source

When the BuildConfig.spec.source.type is Dockerfile, an inline Dockerfile is used as the build input, and no additional sources can be provided.

This source type is valid when the build strategy type is Docker or Custom.

The source definition is part of the spec section in the BuildConfig:

source:
  type: "Dockerfile"
  dockerfile: "FROM centos:7\nRUN yum install -y httpd" (1)
1 The dockerfile field contains an inline Dockerfile that will be built.

Binary Source

Streaming content in binary format from a local file system to the builder is called a binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for such builds.

This source type is unique in that it is used solely through your invocation of the oc start-build command.

Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build (e.g. via an image change trigger) is not possible, because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console.

To utilize binary builds, invoke oc start-build with one of these options:

  • --from-file: The contents of the file you specify are sent as a binary stream to the builder. The builder then stores the data in a file with the same name at the top of the build context.

  • --from-dir and --from-repo: The contents are archived and sent as a binary stream to the builder. The builder then extracts the contents of the archive within the build context directory.

In each of the above cases:

  • If your BuildConfig already has a Binary source type defined, it will effectively be ignored and replaced by what the client sends.

  • If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence.

When using oc new-build --binary=true, the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig will have a source type of Binary, meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data.

The dockerfile and contextDir source options have special meaning with binary builds.

dockerfile can be used with any binary build source. If dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If dockerfile is used with the --from-file argument, and the file argument is named dockerfile, the value from dockerfile replaces the value from the binary stream.

In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build.
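
For example, assuming a BuildConfig named myapp with a Binary source type (the name and local paths here are illustrative), a binary build can be started with either of the following:

$ oc start-build myapp --from-dir=./app
$ oc start-build myapp --from-file=./app.jar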

Image Source

Additional files can be provided to the build process via images. Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files/directories to copy out of the image and the destination to place them in the build context.

The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image will be loaded and the indicated files and directories will be copied into the context directory of the build process. This is the same directory into which the source repository content (if any) is cloned. If the source path ends in /. then the content of the directory will be copied, but the directory itself will not be created at the destination.

Image inputs are specified in the source definition of the BuildConfig:

source:
  git:
    uri: https://github.com/openshift/ruby-hello-world.git
  images: (1)
  - from: (2)
      kind: ImageStreamTag
      name: myinputimage:latest
      namespace: mynamespace
    paths: (3)
    - destinationDir: injected/dir (4)
      sourcePath: /usr/lib/somefile.jar (5)
  - from:
      kind: ImageStreamTag
      name: myotherinputimage:latest
      namespace: myothernamespace
    pullSecret: mysecret (6)
    paths:
    - destinationDir: injected/dir
      sourcePath: /usr/lib/somefile.jar
1 An array of one or more input images and files.
2 A reference to the image containing the files to be copied.
3 An array of source/destination paths.
4 The directory relative to the build root where the build process can access the file.
5 The location of the file to be copied out of the referenced image.
6 An optional secret provided if credentials are needed to access the input image.

This feature is not supported for builds using the Custom Strategy.

Using secrets During a Build

In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build.

For example, when building a NodeJS application, you can set up your private mirror for NodeJS modules. In order to download modules from that private mirror, you have to supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image.

This example describes NodeJS, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, etc.

Defining secrets in the BuildConfig

  1. Create the secret:

    $ oc secrets new secret-npmrc .npmrc=~/.npmrc

    This creates a new secret named secret-npmrc, which contains the base64 encoded content of the ~/.npmrc file.

  2. Add the secret to the source section in the existing build configuration:

    source:
      git:
        uri: https://github.com/sclorg/nodejs-ex.git
      secrets:
        - secret:
            name: secret-npmrc
      type: Git

    To include the secrets in a new build configuration, run the following command:

    $ oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc

    During the build, the .npmrc file is copied into the directory where the source code is located. In case of the OpenShift Enterprise S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile. If you want to specify another directory, add a destinationDir to the secret definition:

    source:
      git:
        uri: https://github.com/sclorg/nodejs-ex.git
      secrets:
        - secret:
            name: secret-npmrc
          destinationDir: /etc
      type: Git

    You can also specify the destination directory when creating a new build configuration:

    $ oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret "secret-npmrc:/etc"

    In both cases, the .npmrc file is added to the /etc directory of the build environment. Note that for a Docker strategy the destination directory must be a relative path.

Source-to-Image Strategy

When using a Source strategy, all defined source secrets are copied to their respective destinationDir. If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path; the secrets are placed in the paths that are relative to the image’s working directory. The destinationDir must exist or an error will occur. No directory paths are created during the copy process.

Currently, any files with these secrets are world-writable (have 0666 permissions) and will be truncated to size zero after executing the assemble script. This means that the secret files will exist in the resulting image, but they will be empty for security reasons.

Docker Strategy

When using a Docker strategy, you can add all defined source secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files will be copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir, then the secrets will be copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build.

Users should always remove their secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets will still exist in the image itself in the layer where they were added. This removal should be part of the Dockerfile itself.

Custom Strategy

When using a Custom strategy, all the defined source secrets are available inside the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image is responsible for using these secrets appropriately. The Custom strategy also allows secrets to be defined as described in secrets. There is no technical difference between existing strategy secrets and the source secrets. However, your builder image might distinguish between them and use them differently, based on your build use case. The source secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the $BUILD environment variable, which includes the full build object.

Starting a Build

Manually start a new build from an existing build configuration in your current project using the following command:

$ oc start-build <buildconfig_name>

Re-run a build using the --from-build flag:

$ oc start-build --from-build=<build_name>

Specify the --follow flag to stream the build’s logs to stdout:

$ oc start-build <buildconfig_name> --follow

Specify the --env flag to set any desired environment variable for the build:

$ oc start-build <buildconfig_name> --env=<key>=<value>

Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of prebuilt binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command:

Option Description

--from-dir=<directory>

Specifies a directory that will be archived and used as a binary input for the build.

--from-file=<file>

Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided.

--from-repo=<local_source_repo>

Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build.

When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings.

Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration.

For example, the following command sends the contents of a local Git repository as an archive from the tag v2 and starts a build:

$ oc start-build hello-world --from-repo=../hello-world --commit=v2

Canceling a Build

Manually cancel a build using the web console, or with the following CLI command:

$ oc cancel-build <build_name>

Deleting a BuildConfig

Delete a BuildConfig using the following command:

$ oc delete bc <BuildConfigName>

This will also delete all builds that were instantiated from this BuildConfig. Specify the --cascade=false flag if you do not want to delete the builds:

$ oc delete --cascade=false bc <BuildConfigName>

Viewing Build Details

You can view build details using the web console or the following CLI command:

$ oc describe build <build_name>

The output of the describe command includes details such as the build source, strategy, and output destination. If the build uses the Docker or Source strategy, it will also include information about the source revision used for the build: commit ID, author, committer, and message.

Accessing Build Logs

You can access build logs using the web console or the CLI.

To stream the logs using the build directly:

$ oc logs -f build/<build_name>

To stream the logs of the latest build for a build configuration:

$ oc logs -f bc/<buildconfig_name>

To return the logs of a given version build for a build configuration:

$ oc logs --version=<number> bc/<buildconfig_name>

Log Verbosity

To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig:

sourceStrategy:
...
  env:
    - name: "BUILD_LOGLEVEL"
      value: "2" (1)
1 Adjust this value to the desired log level.
A platform administrator can set verbosity for the entire OpenShift Enterprise instance by passing the --loglevel option to the openshift start command. If both --loglevel and BUILD_LOGLEVEL are specified, BUILD_LOGLEVEL takes precedence.

Available log levels for Source builds are as follows:

Level 0

Produces output from containers running the assemble script and all encountered errors. This is the default.

Level 1

Produces basic information about the executed process.

Level 2

Produces very detailed information about the executed process.

Level 3

Produces very detailed information about the executed process, and a listing of the archive contents.

Level 4

Currently produces the same information as level 3.

Level 5

Produces everything mentioned on previous levels and additionally provides docker push messages.

Setting Maximum Duration

When defining a BuildConfig, you can define its maximum duration by setting the completionDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced.

The maximum duration is counted from the time when a build pod gets scheduled in the system, and defines how long it can be active, including the time needed to pull the builder image. After reaching the specified timeout, the build is terminated by OpenShift Enterprise.

The following example shows the part of a BuildConfig specifying completionDeadlineSeconds field for 30 minutes:

spec:
  completionDeadlineSeconds: 1800

Build Triggers

When defining a BuildConfig, you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available:

  • Webhook (GitHub and Generic)

  • Image change

  • Configuration change

Webhook Triggers

Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Enterprise API endpoint. You can define these triggers using GitHub webhooks or Generic webhooks.

GitHub Webhooks

GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which will be part of the URL you supply to GitHub when configuring the webhook. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following example is a trigger definition YAML within the BuildConfig:

type: "GitHub"
github:
  secret: "secret101"

The secret field in the webhook trigger configuration is not the same as the secret field you encounter when configuring a webhook in the GitHub UI. The former makes the webhook URL unique and hard to predict; the latter is an optional string field used to create an HMAC hex digest of the body, which is sent as an X-Hub-Signature header.

The payload URL is returned as the GitHub Webhook URL by the describe command (see below), and is structured as follows:

http://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github

To configure a GitHub Webhook:

  1. Describe the build configuration to get the webhook URL:

    $ oc describe bc <name>
  2. Copy the webhook URL.

  3. Follow the GitHub setup instructions to paste the webhook URL into your GitHub repository settings.

Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it via your Gogs server also.

Generic Webhooks

Generic webhooks can be invoked from any system capable of making a web request. As with a GitHub webhook, you must specify a secret, which will be part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig:

type: "Generic"
generic:
  secret: "secret101"

To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build:

http://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic

The endpoint can accept an optional payload with the following format:

type: "git"
git:
  uri: "<url to git repository>"
  ref: "<optional git reference>"
  commit: "<commit hash identifying a specific git commit>"
  author:
    name: "<author name>"
    email: "<author e-mail>"
  committer:
    name: "<committer name>"
    email: "<committer e-mail>"
  message: "<commit message>"

Displaying a BuildConfig’s Webhook URLs

Use the following command to display the webhook URLs associated with a build configuration:

$ oc describe bc <name>

If the above command does not display any webhook URLs, then no webhook trigger is defined for that build configuration.

Image Change Triggers

Image change triggers allow your build to be automatically invoked when a new version of an upstream image is available. For example, if a build is based on top of a RHEL image, then you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image.

Configuring an image change trigger requires the following actions:

  1. Define an ImageStream that points to the upstream image you want to trigger on:

    kind: "ImageStream"
    apiVersion: "v1"
    metadata:
      name: "ruby-20-centos7"

    This defines the image stream that is tied to a container image repository located at <system-registry>/<namespace>/ruby-20-centos7. The <system-registry> is defined as a service with the name docker-registry running in OpenShift Enterprise.

  2. If an image stream is the base image for the build, set the from field in the build strategy to point to the image stream:

    strategy:
      type: "Source"
      sourceStrategy:
        from:
          kind: "ImageStreamTag"
          name: "ruby-20-centos7:latest"

    In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace.

  3. Define a build with one or more triggers that point to image streams:

    type: "imageChange" (1)
    imageChange: {}
    type: "imagechange" (2)
    imageChange:
      from:
        kind: "ImageStreamTag"
        name: "custom-image:latest"
    1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy’s from field. The imageChange object here must be empty.
    2 An image change trigger that monitors an arbitrary image stream. The imageChange part in this case must include a from field that references the ImageStreamTag to monitor.

When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable Docker tag that points to the latest image corresponding to that tag. This new image reference will be used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build will be started, but the build strategy will not be updated with a unique image reference.

In the example above that has an image change trigger for the strategy, the resulting build will be:

strategy:
  type: "Source"
  sourceStrategy:
    from:
      kind: "DockerImage"
      name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:immutableid"

This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs.

In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist then it is updated with the immutable image reference.

If a build is triggered due to a webhook trigger or manual request, the build that is created uses the immutableid resolved from the ImageStream referenced by the Strategy. This ensures that builds are performed using consistent image tags for ease of reproduction.

Image streams that point to container images in v1 Docker registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 Docker registries.

Configuration Change Triggers

A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created. The following is an example trigger definition YAML within the BuildConfig:

  type: "ConfigChange"

Configuration change triggers currently only work when creating a new BuildConfig. In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated.

Build Hooks

Build hooks allow behavior to be injected into the build process.

Use the postCommit field to execute commands inside a temporary container that is running the build output image. The hook is executed immediately after the last layer of the image has been committed and before the image is pushed to a registry.

The current working directory is set to the image’s WORKDIR, which is the default working directory of the container image. For most images, this is where the source code is located.

The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs.

Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0, the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log will contain the output of the test runner, which can be used to identify failed tests.

The postCommit hook is not limited to running tests; it can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that the hook execution cannot affect the final image. This behavior allows for, among other uses, the installation and use of test dependencies that are automatically discarded and will not be present in the final image.

There are different ways to configure the post build hook. All forms in the following examples are equivalent and execute bundle exec rake test --verbose:

  • Shell script:

    postCommit:
      script: "bundle exec rake test --verbose"

    The script value is a shell script to be run with /bin/sh -ic. Use this when a shell script is appropriate for executing the build hook, for example, to run unit tests as above. To control the image entry point, or if the image does not have /bin/sh, use command and/or args.

    The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release.

  • Command as the image entry point:

    postCommit:
      command: ["/bin/bash", "-c", "bundle exec rake test --verbose"]

    In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference. This is needed if the image does not have /bin/sh, or if you do not want to use a shell. In all other cases, using script might be more convenient.

  • Pass arguments to the default entry point:

    postCommit:
      args: ["bundle", "exec", "rake", "test", "--verbose"]

    In this form, args is a list of arguments that are provided to the default entry point of the image. The image entry point must be able to handle arguments.

  • Shell script with arguments:

    postCommit:
      script: "bundle exec rake test $1"
      args: ["--verbose"]

    Use this form if you need to pass arguments that would otherwise be hard to quote properly in the shell script. In the script, $0 will be "/bin/sh" and $1, $2, etc, are the positional arguments from args.

  • Command with arguments:

    postCommit:
      command: ["bundle", "exec", "rake", "test"]
      args: ["--verbose"]

    This form is equivalent to appending the arguments to command.

Providing both script and command simultaneously creates an invalid build hook.

Using the Command Line

The oc set build-hook command can be used to set the build hook for a build configuration.

To set a command as the post-commit build hook:

$ oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose

To set a script as the post-commit build hook:

$ oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose"

Using Docker Credentials for Pushing and Pulling Images

Supply the .dockercfg file with valid Docker Registry credentials in order to push the output image into a private Docker Registry or pull the builder image from the private Docker Registry that requires authentication. For the OpenShift Enterprise Docker Registry, you don’t have to do this because secrets are generated automatically for you by OpenShift Enterprise.

The .dockercfg JSON file is found in your home directory by default and has the following format:

auths:
  https://index.docker.io/v1/: (1)
    auth: "YWRfbGzhcGU6R2labnRib21ifTE=" (2)
    email: "user@example.com" (3)
1 URL of the registry.
2 Base64-encoded credentials (username and password).
3 Email address for the login.

You can define multiple Docker registry entries in this file. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides secret objects, which are used to store your configuration and passwords.

  1. Create the secret from your local .dockercfg file:

    $ oc secrets new dockerhub ~/.dockercfg

    This generates a JSON specification of the secret named dockerhub and creates the object.

  2. Once the secret is created, add it to the builder service account. Each build is run with the serviceaccount/builder role, so you need to give it access to your secret with the following command:

    $ oc secrets add serviceaccount/builder secrets/dockerhub
  3. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the above example is dockerhub:

    spec:
      output:
        to:
          kind: "DockerImage"
          name: "private.registry.com/org/private-image:latest"
        pushSecret:
          name: "dockerhub"
  4. Pull the builder container image from a private Docker registry by specifying the pullSecret field, which is part of the build strategy definition:

    strategy:
      sourceStrategy:
        from:
          kind: "DockerImage"
          name: "docker.io/user/private_repository"
        pullSecret:
          name: "dockerhub"
      type: "Source"

This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds.

Build Run Policy

The build run policy describes the order in which the builds created from the build configuration should run. You control this by changing the value of the runPolicy field in the spec section of the BuildConfig.

It is also possible to change the runPolicy value for existing build configurations.

  • Changing Parallel to Serial or SerialLatestOnly and triggering a new build from this configuration will cause the new build to wait until all parallel builds complete as the serial build can only run alone.

  • Changing Serial to SerialLatestOnly and triggering a new build will cause cancellation of all existing builds in queue, except the currently running build and the most recently created build. The newest build will execute next.
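
For example, a minimal sketch of switching an existing build configuration (named sample-build, as in the examples below) to the SerialLatestOnly policy with oc patch:

$ oc patch bc/sample-build -p '{"spec":{"runPolicy":"SerialLatestOnly"}}'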

Serial Run Policy

Setting the runPolicy field to Serial will cause all new builds created from the Build configuration to be run sequentially. That means there will be only one build running at a time and every new build will wait until the previous build completes. Using this policy will result in consistent and predictable build output. This is the default runPolicy.

Triggering three builds from the sample-build configuration, using the Serial policy will result in:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git           New
sample-build-3   Source    Git           New

When the sample-build-1 build completes, the sample-build-2 build will run:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Completed 43 seconds ago   34s
sample-build-2   Source    Git@1aa381b   Running   2 seconds ago    2s
sample-build-3   Source    Git           New

SerialLatestOnly Run Policy

Setting the runPolicy field to SerialLatestOnly will cause all new builds created from the Build configuration to be run sequentially, same as using the Serial run policy. The difference is that when a currently running build completes, the next build that will run is the latest build created. In other words, you do not wait for the queued builds to run, as they are skipped. Skipped builds are marked as Cancelled. This policy can be used for fast, iterative development.

Triggering three builds from the sample-build configuration, using the SerialLatestOnly policy will result in:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git           Cancelled
sample-build-3   Source    Git           New

The sample-build-2 build will be canceled (skipped) and the next build run after sample-build-1 completes will be the sample-build-3 build:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Completed 43 seconds ago   34s
sample-build-2   Source    Git           Cancelled
sample-build-3   Source    Git@1aa381b   Running   2 seconds ago    2s

Parallel Run Policy

Setting the runPolicy field to Parallel causes all new builds created from the build configuration to be run in parallel. This can produce unpredictable results, because the first build created can finish last and overwrite the pushed container image produced by a build that was created later but completed earlier.

Use the parallel run policy in cases where you do not care about the order in which the builds will complete.

Triggering three builds from the sample-build configuration, using the Parallel policy will result in three simultaneous builds:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git@a76d881   Running   15 seconds ago   3s
sample-build-3   Source    Git@689d111   Running   17 seconds ago   3s

The completion order is not guaranteed:

NAME             TYPE      FROM          STATUS    STARTED          DURATION
sample-build-1   Source    Git@e79d887   Running   13 seconds ago   13s
sample-build-2   Source    Git@a76d881   Running   15 seconds ago   3s
sample-build-3   Source    Git@689d111   Completed 17 seconds ago   5s

Build Output

Docker and Source builds result in the creation of a new container image. The image is then pushed to the registry specified in the output section of the Build specification.

If the output kind is ImageStreamTag, then the image will be pushed to the integrated OpenShift Enterprise registry and tagged in the specified image stream. If the output is of type DockerImage, then the name of the output reference will be used as a Docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build.

Example 2. Output to an ImageStreamTag
output:
  to:
    kind: "ImageStreamTag"
    name: "sample-image:latest"
Example 3. Output to a Docker Push Specification
output:
  to:
    kind: "DockerImage"
    name: "my-registry.mycompany.com:5000/myimages/myimage:tag"

Output Image Environment Variables

Docker and Source builds set the following environment variables on output images:

Variable Description

OPENSHIFT_BUILD_NAME

Name of the build

OPENSHIFT_BUILD_NAMESPACE

Namespace of the build

OPENSHIFT_BUILD_SOURCE

The source URL of the build

OPENSHIFT_BUILD_REFERENCE

The Git reference used in the build

OPENSHIFT_BUILD_COMMIT

Source commit used in the build

Output Image Labels

Docker and Source builds set the following labels on output images:

Label Description

io.openshift.build.commit.author

Author of the source commit used in the build

io.openshift.build.commit.date

Date of the source commit used in the build

io.openshift.build.commit.id

Hash of the source commit used in the build

io.openshift.build.commit.message

Message of the source commit used in the build

io.openshift.build.commit.ref

Branch or reference specified in the source

io.openshift.build.source-location

Source URL for the build

Using External Artifacts During a Build

It is not recommended to store binary files in a source repository. Therefore, you may find it necessary to define a build which pulls additional files (such as Java .jar dependencies) during the build process. How this is done depends on the build strategy you are using.

For a Source build strategy, you must put appropriate shell commands into the assemble script:

Example 4. .s2i/bin/assemble File
#!/bin/sh
APP_VERSION=1.0
wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar
Example 5. .s2i/bin/run File
#!/bin/sh
exec java -jar app.jar

For more information on how to control which assemble and run script is used by a Source build, see Overriding Builder Image Scripts.

For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction:

Example 6. Excerpt of Dockerfile
FROM jboss/base-jdk:8

ENV APP_VERSION 1.0
RUN wget http://repository.example.com/app/app-$APP_VERSION.jar -O app.jar

EXPOSE 8080
CMD [ "java", "-jar", "app.jar" ]

In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig, rather than updating the assemble script or Dockerfile.
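
For example, a minimal sketch (reusing the APP_VERSION variable and repository URL from the examples above) of an assemble script that reads the version from an environment variable:

#!/bin/sh
wget http://repository.example.com/app/app-${APP_VERSION}.jar -O app.jar

The corresponding strategy section of the BuildConfig would then define the variable:

sourceStrategy:
...
  env:
    - name: "APP_VERSION"
      value: "1.0"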

You can choose between the different methods of defining environment variables described earlier: the .s2i/environment file, the env array in the build strategy definition, or the --env flag of oc start-build.

Build Resources

By default, builds are completed by pods using unbound resources, such as memory and CPU. These resources can be limited by specifying resource limits in a project’s default container limits.

You can also limit resource use by specifying resource limits as part of the build configuration. In the following example, the resources, cpu, and memory parameters are each optional:

apiVersion: "v1"
kind: "BuildConfig"
metadata:
  name: "sample-build"
spec:
  resources:
    limits:
      cpu: "100m" (1)
      memory: "256Mi" (2)
1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).

However, if a quota has been defined for your project, one of the following two items is required:

  • A resources section set with an explicit requests:

    resources:
      requests: (1)
        cpu: "100m"
        memory: "256Mi"
    1 The requests object contains the list of resources that correspond to the list of resources in the quota.
  • A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the build process.

Otherwise, build pod creation will fail, citing a failure to satisfy quota.
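
For the limit range option, a minimal LimitRange sketch (the name and values here are illustrative) that provides defaults for containers created in the project could look like:

apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "limits"
spec:
  limits:
    - type: "Container"
      default:
        cpu: "100m"
        memory: "256Mi"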

Troubleshooting

Table 1. Troubleshooting Guidance for Builds
Issue Resolution

A build fails with:

requested access to the resource is denied

You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use:

$ oc describe quota