The following sections define the primary supported build strategies, and how to use them.
OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation.
If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation.
You can replace the FROM instruction of the Dockerfile with the from of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced.
To replace the FROM instruction of the Dockerfile with the from of the BuildConfig object, add the following settings:
strategy:
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"
By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field.
The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile, or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile.
To use the dockerfilePath field so that the build uses a different path to locate your Dockerfile, set:
strategy:
  dockerStrategy:
    dockerfilePath: dockerfiles/app1/Dockerfile
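For context, the following is a minimal sketch of how dockerfilePath combines with contextDir in a full BuildConfig; the repository URL and resource name here are hypothetical:

kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "sample-docker-build"
spec:
  source:
    git:
      uri: "https://github.com/example/repo.git" # hypothetical repository
    contextDir: "app1"
  strategy:
    dockerStrategy:
      # resolved relative to contextDir: app1/dockerfiles/MyDockerfile
      dockerfilePath: dockerfiles/MyDockerfile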
To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration.
The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that they can be referenced later on within the Dockerfile.
The variables are defined during the build and persist in the output image, so they are also present in any container that runs that image.
For example, to define a custom HTTP proxy to be used during build and runtime:
dockerStrategy:
  ...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"
You can also manage environment variables defined in the build configuration with the oc set env command.
You can set docker build arguments using the buildArgs array. The build arguments are passed to docker when a build is started.
See Understand how ARG and FROM interact in the Dockerfile reference documentation.
To set docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example:
dockerStrategy:
  ...
  buildArgs:
    - name: "foo"
      value: "bar"
Only the name and value fields are supported. Any values on the valueFrom field are ignored.
Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image.
Set the imageOptimizationPolicy to SkipLayers:
strategy:
  dockerStrategy:
    imageOptimizationPolicy: SkipLayers
You can mount build volumes to give running builds access to information that you don’t want to persist in the output container image.
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.
The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.
In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example:
spec:
  dockerStrategy:
    volumes:
      - name: secret-mvn (1)
        mounts:
          - destinationPath: /opt/app-root/src/.ssh (2)
        source:
          type: Secret (3)
          secret:
            secretName: my-secret (4)
      - name: settings-mvn (1)
        mounts:
          - destinationPath: /opt/app-root/src/.m2 (2)
        source:
          type: ConfigMap (3)
          configMap:
            name: my-config (4)
      - name: my-csi-volume (1)
        mounts:
          - destinationPath: /opt/app-root/src/some_path (2)
        source:
          type: CSI (3)
          csi:
            driver: csi.sharedresource.openshift.io (5)
            readOnly: true (6)
            volumeAttributes: (7)
              attribute: value
1 | Required. A unique name. |
2 | Required. The absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. |
3 | Required. The type of source, ConfigMap , Secret , or CSI . |
4 | Required. The name of the source. |
5 | Required. The driver that provides the ephemeral CSI volume. |
6 | Optional. If true, this instructs the driver to provide a read-only volume. |
7 | Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver’s documentation for supported attribute keys and values. |
The Shared Resource CSI Driver is supported as a Technology Preview feature.
Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on.
Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images.
To create an incremental build, create a build configuration with the following modification to the strategy definition:
strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "incremental-image:latest" (1)
    incremental: true (2)
1 | Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. |
2 | This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. |
See S2I Requirements for information on how to create a builder image supporting incremental builds.
You can override the assemble, run, and save-artifacts source-to-image (S2I) scripts provided by the builder image.
To override the assemble, run, and save-artifacts S2I scripts provided by the builder image, either:
Provide an assemble, run, or save-artifacts script in the .s2i/bin directory of your application source repository.
Provide a URL of a directory containing the scripts as part of the strategy definition. For example:
strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"
    scripts: "http://somehost.com/scripts_directory" (1)
1 | This path will have run , assemble , and save-artifacts appended to it. If any or all scripts are found they will be used in place of the same named scripts provided in the image. |
Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository.
There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables you provide are present during the build process and in the output image.
Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image.
If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior, as the assemble script may use these variables.
For example, to disable assets compilation for your Rails application during the build:
Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file.
In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production:
Add RAILS_ENV=development to the .s2i/environment file.
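With both of the Rails settings from this section in place, the .s2i/environment file would contain the following lines; this is only a sketch combining the two examples above:

DISABLE_ASSET_COMPILATION=true
RAILS_ENV=development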
The complete list of supported environment variables is available in the using images section for each image.
You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code.
For example, to disable assets compilation for your Rails application:
sourceStrategy:
  ...
  env:
    - name: "DISABLE_ASSET_COMPILATION"
      value: "true"
The build environment section provides more advanced instructions.
You can also manage environment variables defined in the build configuration with the oc set env command.
Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script.
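As a hypothetical illustration (the patterns are examples, not defaults), a .s2iignore file lists one pattern per line:

*.tmp
docs/*
tests/fixtures/*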
Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output.
The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance: the build process and S2I scripts.
The build process consists of the following three fundamental elements, which are combined into a final container image:
Sources
Source-to-image (S2I) scripts
Builder image
S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah.
You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble/run/save-artifacts scripts. All of these locations are checked on each build in the following order:
A script specified in the build configuration.
A script found in the application source .s2i/bin directory.
A script found at the default image URL with the io.openshift.s2i.scripts-url label.
Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms:
image:///path_to_scripts_dir: absolute path inside the image to a directory where the S2I scripts are located.
file:///path_to_scripts_dir: relative or absolute path to a directory on the host where the S2I scripts are located.
http(s)://path_to_scripts_dir: URL to a directory where the S2I scripts are located.
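For example, a build configuration that points at scripts baked into the builder image itself might look like the following sketch; the in-image path is hypothetical:

strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"
    # hypothetical directory inside the builder image that holds the S2I scripts
    scripts: "image:///usr/local/s2i"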
Script | Description |
---|---|
assemble | The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: 1. Optional: Restore build artifacts (if you want to support incremental builds, make sure to define save-artifacts as well). 2. Place the application source in the desired location. 3. Build the application artifacts. 4. Install the artifacts into locations appropriate for them to run. |
run | The run script executes your application. This script is required. |
save-artifacts | The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. These dependencies are gathered into a tar file and streamed to the standard output. |
usage | The usage script allows you to inform the user how to properly use your image. This script is optional. |
test/run | The test/run script allows you to create a process to check whether the image is working correctly. This script is optional. |
Example S2I scripts
The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory.
assemble script:

#!/bin/bash
# restore build artifacts
if [ "$(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then
    mv /tmp/s2i/artifacts/* $HOME/.
fi

# move the application source
mv /tmp/s2i/src $HOME/src

# build application artifacts
pushd ${HOME}
make all

# install the artifacts
make install
popd
run script:

#!/bin/bash
# run the application
/opt/application/run.sh
save-artifacts script:

#!/bin/bash
pushd ${HOME}
if [ -d deps ]; then
    # all deps contents to tar stream
    tar cf - deps
fi
popd
usage script:

#!/bin/bash
# inform the user how to use the image
cat <<EOF
This is a S2I sample builder image, to use it, install
https://github.com/openshift/source-to-image
EOF
You can mount build volumes to give running builds access to information that you don’t want to persist in the output container image.
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image.
The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts.
In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example:
spec:
  sourceStrategy:
    volumes:
      - name: secret-mvn (1)
        mounts:
          - destinationPath: /opt/app-root/src/.ssh (2)
        source:
          type: Secret (3)
          secret:
            secretName: my-secret (4)
      - name: settings-mvn (1)
        mounts:
          - destinationPath: /opt/app-root/src/.m2 (2)
        source:
          type: ConfigMap (3)
          configMap:
            name: my-config (4)
      - name: my-csi-volume (1)
        mounts:
          - destinationPath: /opt/app-root/src/some_path (2)
        source:
          type: CSI (3)
          csi:
            driver: csi.sharedresource.openshift.io (5)
            readOnly: true (6)
            volumeAttributes: (7)
              attribute: value
1 | Required. A unique name. |
2 | Required. The absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. |
3 | Required. The type of source, ConfigMap , Secret , or CSI . |
4 | Required. The name of the source. |
5 | Required. The driver that provides the ephemeral CSI volume. |
6 | Optional. If true, this instructs the driver to provide a read-only volume. |
7 | Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver’s documentation for supported attribute keys and values. |
The Shared Resource CSI Driver is supported as a Technology Preview feature.
The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process.
A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images.
Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds.
You can use the customStrategy.from section to indicate the image to use for the custom build.
Set the customStrategy.from section:
strategy:
  customStrategy:
    from:
      kind: "DockerImage"
      name: "openshift/sti-image-builder"
In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod.
To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file:
strategy:
  customStrategy:
    secrets:
      - secretSource: (1)
          name: "secret1"
        mountPath: "/tmp/secret1" (2)
      - secretSource:
          name: "secret2"
        mountPath: "/tmp/secret2"
1 | secretSource is a reference to a secret in the same namespace as the build. |
2 | mountPath is the path inside the custom builder where the secret should be mounted. |
To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build.
Define a custom HTTP proxy to be used during build:
customStrategy:
  ...
  env:
    - name: "HTTP_PROXY"
      value: "http://myproxy.net:5187/"
To manage environment variables defined in the build configuration, enter the following command:
$ oc set env <enter_variables>
OpenShift Container Platform’s custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy.
A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images.
Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests.
Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build:
Variable Name | Description |
---|---|
|
The entire serialized JSON of the |
|
The URL of a Git repository with source to be built. |
|
Uses the same value as |
|
Specifies the subdirectory of the Git repository to be used when building. Only present if defined. |
|
The Git reference to be built. |
|
The version of the OpenShift Container Platform master that created this build object. |
|
The container image registry to push the image to. |
|
The container image tag name for the image being built. |
|
The path to the container registry credentials for running a |
Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform:
The Build object definition contains all the necessary information about input parameters for the build.
Run the build process.
If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables.
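Putting the pieces of this section together, a complete custom-strategy build configuration might look like the following sketch; it simply recombines the image, environment variable, and secret examples shown above under one hypothetical BuildConfig name:

kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "sample-custom-build" # hypothetical name
spec:
  strategy:
    type: Custom
    customStrategy:
      from:
        kind: "DockerImage"
        name: "openshift/sti-image-builder"
      env:
        - name: "HTTP_PROXY"
          value: "http://myproxy.net:5187/"
      secrets:
        - secretSource:
            name: "secret1"
          mountPath: "/tmp/secret1"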
The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Code Management system.
The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type.
Pipeline workflows are defined in a jenkinsfile, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Code Management system.
Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles, and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario.
OpenShift Container Platform Jenkins Sync Plugin
The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:
Dynamic job and run creation in Jenkins.
Dynamic creation of agent pod templates from image streams, image stream tags, or config maps.
Injection of environment variables.
Pipeline visualization in the OpenShift Container Platform web console.
Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin.
Synchronization of secrets into Jenkins credential entries.
OpenShift Container Platform Jenkins Client Plugin
The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc, which must be available on the nodes executing the script.
The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image.
For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options:
An inline jenkinsfile field within your build configuration.
A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir.
The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir. If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile.
The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Code Management system.
The jenkinsfile uses the standard Groovy language syntax to allow fine-grained control over the configuration, build, and deployment of your application.
You can supply the jenkinsfile in one of the following ways:
A file located within your source code repository.
Embedded as part of your build configuration using the jenkinsfile field.
When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations:
A file named jenkinsfile at the root of your repository.
A file named jenkinsfile at the root of the source contextDir of your repository.
A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository.
The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL.
To provide the Jenkins file, you can either:
Embed the Jenkins file in the build configuration.
Include in the build configuration a reference to the Git repository that contains the Jenkins file.
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "sample-pipeline"
spec:
strategy:
jenkinsPipelineStrategy:
jenkinsfile: |-
node('agent') {
stage 'build'
openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true')
stage 'deploy'
openshiftDeploy(deploymentConfig: 'frontend')
}
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "sample-pipeline"
spec:
source:
git:
uri: "https://github.com/openshift/ruby-hello-world"
strategy:
jenkinsPipelineStrategy:
jenkinsfilePath: some/repo/dir/filename (1)
1 | The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . |
The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Code Management system.
To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables are set as parameters for any Jenkins job associated with the build configuration.
To define environment variables to be used during build, edit the YAML file:
jenkinsPipelineStrategy:
  ...
  env:
    - name: "FOO"
      value: "BAR"
You can also manage environment variables defined in the build configuration with the oc set env command.
When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables.
After the Jenkins job’s initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs.
How you start builds for the Jenkins job dictates how the parameters are set.
If you start with oc start-build, the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence.
If you start with oc start-build -e, the values for the environment variables specified in the -e option take precedence.
If you specify an environment variable not listed in the build configuration, it is added as a Jenkins job parameter definition.
Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e take precedence.
If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job.
It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing.
The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Code Management system.
This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template.
Create the Jenkins master:

$ oc project <project_name> (1)
$ oc new-app jenkins-ephemeral (2)

1 | Select the project that you want to use or create a new project with oc new-project <project_name>. |
2 | If you want to use persistent storage, use jenkins-persistent instead. |
Create a file named nodejs-sample-pipeline.yaml with the following content:
This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application.
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "nodejs-sample-pipeline"
spec:
strategy:
jenkinsPipelineStrategy:
jenkinsfile: <pipeline content from below>
type: JenkinsPipeline
After you create a BuildConfig object with a jenkinsPipelineStrategy, tell the pipeline what to do by using an inline jenkinsfile:
This example does not set up a Git repository for the application.
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' (1)
def templateName = 'nodejs-mongodb-example' (2)
pipeline {
  agent {
    node {
      label 'nodejs' (3)
    }
  }
  options {
    timeout(time: 20, unit: 'MINUTES') (4)
  }
  stages {
    stage('preamble') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              echo "Using project: ${openshift.project()}"
            }
          }
        }
      }
    }
    stage('cleanup') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.selector("all", [ template : templateName ]).delete() (5)
              if (openshift.selector("secrets", templateName).exists()) { (6)
                openshift.selector("secrets", templateName).delete()
              }
            }
          }
        }
      }
    }
    stage('create') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.newApp(templatePath) (7)
            }
          }
        }
      }
    }
    stage('build') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              def builds = openshift.selector("bc", templateName).related('builds')
              timeout(5) { (8)
                builds.untilEach(1) {
                  return (it.object().status.phase == "Complete")
                }
              }
            }
          }
        }
      }
    }
    stage('deploy') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              def rm = openshift.selector("dc", templateName).rollout()
              timeout(5) { (9)
                openshift.selector("dc", templateName).related('pods').untilEach(1) {
                  return (it.object().status.phase == "Running")
                }
              }
            }
          }
        }
      }
    }
    stage('tag') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.tag("${templateName}:latest", "${templateName}-staging:latest") (10)
            }
          }
        }
      }
    }
  }
}
1 | Path of the template to use. |
2 | Name of the template that will be created. |
3 | Spin up a node.js agent pod on which to run this build. |
4 | Set a timeout of 20 minutes for this pipeline. |
5 | Delete everything with this template label. |
6 | Delete any secrets with this template label. |
7 | Create a new application from the templatePath . |
8 | Wait up to five minutes for the build to complete. |
9 | Wait up to five minutes for the deployment to complete. |
10 | If everything else succeeded, tag the ${templateName}:latest image as ${templateName}-staging:latest. A pipeline build configuration for the staging environment can watch for the ${templateName}-staging:latest image to change and then deploy it to the staging environment. |
The previous example was written using the declarative pipeline style, but the older scripted pipeline style is also supported.
Create the Pipeline BuildConfig in your OpenShift Container Platform cluster:
$ oc create -f nodejs-sample-pipeline.yaml
If you do not want to create your own file, you can use the sample from the Origin repository by running:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml
Start the Pipeline:
$ oc start-build nodejs-sample-pipeline
Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds → Pipeline section and clicking Start Pipeline, or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now.
Once the pipeline is started, you should see the following actions performed within your project:
A job instance is created on the Jenkins server.
An agent pod is launched, if your pipeline requires one.
The pipeline runs on the agent pod, or the master if no agent is required.
Any previously created resources with the template=nodejs-mongodb-example label will be deleted.
A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template.
A build will be started using the nodejs-mongodb-example BuildConfig.
The pipeline will wait until the build has completed to trigger the next stage.
A deployment will be started using the nodejs-mongodb-example deployment configuration.
The pipeline will wait until the deployment has completed to trigger the next stage.
If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest, matching the tag stage of the pipeline above.
The agent pod is deleted, if one was required for the pipeline.
The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds → Pipelines.
You can add a secret to your build configuration so that it can access a private repository.
To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console:
Create a new OpenShift Container Platform project.
Create a secret that contains credentials for accessing a private source code repository.
Create a build configuration.
On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret.
Click Save.
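A minimal sketch of how the source secret is then referenced from the build configuration; the secret and repository names here are hypothetical, and the field is source.sourceSecret:

source:
  git:
    uri: "git@github.com:example/private-app.git" # hypothetical private repository
  sourceSecret:
    name: "basicsecret" # the secret created in the previous steps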
You can enable pulling from a private registry by setting the pull secret, and pushing to a private registry by setting the push secret, in the build configuration.
To enable pulling from a private registry:
Set the pull secret in the build configuration.
To enable pushing:
Set the push secret in the build configuration.
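As a sketch, the two secrets live in different places in the BuildConfig: the push secret under spec.output, and the pull secret under the strategy in use. All names here are hypothetical:

spec:
  output:
    to:
      kind: "DockerImage"
      name: "private.registry.example.com/org/image:latest" # hypothetical target image
    pushSecret:
      name: "push-secret" # hypothetical push secret
  strategy:
    sourceStrategy:
      from:
        kind: "DockerImage"
        name: "private.registry.example.com/org/builder:latest" # hypothetical builder image
      pullSecret:
        name: "pull-secret" # hypothetical pull secret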