Frequently, portions of a datacenter may not have access to the Internet, even via proxy servers. Installing OpenShift Enterprise in these environments is considered a disconnected installation.
An OpenShift Enterprise disconnected installation differs from a regular installation in two primary ways:
The OpenShift Enterprise software channels and repositories are not available via Red Hat’s content distribution network.
OpenShift Enterprise uses several containerized components. Normally, these images are pulled directly from Red Hat’s Docker registry. In a disconnected environment, this is not possible.
A disconnected installation ensures the OpenShift Enterprise software is made available to the relevant servers, then follows the same installation process as a standard connected installation. This topic additionally details how to manually download the Docker images and transport them onto the relevant servers.
Once installed, in order to use OpenShift Enterprise, you will need source code in a source control repository (for example, Git). This topic assumes that an internal Git repository is available that can host source code and that this repository is accessible from the OpenShift Enterprise nodes. Installing the source control repository is outside the scope of this document.
Also, when building applications in OpenShift Enterprise, your build may have some external dependencies, such as a Maven repository or Gem files for Ruby applications. For this reason, and because they might require certain tags, many of the Quickstart templates offered by OpenShift Enterprise may not work in a disconnected environment. However, while Red Hat Docker images try to reach out to external repositories by default, you can configure OpenShift Enterprise to use your own internal repositories. For the purposes of this document, we assume that such internal repositories already exist and are accessible from the OpenShift Enterprise node hosts. Installing such repositories is outside the scope of this document.
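As a purely illustrative sketch (the repository URL, group, and application name below are placeholders), once the environment is running, an application could be built from such an internal Git repository using one of the builder image streams set up later in this topic:

# oc new-app \
    openshift/webserver30-tomcat7-openshift~http://git.example.com/mygroup/myapp.git

For Maven-based builds, the resulting build configuration could then be pointed at an internal mirror, for example by adding a MAVEN_MIRROR_URL environment variable to the build strategy with oc edit bc/myapp, if the builder image in use supports that variable.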
You can also have a Red Hat Satellite server that provides access to Red Hat content via an intranet or LAN. For environments with Satellite, you can synchronize the OpenShift Enterprise software onto the Satellite for use with the OpenShift Enterprise servers. Red Hat Satellite 6.1 also introduces the ability to act as a Docker registry, and it can be used to host the OpenShift Enterprise containerized components. Doing so is outside of the scope of this document.
This document assumes that you understand OpenShift’s overall architecture and that you have already planned out what the topology of your environment will look like.
In order to pull down the required software repositories and Docker images, you will need a Red Hat Enterprise Linux (RHEL) 7 server with access to the Internet and at least 100GB of additional free space. All steps in this section should be performed on the Internet-connected server as the root system user.
Before you sync with the required repositories, you may need to import the appropriate GPG key:
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
If the key is not imported, packages that fail the GPG check are deleted after the repository is synced.
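To confirm which Red Hat GPG keys are already installed on the host, you can query the gpg-pubkey pseudo-packages (a standard rpm check; the exact output varies by system):

# rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'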
To sync the required repositories:
Register the server with the Red Hat Customer Portal. You must use the login and password associated with the account that has access to the OpenShift Enterprise subscriptions:
# subscription-manager register
Attach to a subscription that provides OpenShift Enterprise channels. You can find the list of available subscriptions using:
# subscription-manager list --available
Then, find the pool ID for the subscription that provides OpenShift Enterprise, and attach it:
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.1-rpms"
The yum-utils package provides the reposync utility, which lets you mirror yum repositories, and createrepo can create a usable yum repository from a directory. Install these packages, along with docker and git:
# yum -y install yum-utils createrepo docker git
You will need up to 110GB of free space in order to sync the software. Depending on how restrictive your organization’s policies are, you could re-connect this server to the disconnected LAN and use it as the repository server, or you could use USB-connected storage and transport the software to another server that will act as the repository server. This topic covers both options.
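Before starting the sync, it is worth confirming that the file system that will hold the repositories actually has the required free space, for example with a standard df check:

# df -h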
Make a path to where you want to sync the software (either locally or on your USB or other device):
# mkdir -p </path/to/repos>
Sync the packages and create a repository for each of them. You will need to modify the following command to use the path you created above:
# for repo in \
    rhel-7-server-rpms rhel-7-server-extras-rpms \
    rhel-7-server-ose-3.1-rpms
do
    reposync --gpgcheck -lm --repoid=${repo} --download_path=/path/to/repos
    createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
done
To sync the Docker images:
Start the Docker daemon:
# systemctl start docker
Pull all of the required OpenShift Enterprise containerized components:
# docker pull registry.access.redhat.com/openshift3/ose-haproxy-router:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-deployer:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-sti-builder:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-docker-builder:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-pod:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-docker-registry:v3.1.1.11
Pull all of the required OpenShift Enterprise containerized components for the additional centralized log aggregation and metrics aggregation components:
# docker pull registry.access.redhat.com/openshift3/logging-deployment
# docker pull registry.access.redhat.com/openshift3/logging-elasticsearch
# docker pull registry.access.redhat.com/openshift3/logging-kibana
# docker pull registry.access.redhat.com/openshift3/logging-fluentd
# docker pull registry.access.redhat.com/openshift3/logging-auth-proxy
# docker pull registry.access.redhat.com/openshift3/metrics-deployer
# docker pull registry.access.redhat.com/openshift3/metrics-hawkular-metrics
# docker pull registry.access.redhat.com/openshift3/metrics-cassandra
# docker pull registry.access.redhat.com/openshift3/metrics-heapster
Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your OpenShift environment. You can pull the following images:
jboss-amq-62
jboss-datagrid65-openshift
jboss-decisionserver62-openshift
jboss-eap64-openshift
jboss-eap70-openshift
jboss-webserver30-tomcat7-openshift
jboss-webserver30-tomcat8-openshift
mongodb
mysql
nodejs
perl
php
postgresql
python
redhat-sso70-openshift
ruby
Make sure to indicate the correct tag for the desired version number. For example, to pull both the previous and latest version of the Tomcat image:
# docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest
# docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
Docker images can be exported from a system by first saving them to a tarball and then transporting them:
Make and change into a repository home directory:
# mkdir </path/to/repos/images>
# cd </path/to/repos/images>
Export the OpenShift Enterprise containerized components:
# docker save -o ose3-images.tar \
    registry.access.redhat.com/openshift3/ose-haproxy-router \
    registry.access.redhat.com/openshift3/ose-deployer \
    registry.access.redhat.com/openshift3/ose-sti-builder \
    registry.access.redhat.com/openshift3/ose-docker-builder \
    registry.access.redhat.com/openshift3/ose-pod \
    registry.access.redhat.com/openshift3/ose-docker-registry
If you synchronized the metrics and log aggregation images, export them as well:
# docker save -o ose3-logging-metrics-images.tar \
    registry.access.redhat.com/openshift3/logging-deployment \
    registry.access.redhat.com/openshift3/logging-elasticsearch \
    registry.access.redhat.com/openshift3/logging-kibana \
    registry.access.redhat.com/openshift3/logging-fluentd \
    registry.access.redhat.com/openshift3/logging-auth-proxy \
    registry.access.redhat.com/openshift3/metrics-deployer \
    registry.access.redhat.com/openshift3/metrics-hawkular-metrics \
    registry.access.redhat.com/openshift3/metrics-cassandra \
    registry.access.redhat.com/openshift3/metrics-heapster
Export the S2I builder images that you synced in the previous section. For example, if you synced only the Tomcat image:
# docker save -o ose3-builder-images.tar \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
During the installation (and for later updates, should you so choose), you will need a web server to host the repositories. RHEL 7 can provide the Apache web server.
Option 1: Re-configuring as a Web Server
If you can re-connect the server where you synchronized the software and images to your LAN, then you can simply install Apache on the server:
# yum install httpd
Skip to Placing the Software.
Option 2: Building a Repository Server
If you need to build a separate server to act as the repository server, install a new RHEL 7 system with at least 110GB of space. During installation of this repository server, make sure you select the Basic Web Server option.
If necessary, attach the external storage, and then copy the repository files into Apache’s root folder. Note that the copy step below (cp -a) should be substituted with a move (mv) if you are repurposing the server you used to sync:
# cp -a /path/to/repos /var/www/html/
# chmod -R +r /var/www/html/repos
# restorecon -vR /var/www/html
Add the firewall rules:
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload
Enable and start Apache for the changes to take effect:
# systemctl enable httpd
# systemctl start httpd
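As a quick sanity check (the URL below assumes the repos directory used throughout this topic), confirm that the repository metadata is reachable over HTTP before moving on:

# curl -I http://localhost/repos/rhel-7-server-ose-3.1-rpms/repodata/repomd.xml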
At this point you can perform the initial creation of the hosts that will be part of the OpenShift Enterprise environment. It is recommended to use the latest version of RHEL 7 and to perform a minimal installation. You will also want to pay attention to the other OpenShift Enterprise-specific prerequisites.
Once the hosts are initially built, the repositories can be set up.
On all of the relevant systems that will need OpenShift Enterprise software components, create the required repository definitions. Place the following text in the /etc/yum.repos.d/ose.repo file, replacing <server_IP> with the IP or host name of the Apache server hosting the software repositories:
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0

[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0

[rhel-7-server-ose-3.1-rpms]
name=rhel-7-server-ose-3.1-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.1-rpms
enabled=1
gpgcheck=0
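Once the file is in place on a host, a quick check with standard yum commands confirms that all three repositories are visible:

# yum clean all
# yum repolist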
At this point, the systems are ready to continue to be prepared following the OpenShift Enterprise documentation.
Skip the section titled Registering the Hosts and start with Managing Packages.
To import the relevant components, securely copy the images from the connected host to the individual OpenShift Enterprise hosts:
# scp /var/www/html/repos/images/ose3-images.tar root@<openshift_host_name>:
# ssh root@<openshift_host_name> "docker load -i ose3-images.tar"
If you prefer, you could use wget on each OpenShift Enterprise host to fetch the tar file, and then perform the Docker import command locally. Perform the same steps for the metrics and logging images, if you synchronized them.
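For example, a minimal sketch of the wget alternative, assuming the web server and repos path set up earlier:

# wget http://<server_IP>/repos/images/ose3-images.tar
# docker load -i ose3-images.tar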
On the host that will act as an OpenShift Enterprise master, copy and import the builder images:
# scp /var/www/html/repos/images/ose3-builder-images.tar root@<openshift_master_host_name>:
# ssh root@<openshift_master_host_name> "docker load -i ose3-builder-images.tar"
You now need to create the internal Docker registry.
In one of the previous steps, the S2I images were imported into the Docker daemon running on one of the OpenShift Enterprise master hosts. In a connected installation, these images would be pulled from Red Hat’s registry on demand. Since the Internet is not available to do this, the images must be made available in another Docker registry.
OpenShift Enterprise provides an internal registry for storing the images that are built as a result of the S2I process, but it can also be used to hold the S2I builder images. The following steps assume you did not customize the service IP subnet (172.30.0.0/16) or the Docker registry port (5000).
On the master host where you imported the S2I builder images, obtain the service address of your Docker registry that you installed on the master:
# export REGISTRY=$(oc get service docker-registry -t '{{.spec.clusterIP}}{{"\n"}}')
Next, tag all of the builder images that you synced and exported before pushing them into the OpenShift Enterprise Docker registry. For example, if you synced and exported only the Tomcat image:
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
Pushing the Docker images into OpenShift Enterprise’s Docker registry requires a user with cluster-admin privileges. Because the default OpenShift system administrator does not have a standard authorization token, it cannot be used to log in to the Docker registry.
To create an administrative user:
Create a new user account in the authentication system you are using with OpenShift Enterprise. For example, if you are using local htpasswd-based authentication:
# htpasswd -b /etc/openshift/openshift-passwd <admin_username> <password>
The external authentication system now has a user account, but a user must log in to OpenShift Enterprise before an account is created in the internal database. Log in to OpenShift Enterprise for this account to be created. This assumes you are using the self-signed certificates generated by OpenShift Enterprise during the installation:
# oc login --certificate-authority=/etc/origin/master/ca.crt \
    -u <admin_username> https://<openshift_master_host>:8443
Get the user’s authentication token:
# MYTOKEN=$(oc whoami -t)
# echo $MYTOKEN
iwo7hc4XilD2KOLL4V1O55exH2VlPmLD-W2-JOd6Fko
Using oc login switches to the new user. Switch back to the OpenShift Enterprise system administrator in order to make policy changes:
# oc login -u system:admin
In order to push images into the OpenShift Enterprise Docker registry, an account must have the image-builder security role. Add this role to your OpenShift Enterprise administrative user:
# oadm policy add-role-to-user system:image-builder <admin_username>
Next, add the administrative role to the user in the openshift project. This allows the administrative user to edit the openshift project, and, in this case, push the Docker images:
# oadm policy add-role-to-user admin <admin_username> -n openshift
The openshift project is where all of the image streams for builder images are created by the installer. They are loaded by the installer from the /usr/share/openshift/examples directory. Change all of the definitions by deleting the image streams which had been loaded into OpenShift Enterprise’s database, then re-create them:
Delete the existing image streams:
# oc delete is -n openshift --all
Make a backup of the files in /usr/share/openshift/examples/ if you desire. Next, edit the image-streams-rhel7.json file in the /usr/share/openshift/examples/image-streams folder. You will find an image stream section for each of the builder images. Edit the spec stanza to point to your internal Docker registry.
For example, change:
"spec": { "dockerImageRepository": "registry.access.redhat.com/rhscl/mongodb-26-rhel7",
to:
"spec": { "dockerImageRepository": "172.30.69.44:5000/openshift/mongodb-26-rhel7",
In the above example, the repository name was changed from rhscl to openshift. You will need to make this change regardless of whether the repository is rhscl, openshift3, or another directory. Every definition should have the following format:
<registry_ip>:5000/openshift/<image_name>
Repeat this change for every image stream in the file. Ensure you use the correct IP address that you determined earlier. When you are finished, save and exit. Repeat the same process for the JBoss image streams in the /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json file.
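If you prefer to script the substitution instead of editing each entry by hand, the following is a rough sketch (not part of the official procedure) that uses the $REGISTRY variable set earlier to rewrite every registry.access.redhat.com repository into the <registry_ip>:5000/openshift/ form. Back up the file first and review the result before loading it:

# cp /usr/share/openshift/examples/image-streams/image-streams-rhel7.json{,.orig}
# sed -i "s|registry\.access\.redhat\.com/[^/\"]*/|${REGISTRY}:5000/openshift/|g" \
    /usr/share/openshift/examples/image-streams/image-streams-rhel7.json

The same pass can be applied to the jboss-image-streams.json file.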
Load the updated image stream definitions:
# oc create -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json -n openshift
# oc create -f /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json -n openshift
At this point the system is ready to load the Docker images.
Log in to the Docker registry using the token and registry service IP obtained earlier:
# docker login -u adminuser -e adminuser@abc.com \
    -p $MYTOKEN $REGISTRY:5000
Push the Docker images:
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
Verify that all the image streams now have the tags populated:
# oc get imagestreams -n openshift
NAME                                  DOCKER REPO                                                        TAGS                          UPDATED
jboss-webserver30-tomcat7-openshift   $REGISTRY/jboss-webserver-3/webserver30-jboss-tomcat7-openshift    1.1,1.1-2,1.1-6 + 2 more...   2 weeks ago
...
At this point, the OpenShift Enterprise environment is almost ready for use. It is likely that you will want to install and configure a router.
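Router installation is covered in its own section of the documentation. As a rough illustration only (the exact flags, credentials file, and service account setup depend on your release and configuration), a basic router is typically created on the master with oadm:

# oadm router router --replicas=1 \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router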