In OpenShift Container Platform 4.4, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content.
Create a mirror registry on your bastion host and obtain the imageContentSources data for your version of OpenShift Container Platform.
Because the installation media is on the bastion host, use that computer to complete all installation steps.
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.4 is compatible with your RHOSP version by consulting the architecture documentation’s list of available platforms. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix.
Verify that your network configuration does not rely on a provider network. Provider networks are not supported.
Have the metadata service enabled in RHOSP.
In OpenShift Container Platform 4.4, you can perform an installation that does not require an active connection to the Internet to obtain software components. You complete an installation in a restricted network on only infrastructure that you provision, not infrastructure that the installation program provisions, so your platform selection is limited.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.
Clusters in restricted networks have the following additional limitations and restrictions:
The ClusterVersion status includes an Unable to retrieve available updates error.
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:
Resource | Value |
---|---|
Floating IP addresses | 3 |
Ports | 15 |
Routers | 1 |
Subnets | 1 |
RAM | 112 GB |
vCPUs | 28 |
Volume storage | 275 GB |
Instances | 7 |
Security groups | 3 |
Security group rules | 60 |
A cluster might function with fewer than recommended resources, but its performance is not guaranteed.
If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry.
By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
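As a rough sketch, an administrator can review the current limits and raise them to match the table above with commands like the following; the exact values depend on your environment, and <project> is a placeholder for your project name:
$ openstack quota show <project>   # review the current limits first
$ openstack quota set --secgroups 3 --secgroup-rules 60 --instances 7 --cores 28 --ram 114688 <project>   # --ram is in MB; 114688 MB = 112 GB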
An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine.
By default, the OpenShift Container Platform installation process stands up three control plane and three compute machines.
Each machine requires:
An instance from the RHOSP quota
A port from the RHOSP quota
A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.
The bootstrap machine requires:
An instance from the RHOSP quota
A port from the RHOSP quota
A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space
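To confirm that a suitable flavor exists before you install, you can list the flavors in your project and inspect one; this is only a sketch, and m1.xlarge is an example flavor name that might differ in your environment:
$ openstack flavor list
$ openstack flavor show m1.xlarge   # check for at least 16 GB RAM, 4 vCPUs, and 25 GB of disk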
In OpenShift Container Platform 4.4, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
You must have Internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.
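For reference, the mirror registry is typically populated on the mirror host with the oc adm release mirror command, roughly as in the following sketch; the release version, registry host name, and repository name are placeholders for values from your environment:
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:<release_version> \
    --to=<bastion_host_name>:5000/<repo_name>/release \
    --to-release-image=<bastion_host_name>:5000/<repo_name>/release:<release_version>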
If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.
Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.
You have a RHOSP administrator account on the target environment.
The Swift service is installed.
On Ceph RGW, the account in url option is enabled.
To enable Swift on RHOSP:
As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift:
$ openstack role add --user <user> --project <project> swiftoperator
Your RHOSP deployment can now use Swift for the image registry.
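To confirm that the role was assigned, you can list the role assignments for the account; this optional check uses the same <user> and <project> placeholders as the previous command:
$ openstack role assignment list --user <user> --project <project> --names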
The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs.
Create the clouds.yaml file:
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication:
Copy the certificate authority file to your machine.
In the command line, run the following commands to add the machine to the certificate authority trust bundle:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config ConfigMap. On the command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
Place the clouds.yaml file in one of the following locations:
The value of the OS_CLIENT_CONFIG_FILE environment variable
The current directory
A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml
The installation program searches for clouds.yaml in that order.
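For example, to point the installation program at a file outside of the default search locations, export the environment variable before you run the installer; the path shown here is only a hypothetical location:
$ export OS_CLIENT_CONFIG_FILE=/home/<user>/restricted/clouds.yaml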
Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted-network Red Hat OpenStack Platform (RHOSP) environment.
Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your bastion host.
Log in to the Red Hat Customer Portal’s Product Downloads page.
Under Version, select the most recent release of OpenShift Container Platform 4.4 for RHEL 8.
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.
Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
Decompress the image.
You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz. If you do not know if or how the file is compressed, enter the following in a command line:
$ file <name_of_downloaded_file>
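For example, if the file command reports gzip compressed data, you can decompress the image as in the following sketch; the file name matches the upload example below, and the .gz extension is an assumption about how your download is compressed:
$ file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2.gz
$ gunzip rhcos-44.81.202003110027-0-openstack.x86_64.qcow2.gz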
Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example:
$ openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-${RHCOS_VERSION}
Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP.
The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment.
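As an optional check, confirm that the image exists and record its name or ID; this sketch assumes that the RHCOS_VERSION variable is still set from the previous command:
$ openstack image list
$ openstack image show rhcos-${RHCOS_VERSION}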
You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP).
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your bastion host.
Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location.
Have the imageContentSources values that were generated during mirror registry creation.
Create the install-config.yaml file.
Run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> (1)
1 | For <installation_directory>, specify the directory name to store the files that the installation program creates. |
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select openstack as the platform to target.
Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster.
Specify the floating IP address to use for external access to the OpenShift API.
Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute nodes.
Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
Enter a name for your cluster. The name must be 14 or fewer characters long.
Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.
In install-config.yaml, set the value of platform.openstack.clusterOSImage to the image location or name. For example:
platform:
  openstack:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.
Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<bastion_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
For <bastion_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.
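If you need to produce the base64-encoded value, one way to do so on a Linux host is shown in the following sketch; <user_name> and <password> are the credentials for your mirror registry:
$ echo -n '<user_name>:<password>' | base64 -w0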
Add the additionalTrustBundle parameter and value.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.
Add the image content resources, which look like this excerpt:
imageContentSources:
- mirrors:
  - <bastion_host_name>:5000/<repo_name>/release
  source: quay.example.com/openshift-release-dev/ocp-release
- mirrors:
  - <bastion_host_name>:5000/<repo_name>/release
  source: registry.example.com/ocp/release
To complete these values, use the imageContentSources that you recorded during mirror registry creation.
Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
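For example, a simple copy preserves the file before the installation program consumes it; the destination file name is only a suggestion:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup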
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
You cannot modify these parameters in the install-config.yaml file after installation.
Parameter | Description | Values |
---|---|---|
baseDomain | The base domain of your cloud provider. This value is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values in the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
controlPlane.platform | The cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, or {} |
compute.platform | The cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, or {} |
metadata.name | The name of your cluster. | A string that contains uppercase or lowercase letters, such as dev. |
platform.<platform>.region | The region to deploy your cluster in. | A valid region for your cloud, such as region1 for RHOSP. |
pullSecret | The pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. You use this pull secret to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. | A JSON object, for example '{"auths": ...}'. |
Parameter | Description | Values |
---|---|---|
sshKey | The SSH key to use to access your cluster machines. | A valid, local public SSH key that you added to the ssh-agent process. |
fips | Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. | false or true |
publish | How to publish the user-facing endpoints of your cluster. | Internal or External. The default value is External. |
compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. | Enabled or Disabled |
compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. | Enabled or Disabled |
controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
Parameter | Description | Values |
---|---|---|
compute.platform.openstack.rootVolume.size | For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. | Integer, for example 30. |
compute.platform.openstack.rootVolume.type | For compute machines, the root volume's type. | String, for example performance. |
controlPlane.platform.openstack.rootVolume.size | For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. | Integer, for example 30. |
controlPlane.platform.openstack.rootVolume.type | For control plane machines, the root volume's type. | String, for example performance. |
platform.openstack.cloud | The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. | String, for example MyCloud. |
platform.openstack.computeFlavor | The RHOSP flavor to use for control plane and compute machines. | String, for example m1.xlarge. |
platform.openstack.externalNetwork | The RHOSP external network name to be used for installation. | String, for example external. |
platform.openstack.lbFloatingIP | An existing floating IP address to associate with the load balancer API. | An IP address, for example 128.0.0.1. |
Parameter | Description | Values |
---|---|---|
platform.openstack.clusterOSImage | The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. | An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos. |
platform.openstack.defaultMachinePlatform | The default machine pool platform configuration. | An object that sets values, such as the flavor type and root volume size, for machine pools that do not define their own platform configuration. |
platform.openstack.externalDNS | IP addresses for external DNS servers that cluster instances use for DNS resolution. | A list of IP addresses as strings, for example ["8.8.8.8", "192.168.1.12"]. |
This sample install-config.yaml file for restricted OpenStack installations demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options.
This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: registry.svc.ci.openshift.org/ocp/release
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key. Do not specify an existing SSH key, as it will be overwritten. |
Running this command generates an SSH key that does not require a password in the location that you specified.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa. |
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
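When the installation program prompts you, you can display the public key and paste it in; the path and file name match the key that you generated earlier:
$ cat <path>/<file_name>.pub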
At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments.
You can configure the OpenShift Container Platform API and applications that run on the cluster to be accessible with or without floating IP addresses.
Create two floating IP (FIP) addresses: one for external access to the OpenShift Container Platform API, the API FIP, and one for OpenShift Container Platform applications, the apps FIP.
The API FIP is also used in the install-config.yaml file.
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external network>
Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external network>
To reflect the new FIPs, add records that follow these patterns to your DNS server:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
If you do not control the DNS server, you can add the record to your /etc/hosts file instead.
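You can list the floating IP addresses in your project at any time to confirm the values that you used in the records; this is an optional check:
$ openstack floating ip list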
You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
If you cannot use floating IP addresses, the OpenShift Container Platform installation might still finish. However, the installation program fails after it times out waiting for API access.
After the installation program times out, the cluster might still initialize. After the bootstrap process begins, it must complete. You must edit the cluster's networking configuration after it is deployed.
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Run the installation program:
$ ./openshift-install create cluster --dir=<installation_directory> \ (1)
    --log-level=info (2)
1 | For <installation_directory>, specify the location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn, debug, or error instead of info. |
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
You can verify your OpenShift Container Platform cluster’s status during or after installation.
In the cluster environment, export the administrator’s kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory>, specify the path to the directory that you stored the installation files in. |
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster’s version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Deploy an OpenShift Container Platform cluster.
Install the oc CLI.
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory>, specify the path to the directory that you stored the installation files in. |
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
system:admin
After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic.
The OpenShift Container Platform cluster is installed.
Floating IP addresses are enabled as described in Enabling access to the environment.
After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port:
Show the port:
$ openstack port show <cluster name>-<clusterID>-ingress-port
Attach the port to the IP address:
$ openstack floating ip set --port <ingress port ID> <apps FIP>
Add a wildcard A record for *.apps. to your DNS file:
*.apps.<cluster name>.<base domain> IN A <apps FIP>
If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to your /etc/hosts file instead.
If necessary, you can opt out of remote health reporting.
Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.