In OpenShift Container Platform version 4.11, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure.
OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. |
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide
ReadWriteMany
access modes.
The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. A quick connectivity check is shown after this list.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Be sure to also review this site list if you are configuring a proxy. |
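Before you start the installation, you can confirm the port 443 requirement from the machine that runs the installation program. This is a minimal sketch; the vCenter and ESXi host names are placeholders for your environment:

$ nc -vz vcenter.example.com 443        # vCenter (example host name)
$ nc -vz esxi01.example.com 443         # an ESXi host (example host name)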
In OpenShift Container Platform 4.11, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
You must install the OpenShift Container Platform cluster on a VMware vSphere version 7 instance that meets the requirements for the components that you use.
OpenShift Container Platform version 4.11 does not support VMware vSphere version 8.0. |
You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table:
Virtual environment product | Required version |
---|---|
VM hardware version | 15 or later |
vSphere ESXi hosts | 7 |
vCenter host | 7 |
Installing a cluster on VMware vSphere version 7.0 Update 1 or earlier is now deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform requires vSphere virtual hardware version 15 or later. Hardware version 15 is now the default for vSphere virtual machines in OpenShift Container Platform. To update the hardware version for your vSphere nodes, see the "Updating hardware on nodes running in vSphere" article. If your vSphere nodes are below hardware version 15 or your VMware vSphere version is earlier than 6.7.3, upgrading from OpenShift Container Platform 4.10 to OpenShift Container Platform 4.11 is not available. |
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 7 with HW version 15 | This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. |
Storage with in-tree drivers | vSphere 7 | This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. |
Optional: Networking (NSX-T) | vSphere 7 | vSphere 7 is required for OpenShift Container Platform. For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware’s NSX container plugin documentation. |
You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. |
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate.
Review the following details about the required network ports.
Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 1936 | Metrics |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | Virtual extensible LAN (VXLAN) |
UDP | 6081 | Geneve |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
UDP | 500 | IPsec IKE packets |
UDP | 4500 | IPsec NAT-T packets |
TCP/UDP | 30000-32767 | Kubernetes node port |
ESP | N/A | IPsec Encapsulating Security Payload (ESP) |
Protocol | Port | Description |
---|---|---|
TCP | 6443 | Kubernetes API |
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server and peer ports |
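To spot firewall problems before they surface as installation failures, you can probe a few of the listed ports between machines. This is a minimal sketch; the host names are placeholders, and it assumes the nc utility is available:

$ nc -vz control-plane-0.example.com 6443    # Kubernetes API (example host name)
$ nc -vz control-plane-0.example.com 10250   # one of the default ports that Kubernetes reserves
$ nc -vz compute-0.example.com 9100          # node exporter (example host name)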
To install the vSphere CSI Driver Operator, the following requirements must be met:
VMware vSphere version 7.0 Update 1 or later
Virtual machines of hardware version 15 or later
No third-party vSphere CSI driver already installed in the cluster
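To confirm that no third-party vSphere CSI driver is already installed, you can list the CSIDriver objects in the cluster. This is a minimal check; a vendor-installed vSphere driver typically registers the csi.vsphere.vmware.com name:

$ oc get csidriver    # look for an existing vSphere CSI driver entry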
If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. If you continue with the third-party vSphere CSI driver when upgrading to the next major version of OpenShift Container Platform, the upgrade process displays a warning message.
This warning message informs you that Red Hat does not support the third-party vSphere CSI driver during an OpenShift Container Platform upgrade operation. You can choose to ignore this message and continue with the upgrade operation. |
To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver.
To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere.
Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.
To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.
If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.
An additional role is required if the installation program is to create a vSphere virtual machine folder.
vSphere object for role | When required | Required privileges in vSphere API |
---|---|---|
vSphere vCenter | Always | |
vSphere vCenter Cluster | If VMs will be created in the cluster root | |
vSphere vCenter Resource Pool | If an existing resource pool is provided | |
vSphere Datastore | Always | |
vSphere Port Group | Always | |
Virtual Machine Folder | Always | |
vSphere vCenter Datacenter | If the installation program creates the virtual machine folder | |
vSphere object for role | When required | Required privileges in vCenter GUI |
---|---|---|
vSphere vCenter | Always | |
vSphere vCenter Cluster | If VMs will be created in the cluster root | |
vSphere vCenter Resource Pool | If an existing resource pool is provided | |
vSphere Datastore | Always | |
vSphere Port Group | Always | |
Virtual Machine Folder | Always | |
vSphere vCenter Datacenter | If the installation program creates the virtual machine folder | |
Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder.
vSphere object | When required | Propagate to children | Permissions required |
---|---|---|---|
vSphere vCenter | Always | False | Listed required privileges |
vSphere vCenter Datacenter | Existing folder | False | ReadOnly permissions |
vSphere vCenter Datacenter | Installation program creates the folder | True | Listed required privileges |
vSphere vCenter Cluster | Existing resource pool | False | ReadOnly permissions |
vSphere vCenter Cluster | VMs in cluster root | True | Listed required privileges |
vSphere vCenter Datastore | Always | False | Listed required privileges |
vSphere Switch | Always | False | ReadOnly permissions |
vSphere Port Group | Always | False | Listed required privileges |
vSphere vCenter Virtual Machine Folder | Existing folder | True | Listed required privileges |
vSphere vCenter Resource Pool | Existing resource pool | True | Listed required privileges |
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
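If you script the account setup, you can bundle the required privileges into a custom role and assign it to the installer account on the appropriate objects. The following is a rough sketch that assumes the govc CLI; the role name, principal, object path, and the abbreviated privilege list are placeholders, and the complete privilege lists come from the preceding tables:

$ govc role.create openshift-installer Resource.AssignVMToPool VirtualMachine.Config.AddNewDisk    # abbreviated privilege list (example role name)
$ govc permissions.set -principal installer@vsphere.local -role openshift-installer -propagate=true /MyDatacenter    # example principal and object path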
If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster.
OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion.
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues.
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules.
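As an illustration of an anti-affinity rule, a DRS rule can keep the control plane VMs on separate ESXi hosts. The following sketch assumes the govc CLI and example cluster and VM names; verify the exact flags against your govc version:

$ govc cluster.rule.create -name openshift-control-plane -cluster MyCluster -enable -anti-affinity \
    mycluster-abc12-master-0 mycluster-abc12-master-1 mycluster-abc12-master-2    # example cluster and VM names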
Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss.
OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.
When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.
A standard OpenShift Container Platform installation creates the following vCenter resources:
1 Folder
1 Tag category
1 Tag
Virtual machines:
1 template
1 temporary bootstrap node
3 control plane nodes
3 compute machines
Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster.
If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage.
Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.
You must use the Dynamic Host Configuration Protocol (DHCP) for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. In the DHCP lease, you must configure the DHCP to use the default gateway. All nodes must be in the same VLAN. You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster:
It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks cause errors, which the NTP server prevents. |
An installer-provisioned vSphere installation requires two static IP addresses:
The API address is used to access the cluster API.
The Ingress address is used for cluster ingress traffic.
You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.
You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
API VIP | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Ingress VIP | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
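You can confirm both records before you run the installation program. This is a minimal sketch that assumes example values for the cluster name and base domain:

$ dig +short api.mycluster.example.com          # should return the API virtual IP address
$ dig +short random.apps.mycluster.example.com  # any name under *.apps should return the Ingress virtual IP address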
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
Before you install OpenShift Container Platform, download the installation file on a local computer.
You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.
If you attempt to run the installation program on macOS, a known issue related to the |
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster.
From the vCenter home page, download the vCenter’s root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip
file downloads.
Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:
certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files
Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:
# cp certs/lin/* /etc/pki/ca-trust/source/anchors
Update your system trust. For example, on a Fedora operating system, run the following command:
# update-ca-trust extract
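To confirm that the vCenter certificate is now trusted, you can request the vCenter URL without disabling certificate verification. This is a minimal sketch; the host name is a placeholder:

$ curl -sS -o /dev/null -w '%{http_code}\n' https://vcenter.example.com/    # succeeds without --insecure once the root CA is trusted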
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the |
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the directory name to store the files that the installation program creates. |
2 | To view different installation details, specify warn , debug , or error instead of info . |
When specifying the directory:
Verify that the directory has the execute
permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
Optional: Select an SSH key to use to access your cluster machines.
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your |
Select vsphere as the platform to target.
Specify the name of your vCenter instance.
Specify the user name and password for the vCenter account that has the required permissions to create the cluster.
The installation program connects to your vCenter instance.
Some VMware vCenter Single Sign-On (SSO) environments with Active Directory (AD) integration might primarily require you to use the traditional login method. To ensure that vCenter account permission checks complete properly, consider using the User Principal Name (UPN) login method. |
Select the data center in your vCenter instance to connect to.
Select the default vCenter datastore to use.
Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. |
Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
Enter the virtual IP address that you configured for control plane API access.
Enter the virtual IP address that you configured for cluster ingress.
Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured.
Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. |
Paste the pull secret from the Red Hat OpenShift Cluster Manager.
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
|
You can install the OpenShift CLI (oc
) to interact with OpenShift Container Platform from a
command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture in the Product Variant drop-down menu.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file.
For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. |
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
After you install the cluster, you must create storage for the registry Operator.
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed
. This allows openshift-installer
to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed.
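For example, after storage is available you can switch the management state with a single patch command; the following is a minimal sketch of that change:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'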
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate
rollout strategy during upgrades.
As a cluster administrator, following installation you must configure your registry to use storage.
Cluster administrator permissions.
A cluster on VMware vSphere.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.
OpenShift Container Platform supports |
Must have "100Gi" capacity.
Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. |
To configure your registry to use storage, change the spec.storage.pvc
in the configs.imageregistry/cluster
resource.
When you use shared storage, review your security settings to prevent outside access. |
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry -l docker-registry=default
No resources found in openshift-image-registry namespace
If you do have a registry pod in your output, you do not need to continue with this procedure. |
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
storage:
pvc:
claim: (1)
1 | Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. |
Check the clusteroperator
status:
$ oc get clusteroperator image-registry
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
image-registry 4.7 True False False 6h50m
To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate
rollout strategy.
Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. |
Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate
rollout strategy, and runs with only 1
replica:
$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml
file with the following contents to define a VMware vSphere PersistentVolumeClaim
object:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: image-registry-storage (1)
namespace: openshift-image-registry (2)
spec:
accessModes:
- ReadWriteOnce (3)
resources:
requests:
storage: 100Gi (4)
1 | A unique name that represents the PersistentVolumeClaim object. |
2 | The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . |
3 | The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. |
4 | The size of the persistent volume claim. |
Enter the following command to create the PersistentVolumeClaim
object from the file:
$ oc create -f pvc.yaml -n openshift-image-registry
Enter the following command to edit the registry configuration so that it references the correct PVC:
$ oc edit config.imageregistry.operator.openshift.io -o yaml
storage:
pvc:
claim: (1)
1 | By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. |
For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.
OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.
To create a backup of persistent volumes:
Stop the application that is using the persistent volume.
Clone the persistent volume.
Restart the application.
Create a backup of the cloned volume.
Delete the cloned volume.
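The OpenShift Container Platform side of these steps can be scripted; the clone itself is performed with your vSphere tooling. The following sketch covers only stopping and restarting the application, and it assumes an example Deployment name and namespace:

$ oc scale deployment/my-app --replicas=0 -n my-namespace    # stop the application that uses the persistent volume
$ oc scale deployment/my-app --replicas=1 -n my-namespace    # restart the application after the clone completes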
In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
See About remote health monitoring for more information about the Telemetry service.
You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.
Configuring an external load balancer depends on your vendor’s load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor’s load balancer. |
Red Hat supports the following services for an external load balancer:
Ingress Controller
OpenShift API
OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:
For a front-end IP address, you can use the same IP address for both the Ingress Controller’s load balancer and the API load balancer. Check the vendor’s documentation for this capability.
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions:
Assign a static IP address to each control plane node.
Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.
You defined a front-end IP address.
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
Port 6443 provides access to the OpenShift API service.
Port 22623 can provide Ignition startup configurations to nodes.
The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623.
You defined a front-end IP address.
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
The front-end IP address and ports 80 and 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services:
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
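While you configure the load balancer, you can exercise these health check endpoints directly to confirm that the back ends report as healthy. This is a minimal sketch; the addresses are placeholders for a control plane node and an Ingress Controller node:

$ curl -k https://192.168.1.101:6443/readyz          # OpenShift API health (example address)
$ curl -k https://192.168.1.101:22623/healthz        # Machine Config API health
$ curl http://192.168.1.111:1936/healthz/ready       # Ingress Controller health (example address)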
Configure HAProxy so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80, as shown in the following example configuration:
#...
listen my-cluster-api-6443
bind 192.168.1.100:6443
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /readyz
http-check expect status 200
server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2
listen my-cluster-machine-config-api-22623
bind 192.168.1.100:22623
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz
http-check expect status 200
server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2
listen my-cluster-apps-443
bind 192.168.1.100:443
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz/ready
http-check expect status 200
server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
listen my-cluster-apps-80
bind 192.168.1.100:80
mode tcp
balance roundrobin
option httpchk
http-check connect
http-check send meth GET uri /healthz/ready
http-check expect status 200
server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
Use the curl
CLI command to verify that the external load balancer and its resources are operational:
Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Verify that the machine config server resource is accessible, by running the following command and observing the output:
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output:
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output:
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. |
Use the curl
CLI command to verify that the external load balancer and DNS record configuration are operational:
Verify that you can access the cluster API, by running the following command and observing the output:
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
"major": "1",
"minor": "11+",
"gitVersion": "v1.11.0+ad103ed",
"gitCommit": "ad103ed",
"gitTreeState": "clean",
"buildDate": "2019-01-09T06:44:10Z",
"goVersion": "go1.10.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Verify that you can access the cluster machine configuration, by running the following command and observing the output:
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that you can access each cluster application on port 80, by running the following command and observing the output:
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cacheHTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Verify that you can access each cluster application on port 443, by running the following command and observing the output:
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
If necessary, you can opt out of remote health reporting.
Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.