In OpenShift Container Platform version 4.17, you can install a private cluster into an existing VPC on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify
parameters in the install-config.yaml
file before you install the cluster.
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured a GCP project to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. |
To deploy a private cluster, you must:
Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Deploy from a machine that has access to:
The API services for the cloud to which you provision.
The hosts on the network that you provision.
The internet to obtain installation media.
You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
To create a private cluster on Google Cloud Platform (GCP), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
The cluster still requires access to the internet to access the GCP APIs.
The following items are not required or created when you install a private cluster:
Public subnets
Public network load balancers, which support public ingress
A public DNS zone that matches the baseDomain
for the cluster
The installation program does use the baseDomain
that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
Because it is not possible to limit access to external load balancers based on source tags, the private cluster uses only internal load balancers to allow access to internal instances.
The internal load balancer relies on instance groups rather than the target pools that the network load balancers use. The installation program creates instance groups for each zone, even if there is no instance in that group.
The cluster IP address is internal only.
One forwarding rule manages both the Kubernetes API and machine config server ports.
The backend service consists of each zone’s instance group and, while it exists, the bootstrap instance group.
The firewall uses a single rule that is based on only internal source ranges.
No health check for the Machine config server, /healthz
, runs because of a difference in load balancer functionality. Two internal load balancers cannot share a single IP address, but two network load balancers can share a single external IP address. Instead, the health of an instance is determined entirely by the /readyz
check on port 6443.
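If you have network access to the internal API endpoint, you can, for example, query this endpoint directly to see the same readiness signal. The host name shown here is illustrative and depends on your cluster name and base domain:
$ curl -k https://api.<cluster_name>.<base_domain>:6443/readyz
A ready API server typically returns ok.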
In OpenShift Container Platform 4.17, you can deploy a cluster into an existing VPC in Google Cloud Platform (GCP). If you do, you must also use existing subnets within the VPC and routing rules.
By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself.
The installation program will no longer create the following components:
VPC
Subnets
Cloud router
Cloud NAT
NAT IP addresses
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.
Your VPC and subnets must meet the following characteristics:
The VPC must be in the same GCP project that you deploy the OpenShift Container Platform cluster to.
To allow access to the internet from the control plane and compute machines, you must configure Cloud NAT on the subnets to allow egress to the internet, because these machines do not have public IP addresses. Even if you do not require access to the internet, you must allow egress to the VPC network so that the cluster can obtain the installation program and images. Because multiple Cloud NAT gateways cannot be configured on the shared subnets, the installation program cannot configure Cloud NAT for you.
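For illustration only, the following gcloud commands show one way to create a Cloud Router and a Cloud NAT gateway for an existing VPC. All resource names, the network, and the region are placeholders that you must adapt to your environment:
$ gcloud compute routers create <router_name> \
    --network=<vpc_name> \
    --region=<region>
$ gcloud compute routers nats create <nat_name> \
    --router=<router_name> \
    --region=<region> \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges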
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
All the subnets that you specify exist and belong to the VPC that you specified.
The subnet CIDRs belong to the machine CIDR.
You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types.
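To check these values yourself before you install the cluster, you can, for example, list the subnets in the VPC with the gcloud CLI and compare their CIDR ranges against the machine network CIDR in your install-config.yaml file. The network and region values are placeholders:
$ gcloud compute networks subnets list \
    --network=<vpc_name> \
    --regions=<region>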
If you destroy a cluster that uses an existing VPC, the VPC is not deleted.
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or Ingress rules.
The GCP credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes.
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is preserved by firewall rules that reference the machines in your cluster by the cluster’s infrastructure ID. Only traffic within the cluster is allowed.
If you deploy multiple clusters to the same VPC, the following components might share access between clusters:
The API, which is globally available with an external publishing strategy or available throughout the network in an internal publishing strategy
Debugging tools, such as ports on VM instances that are open to the machine CIDR for SSH and ICMP access
In OpenShift Container Platform 4.17, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
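For example, after installation you can connect to a node as the core user. The node IP address is a placeholder, and for a private cluster you must run this command from a host on the internal network:
$ ssh -i <path>/<file_name> core@<node_ip_address>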
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider from the Run it yourself section of the page.
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. |
Installing the cluster requires that you manually create the installation configuration file.
You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. |
Customize the sample install-config.yaml
file template that is provided and save
it in the <installation_directory>
.
You must name this configuration file install-config.yaml. |
Back up the install-config.yaml
file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
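For example, one simple way to keep a reusable copy before the installation program consumes the file (the backup file name is illustrative):
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup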
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2] |
---|---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
x86-64 architecture requires x86-64-v2 ISA
ARM64 architecture requires ARMv8.0-A ISA
IBM Power architecture requires Power 9 ISA
IBM Z architecture requires z14 ISA
For more information, see RHEL Architectures. |
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
The following Google Cloud Platform instance types have been tested with OpenShift Container Platform.
A2
A3
C2
C2D
C3
C3D
E2
M1
N1
N2
N2D
N4
Tau T2D
The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform.
Tau T2A
Using a custom machine type to install an OpenShift Container Platform cluster is supported.
Consider the following when using a custom machine type:
Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation".
The name of the custom machine type must adhere to the following syntax:
custom-<number_of_cpus>-<amount_of_memory_in_mb>
For example, custom-6-20480
.
As part of the installation process, you specify the custom machine type in the install-config.yaml
file.
install-config.yaml
file with a custom machine type
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
gcp:
type: custom-6-20480
replicas: 2
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
gcp:
type: custom-6-20480
replicas: 3
You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google’s documentation on Shielded VMs.
Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. |
You have created an install-config.yaml
file.
Use a text editor to edit the install-config.yaml
file prior to deploying your cluster and add one of the following stanzas:
To use shielded VMs for only control plane machines:
controlPlane:
platform:
gcp:
secureBoot: Enabled
To use shielded VMs for only compute machines:
compute:
- platform:
gcp:
secureBoot: Enabled
To use shielded VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
secureBoot: Enabled
You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google’s documentation on Confidential Computing. You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other.
Confidential VMs are currently not supported on 64-bit ARM architectures. |
You have created an install-config.yaml
file.
Use a text editor to edit the install-config.yaml
file prior to deploying your cluster and add one of the following stanzas:
To use confidential VMs for only control plane machines:
controlPlane:
platform:
gcp:
confidentialCompute: Enabled (1)
type: n2d-standard-8 (2)
onHostMaintenance: Terminate (3)
1 | Enable confidential VMs. |
2 | Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types. |
3 | Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. |
To use confidential VMs for only compute machines:
compute:
- platform:
gcp:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
To use confidential VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. |
apiVersion: v1
baseDomain: example.com (1)
credentialsMode: Mint (2)
controlPlane: (3) (4)
hyperthreading: Enabled (5)
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-ssd
diskSizeGB: 1024
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- control-plane-tag1
- control-plane-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
compute: (3) (4)
- hyperthreading: Enabled (5)
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-standard
diskSizeGB: 128
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- compute-tag1
- compute-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
metadata:
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (9)
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production (1)
region: us-central1 (1)
defaultMachinePlatform:
tags: (7)
- global-tag1
- global-tag2
osImage: (8)
project: example-project-name
name: example-image-name
network: existing_vpc (10)
controlPlaneSubnet: control_plane_subnet (11)
computeSubnet: compute_subnet (12)
pullSecret: '{"auths": ...}' (1)
fips: false (13)
sshKey: ssh-ed25519 AAAA... (14)
publish: Internal (15)
1 | Required. The installation program prompts you for this value. |
2 | Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. |
3 | If you do not provide these parameters and values, the installation program provides the default value. |
4 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen (-), and the first line of the controlPlane section must not. Only one control plane pool is used. |
5 | Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. |
6 | Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" → "Creating compute machine sets" → "Creating a compute machine set on GCP". |
7 | Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. |
8 | Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. |
9 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value. |
10 | Specify the name of an existing VPC. |
11 | Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. |
12 | Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. |
13 | Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. |
14 | You can optionally provide the sshKey value that you use to access the machines in your cluster. |
15 | How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External. |
You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers.
You created the install-config.yaml
file and completed any modifications to it.
Create an Ingress Controller with global access on a new GCP cluster.
Change to the directory that contains the installation program and create a manifest file:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | For <installation_directory> , specify the name of the directory that
contains the install-config.yaml file for your cluster. |
Create a file that is named cluster-ingress-default-ingresscontroller.yaml
in the <installation_directory>/manifests/
directory:
$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml (1)
1 | For <installation_directory> , specify the directory name that contains the
manifests/ directory for your cluster. |
After you create the file, several network configuration files are in the manifests/ directory, as shown:
$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
cluster-ingress-default-ingresscontroller.yaml
Open the cluster-ingress-default-ingresscontroller.yaml
file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
clientAccess
configuration to Global
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
name: default
namespace: openshift-ingress-operator
spec:
endpointPublishingStrategy:
loadBalancer:
providerParameters:
gcp:
clientAccess: Global (1)
type: GCP
scope: Internal (2)
type: LoadBalancerService
1 | Set gcp.clientAccess to Global . |
2 | Global access is only available to Ingress Controllers using internal load balancers. |
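After the cluster is installed, you can confirm that the setting was applied, for example by inspecting the default Ingress Controller:
$ oc -n openshift-ingress-operator get ingresscontroller/default -o yaml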
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the proxy
object’s spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
  httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
  noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace that contains one or more additional CA
certificates that are required for proxying HTTPS connections. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges these contents
with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the RHCOS trust
bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program. |
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
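After installation, you can inspect the resulting cluster-wide proxy configuration, for example:
$ oc get proxy/cluster -o yaml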
You can install the OpenShift CLI (oc
) to interact with
OpenShift Container Platform
from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.17 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.17 macOS Client entry and save the file.
For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. |
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
Verify your installation by using an oc
command:
$ oc <command>
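You can also confirm the installed client version, for example:
$ oc version --client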
By default, administrator secrets are stored in the kube-system
project. If you configured the credentialsMode
parameter in the install-config.yaml
file to Manual
, you must use one of the following alternatives:
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials.
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.
Add the following granular permissions to the GCP account that the installation program uses:
compute.machineTypes.list
compute.regions.list
compute.zones.list
dns.changes.create
dns.changes.get
dns.managedZones.create
dns.managedZones.delete
dns.managedZones.get
dns.managedZones.list
dns.networks.bindPrivateDNSZone
dns.resourceRecordSets.create
dns.resourceRecordSets.delete
dns.resourceRecordSets.list
If you did not set the credentialsMode
parameter in the install-config.yaml
configuration file to Manual
, modify the value as shown:
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory>
is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command creates a YAML file for each CredentialsRequest
object.
CredentialsRequest
object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: GCPProviderSpec
predefinedRoles:
- roles/storage.admin
- roles/iam.serviceAccountUser
skipServiceCheck: true
...
Create YAML files for secrets in the openshift-install
manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef
for each CredentialsRequest
object.
CredentialsRequest
object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
Secret
object
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
service_account.json: <base64_encoded_gcp_service_account_file>
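The service_account.json value is the base64-encoded contents of your GCP service account key file. For example, on a Linux host you might generate it as follows; the file name is a placeholder:
$ base64 -w0 <gcp_service_account_file>.json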
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. |
To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster.
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl
) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment. |
You have access to an OpenShift Container Platform account with cluster administrator access.
You have installed the OpenShift CLI (oc
).
You have added one of the following authentication options to the GCP account that the installation program uses:
The IAM Workload Identity Pool Admin role.
The following granular permissions:
compute.projects.get
iam.googleapis.com/workloadIdentityPoolProviders.create
iam.googleapis.com/workloadIdentityPoolProviders.get
iam.googleapis.com/workloadIdentityPools.create
iam.googleapis.com/workloadIdentityPools.delete
iam.googleapis.com/workloadIdentityPools.get
iam.googleapis.com/workloadIdentityPools.undelete
iam.roles.create
iam.roles.delete
iam.roles.list
iam.roles.undelete
iam.roles.update
iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.getIamPolicy
iam.serviceAccounts.list
iam.serviceAccounts.setIamPolicy
iam.workloadIdentityPoolProviders.get
iam.workloadIdentityPools.delete
resourcemanager.projects.get
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy
storage.buckets.create
storage.buckets.delete
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.objects.create
storage.objects.delete
storage.objects.list
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. |
Extract the ccoctl
binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
--file="/usr/bin/ccoctl.<rhel_version>" \(1)
-a ~/.pull-secret
1 | For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8: Specify this value for hosts that use RHEL 8. rhel9: Specify this value for hosts that use RHEL 9. |
Change the permissions to make ccoctl
executable by running the following command:
$ chmod 775 ccoctl.<rhel_version>
To verify that ccoctl
is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl.rhel9
OpenShift credentials provisioning tool
Usage:
ccoctl [command]
Available Commands:
aws Manage credentials objects for AWS cloud
azure Manage credentials objects for Azure
gcp Manage credentials objects for Google cloud
help Help about any command
ibmcloud Manage credentials objects for IBM Cloud
nutanix Manage credentials objects for Nutanix
Flags:
-h, --help help for ccoctl
Use "ccoctl [command] --help" for more information about a command.
You can use the ccoctl gcp create-all
command to automate the creation of GCP resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. |
You must have:
Extracted and prepared the ccoctl
binary.
Set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command might take a few moments to run. |
Use the ccoctl
tool to process all CredentialsRequest
objects by running the following command:
$ ccoctl gcp create-all \
--name=<name> \(1)
--region=<gcp_region> \(2)
--project=<gcp_project_id> \(3)
--credentials-requests-dir=<path_to_credentials_requests_directory> (4)
1 | Specify the user-defined name for all created GCP resources used for tracking. |
2 | Specify the GCP region in which cloud resources will be created. |
3 | Specify the GCP project ID in which cloud resources will be created. |
4 | Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. |
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. |
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests
directory:
$ ls <path_to_ccoctl_output_dir>/manifests
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml
openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-gcp-cloud-credentials-credentials.yaml
You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts.
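For example, you can list the service accounts in your project by using the gcloud CLI; the project ID is a placeholder:
$ gcloud iam service-accounts list --project=<gcp_project_id>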
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl
) created to the correct directories for the installation program.
You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility (ccoctl
).
You have created the cloud provider resources that are required for your cluster with the ccoctl
utility.
Add the following granular permissions to the GCP account that the installation program uses:
compute.machineTypes.list
compute.regions.list
compute.zones.list
dns.changes.create
dns.changes.get
dns.managedZones.create
dns.managedZones.delete
dns.managedZones.get
dns.managedZones.list
dns.networks.bindPrivateDNSZone
dns.resourceRecordSets.create
dns.resourceRecordSets.delete
dns.resourceRecordSets.list
If you did not set the credentialsMode
parameter in the install-config.yaml
configuration file to Manual
, modify the value as shown:
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory>
is the directory in which the installation program creates files.
Copy the manifests that the ccoctl
utility generated to the manifests
directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls
directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
The GOOGLE_CREDENTIALS
, GOOGLE_CLOUD_KEYFILE_JSON
, or GCLOUD_KEYFILE_JSON
environment variables
The ~/.gcp/osServiceAccount.json
file
The gcloud CLI
default credentials
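For example, to clear the environment variables listed above in your current shell session:
$ unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON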
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
Optional: You can reduce the number of permissions for the service account that you used to install the cluster.
If you assigned the Owner
role to your service account, you can remove that role and replace it with the Viewer
role.
If you included the Service Account Key Admin
role,
you can remove it.
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.