In OpenShift Container Platform version 4.12, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation.
The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
Installing a cluster on GCP into a shared VPC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system
namespace, you can manually create and maintain IAM credentials.
You have a GCP host project which contains a shared VPC network.
You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation. An example attachment command follows these prerequisites.
You have a GCP service account that has the required GCP permissions in the host project.
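For reference, attaching a service project to a shared VPC host project can be done with the gcloud CLI. The project IDs shown here are placeholders, and your organization might manage the attachment differently:
$ gcloud compute shared-vpc enable <host_project_id>
$ gcloud compute shared-vpc associated-projects add <service_project_id> \
    --host-project <host_project_id>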
In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities, such as ~/.ssh/id_rsa and ~/.ssh/id_dsa , are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
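Optionally, you can confirm that the agent now holds the key by listing its loaded identities:
$ ssh-add -l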
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml
file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names.
Installing the cluster requires that you manually create the installation configuration file.
You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. |
Customize the sample install-config.yaml
file template that is provided and save
it in the <installation_directory>
.
You must name this configuration file install-config.yaml . |
Back up the install-config.yaml
file so that you can use it to install
multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
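For example, a simple copy keeps a reusable version of the file outside the installation directory. The backup file name shown here is only an illustration:
$ cp <installation_directory>/install-config.yaml install-config.backup.yaml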
There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml
file which demonstrates these fields.
This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. |
apiVersion: v1
baseDomain: example.com
credentialsMode: Passthrough (1)
metadata:
  name: cluster_name
platform:
  gcp:
    computeSubnet: shared-vpc-subnet-1 (2)
    controlPlaneSubnet: shared-vpc-subnet-2 (3)
    createFirewallRules: Disabled (4)
    network: shared-vpc (5)
    networkProjectID: host-project-name (6)
    publicDNSZone:
      id: public-dns-zone (7)
      project: host-project-name (8)
    projectID: service-project-name (9)
    region: us-east1
    defaultMachinePlatform:
      tags: (10)
      - global-tag1
controlPlane:
  name: master
  platform:
    gcp:
      tags: (10)
      - control-plane-tag1
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
  replicas: 3
compute:
- name: worker
  platform:
    gcp:
      tags: (10)
      - compute-tag1
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA... (11)
1 | credentialsMode must be set to Passthrough to allow the cluster to use the provided GCP service account after cluster creation. See the "Prerequisites" section for the required GCP permissions that your service account must have. |
2 | The name of the subnet in the shared VPC for compute machines to use. |
3 | The name of the subnet in the shared VPC for control plane machines to use. |
4 | Optional. If you set createFirewallRules to Disabled , you can create and manage firewall rules manually through the use of network tags. By default, the cluster will automatically create and manage the firewall rules that are required for cluster communication. Your service account must have roles/compute.networkAdmin and roles/compute.securityAdmin privileges in the host project to perform these tasks automatically. If your service account does not have the roles/dns.admin privilege in the host project, it must have the dns.networks.bindPrivateDNSZone permission. An example of a manually created firewall rule follows these callouts. |
5 | The name of the shared VPC. |
6 | The name of the host project where the shared VPC exists. |
7 | Optional. The name of a public DNS zone in the host project. If you set this value, your service account must have the roles/dns.admin privilege in the host project. The public DNS zone domain must match the baseDomain parameter. If you do not set this value, the installation program will use the public DNS zone in the service project. |
8 | Optional. The name of the host project that contains the public DNS zone. This value is required if you specify a public DNS zone that exists in another project. |
9 | The name of the GCP project where you want to install the cluster. |
10 | Optional. If you want to manually create and manage your GCP firewall rules, you can set platform.gcp.createFirewallRules to Disabled and then specify one or more network tags. You can set tags on the compute machines, the control plane machines, or all machines. |
11 | You can optionally provide the sshKey value that you use to access the machines in your cluster. |
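For reference, if you disable automatic firewall rule creation as described in callouts 4 and 10, you must create equivalent rules yourself in the host project. The following is an illustrative sketch of one such rule, which allows access to the API server on machines tagged control-plane-tag1; it is not the complete set of rules that a cluster requires, and the rule name and source range are placeholders:
$ gcloud compute firewall-rules create <rule_name> \
    --project=host-project-name --network=shared-vpc \
    --allow=tcp:6443 --source-ranges=<allowed_cidr> \
    --target-tags=control-plane-tag1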
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file. |
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
The API version for the |
String |
|
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
Kubernetes resource |
Object |
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
The configuration for the specific platform upon which to perform the installation: |
Object |
|
Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. |
|
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. An illustrative example follows the table below.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. |
Parameter | Description | Values | ||
---|---|---|---|---|
|
The configuration for the cluster network. |
Object
|
||
|
The Red Hat OpenShift Networking network plugin to install. |
Either |
||
|
The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation.
The prefix length for an IPv4 block is between |
||
|
The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. |
A subnet prefix. The default value is 23 . |
||
|
The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. |
An array with an IP address block in CIDR format. For example:
|
||
|
The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use |
An IP network block in CIDR notation. For example,
|
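As an illustration of such customization, the following networking stanza expands the pod network beyond the /14 block shown in the sample file. The CIDR and prefix values are example assumptions, not recommendations for your environment:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/12
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16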
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||||
---|---|---|---|---|---|---|
|
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. |
String |
||||
|
Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. |
String array |
||||
|
Selects an initial set of optional capabilities to enable. Valid values are |
String |
||||
|
Extends the set of optional capabilities beyond what you specify in |
String array |
||||
|
The configuration for the machines that comprise the compute nodes. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
|
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
||||
|
Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". |
String. The name of the feature set to enable, such as |
||||
|
The configuration for the machines that comprise the control plane. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are |
String |
||||
|
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of control plane machines to provision. |
The only supported value is 3 . |
||||
|
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
If you are installing on GCP into a shared virtual private cloud (VPC),
|
|
||||
|
Enable or disable FIPS mode. The default is
|
|
||||
|
Sources and repositories for the release-image content. |
Array of objects. Includes a |
||||
|
Required if you use |
String |
||||
|
Specify one or more repositories that may also contain the same images. |
Array of strings |
||||
|
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
|
||||
|
The SSH key to authenticate access to your cluster machines.
|
For example, |
Additional GCP configuration parameters are described in the following table:
Parameter | Description | Values | ||
---|---|---|---|---|
|
The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must also set the networkProjectID parameter to the name of the GCP project that contains the shared VPC. |
String. |
||
|
Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. |
String. |
||
|
The name of the GCP project where the installation program installs the cluster. |
String. |
||
|
The name of the GCP region that hosts your cluster. |
Any valid region name, such as |
||
|
The name of the existing subnet where you want to deploy your control plane machines. |
The subnet name. |
||
|
The name of the existing subnet where you want to deploy your compute machines. |
The subnet name. |
||
|
Optional. Set this value to Disabled to disable automatic creation of the firewall rules that are required for cluster communication, and then manage those rules manually by using network tags. |
|
||
|
Optional. The name of the project that contains the public DNS zone. If you set this value, your service account must have the roles/dns.admin privilege in the host project. |
The name of the project that contains the public DNS zone. |
||
|
Optional. The ID or name of an existing public DNS zone. The public DNS zone domain must match the baseDomain parameter. |
The public DNS zone name. |
||
|
Optional. The name of the project that contains the private DNS zone. If you set this value, your service account must have the |
The name of the project that contains the private DNS zone. |
||
|
Optional. The ID or name of an existing private DNS zone. If you do not set this value, the installation program will create a private DNS zone in the service project. |
The private DNS zone name. |
||
|
A list of license URLs that must be applied to the compute images.
|
Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. |
||
|
The availability zones where the installation program creates machines. |
A list of valid GCP availability zones, such as |
||
|
The size of the disk in gigabytes (GB). |
Any size between 16 GB and 65536 GB. |
||
|
The GCP disk type. |
Either the default |
||
|
Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines. |
String. The name of GCP project where the image is located. |
||
|
The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use |
String. The name of the RHCOS image. |
||
|
Optional. Additional network tags to add to the control plane and compute machines. |
One or more strings, for example |
||
|
The GCP machine type for control plane and compute machines. |
The GCP machine type, for example |
||
|
The name of the customer managed encryption key to be used for machine disk encryption. |
The encryption key name. |
||
|
The name of the Key Management Service (KMS) key ring to which the KMS key belongs. |
The KMS key ring name. |
||
|
The GCP location in which the KMS key ring exists. |
The GCP location. |
||
|
The ID of the project in which the KMS key ring exists. This value defaults to the value of the |
The GCP project ID. |
||
|
The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google’s documentation on service accounts. |
The GCP service account email, for example |
||
|
The name of the customer managed encryption key to be used for control plane machine disk encryption. |
The encryption key name. |
||
|
For control plane machines, the name of the KMS key ring to which the KMS key belongs. |
The KMS key ring name. |
||
|
For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google’s documentation on Cloud KMS locations. |
The GCP location for the key ring. |
||
|
For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. |
The GCP project ID. |
||
|
The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google’s documentation on service accounts. |
The GCP service account email, for example |
||
|
The size of the disk in gigabytes (GB). This value applies to control plane machines. |
Any integer between 16 and 65536. |
||
|
The GCP disk type for control plane machines. |
Control plane machines must use the |
||
|
Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only. |
String. The name of GCP project where the image is located. |
||
|
The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use |
String. The name of the RHCOS image. |
||
|
Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the |
One or more strings, for example |
||
|
The GCP machine type for control plane machines. If set, this parameter overrides the |
The GCP machine type, for example |
||
|
The availability zones where the installation program creates control plane machines. |
A list of valid GCP availability zones, such as |
||
|
The name of the customer managed encryption key to be used for compute machine disk encryption. |
The encryption key name. |
||
|
For compute machines, the name of the KMS key ring to which the KMS key belongs. |
The KMS key ring name. |
||
|
For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google’s documentation on Cloud KMS locations. |
The GCP location for the key ring. |
||
|
For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. |
The GCP project ID. |
||
|
The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google’s documentation on service accounts. |
The GCP service account email, for example |
||
|
The size of the disk in gigabytes (GB). This value applies to compute machines. |
Any integer between 16 and 65536. |
||
|
The GCP disk type for compute machines. |
Either the default |
||
|
Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only. |
String. The name of GCP project where the image is located. |
||
|
The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use |
String. The name of the RHCOS image. |
||
|
Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the |
One or more strings, for example |
||
|
The GCP machine type for compute machines. If set, this parameter overrides the |
The GCP machine type, for example |
||
|
The availability zones where the installation program creates compute machines. |
A list of valid GCP availability zones, such as |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace that contains one or more additional CA
certificates that are required for proxying HTTPS connections. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges these contents
with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the RHCOS trust
bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field. |
If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program. |
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
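If you want to review the generated object after the cluster is running, a command such as the following, using the oc client that is installed later in this procedure, prints the cluster proxy configuration:
$ oc get proxy/cluster -o yaml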
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations (an example cleanup sketch follows this list):
The GOOGLE_CREDENTIALS
, GOOGLE_CLOUD_KEYFILE_JSON
, or GCLOUD_KEYFILE_JSON
environment variables
The ~/.gcp/osServiceAccount.json
file
The gcloud CLI
default credentials
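The following sketch shows one way to clear these locations. Depending on how your credentials were configured, only some of the steps apply; gcloud auth revoke also removes the active gcloud account if that account holds the unwanted credentials:
$ unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON
$ rm -f ~/.gcp/osServiceAccount.json
$ gcloud auth application-default revoke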
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. |
Optional: You can reduce the number of permissions for the service account that you used to install the cluster. Example commands follow these steps.
If you assigned the Owner
role to your service account, you can remove that role and replace it with the Viewer
role.
If you included the Service Account Key Admin
role,
you can remove it.
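A sketch of these changes with the gcloud CLI might look like the following. The project ID and service account email are placeholders:
$ gcloud projects remove-iam-policy-binding <service_project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/owner"
$ gcloud projects add-iam-policy-binding <service_project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/viewer"
$ gcloud projects remove-iam-policy-binding <service_project_id> \
    --member="serviceAccount:<service_account_email>" --role="roles/iam.serviceAccountKeyAdmin"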
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can install the OpenShift CLI (oc
) to interact with OpenShift Container Platform from a
command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
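For example, assuming /usr/local/bin appears in that output, you could place the binary there:
$ sudo mv oc /usr/local/bin/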
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.
For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. |
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
If the public DNS zone exists in a host project outside the project where you installed your cluster, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}.
record or specific records. You can use A, CNAME, and other record types per your requirements.
You completed the installation of OpenShift Container Platform on GCP into a shared VPC.
Your public DNS zone exists in a host project separate from the service project that contains your cluster.
Verify that the Ingress router has created a load balancer and populated the EXTERNAL-IP
field by running the following command:
$ oc -n openshift-ingress get service router-default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98
Record the external IP address of the router by running the following command:
$ oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'
Add a record to your GCP public zone with the router’s external IP address and the name *.apps.<cluster_name>.<cluster_domain>
. You can use the gcloud
command line utility or the GCP web console.
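For example, with the gcloud utility, a wildcard A record might be created as follows. The zone name, project, and IP address are placeholders for your environment:
$ gcloud dns record-sets create '*.apps.<cluster_name>.<cluster_domain>.' \
    --zone=<public_zone_name> --type=A --ttl=300 \
    --rrdatas=<router_external_ip> --project=<host_project_name>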
To add manual records instead of a wildcard record, create entries for each of the cluster’s current routes. You can gather these routes by running the following command:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.