Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure

In OpenShift Container Platform version 4.12, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub.

While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud.

Prerequisites

Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.

You must have internet access to:

  • Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

  • Access Quay.io to obtain the packages that are required to install your cluster.

  • Obtain the packages that are required to perform cluster updates.

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

    If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
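
    For example, to create an RSA key instead (a sketch; the path and file name are arbitrary choices):

    $ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>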

  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874

      If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OpenShift Container Platform, provide the SSH public key to the installation program.

Uploading the RHCOS cluster image

You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment.

Prerequisites
  • Configure an Azure account.

Procedure
  1. Obtain the RHCOS VHD cluster image:

    1. Export the URL of the RHCOS VHD to an environment variable.

      $ export COMPRESSED_VHD_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location')
    2. Download the compressed RHCOS VHD file locally.

      $ curl -O -L ${COMPRESSED_VHD_URL}
  2. Decompress the VHD file.

    The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it.
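
    For example, assuming you kept the COMPRESSED_VHD_URL variable from the previous step and let curl save the file under its remote name, the following command decompresses it in place:

    $ gunzip "$(basename ${COMPRESSED_VHD_URL})"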

  3. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob by using the az CLI or the web portal.
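
    For example, the following az CLI commands upload the VHD as a page blob to a publicly readable container (a sketch only; the storage account, container, and file names are placeholders, and the az CLI must already be configured for your Azure Stack Hub environment):

    $ az storage container create --name vhd --account-name <storage_account> --public-access blob
    $ az storage blob upload --account-name <storage_account> --container-name vhd --name rhcos.vhd --file <local_vhd_file> --type page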

Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

  2. Select Azure as the cloud provider.

  3. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

  4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  5. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

Manually creating the installation configuration file

Installing the cluster requires that you manually create the installation configuration file.

Prerequisites
  • You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.

  • You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure
  1. Create an installation directory to store your required installation assets in:

    $ mkdir <installation_directory>

    You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

  2. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

    You must name this configuration file install-config.yaml.

    Make the following modifications:

    1. Specify the required installation parameters.

    2. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub.

    3. Optional: Update one or more of the default configuration parameters to customize the installation.

      For more information about the parameters, see "Installation configuration parameters".

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
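
    For example, a simple copy keeps a reusable version alongside the original (the .backup suffix is an arbitrary choice):

    $ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup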

Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment.

After installation, you cannot modify these parameters in the install-config.yaml file.

Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 1. Required parameters
Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

pullSecret

Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 2. Network parameters
Parameter Description Values

networking

The configuration for the cluster network.

Object

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The Red Hat OpenShift Networking network plugin to install.

Either OpenShiftSDN or OVNKubernetes. OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 3. Optional parameters
Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

capabilities

Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing.

String array

capabilities.baselineCapabilitySet

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, v4.12, and vCurrent. The default value is vCurrent.

String

capabilities.additionalEnabledCapabilities

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You may specify multiple capabilities in this parameter.

String array

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

featureSet

Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".

String. The name of the feature set to enable, such as TechPreviewNoUpgrade.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough, or Manual.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

If you are using Azure File storage, you cannot enable FIPS mode.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key to authenticate access to your cluster machines.

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

For example, sshKey: ssh-ed25519 AAAA...

Additional Azure Stack Hub configuration parameters

Additional Azure configuration parameters are described in the following table:

Table 4. Additional Azure Stack Hub parameters
Parameter Description Values

compute.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

compute.platform.azure.osDisk.diskType

Defines the type of disk.

standard_LRS or premium_LRS. The default is premium_LRS.

compute.platform.azure.type

Defines the Azure instance type for compute machines.

String

controlPlane.platform.azure.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 1024.

controlPlane.platform.azure.osDisk.diskType

Defines the type of disk.

premium_LRS.

controlPlane.platform.azure.type

Defines the Azure instance type for control plane machines.

String

platform.azure.defaultMachinePlatform.osDisk.diskSizeGB

The Azure disk size for the VM.

Integer that represents the size of the disk in GB. The default is 128.

platform.azure.defaultMachinePlatform.osDisk.diskType

Defines the type of disk.

standard_LRS or premium_LRS. The default is premium_LRS.

platform.azure.defaultMachinePlatform.type

The Azure instance type for control plane and compute machines.

The Azure instance type.

platform.azure.armEndpoint

The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides.

String

platform.azure.baseDomainResourceGroupName

The name of the resource group that contains the DNS zone for your base domain.

String, for example production_cluster.

platform.azure.region

The name of your Azure Stack Hub local region.

String

platform.azure.resourceGroupName

The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group.

String, for example existing_resource_group.

platform.azure.outboundType

The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing.

LoadBalancer or UserDefinedRouting. The default is LoadBalancer.

platform.azure.cloudName

The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints.

AzureStackCloud

clusterOSImage

The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD.

String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd

Sample customized install-config.yaml file for Azure Stack Hub

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.

apiVersion: v1
baseDomain: example.com (1)
credentialsMode: Manual
controlPlane:  (2) (3)
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 (4)
        diskType: premium_LRS
  replicas: 3
compute: (2)
- name: worker
  platform:
    azure:
      osDisk:
        diskSizeGB: 512 (4)
        diskType: premium_LRS
  replicas: 3
metadata:
  name: test-cluster  (1) (5)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes (6)
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: azurestack_arm_endpoint  (1) (7)
    baseDomainResourceGroupName: resource_group  (1) (8)
    region: azure_stack_local_region  (1) (9)
    resourceGroupName: existing_resource_group (10)
    outboundType: LoadBalancer
    cloudName: AzureStackCloud (1)
    clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd  (1) (11)
pullSecret: '{"auths": ...}'  (1) (12)
fips: false (13)
sshKey: ssh-ed25519 AAAA... (14)
additionalTrustBundle: | (15)
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
1 Required.
2 If you do not provide these parameters and values, the installation program provides the default value.
3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
4 You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB.
5 The name of the cluster.
6 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
7 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides.
8 The name of the resource group that contains the DNS zone for your base domain.
9 The name of your Azure Stack Hub local region.
10 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
11 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD.
12 The pull secret required to authenticate your cluster.
13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

14 You can optionally provide the sshKey value that you use to access the machines in your cluster.

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

15 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required.

Manually manage cloud credentials

The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider.

Procedure
  1. Generate the manifests by running the following command from the directory that contains the installation program:

    $ openshift-install create manifests --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  2. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command:

    $ openshift-install version
    Example output
    release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64
  3. Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command:

    $ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \
      --credentials-requests \
      --cloud=azure

    This command creates a YAML file for each CredentialsRequest object.

    Sample CredentialsRequest object
    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component-credentials-request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor
      ...
  4. Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

    Sample CredentialsRequest object with secrets
    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component-credentials-request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor
          ...
      secretRef:
        name: <component-secret>
        namespace: <component-namespace>
      ...
    Sample Secret object
    apiVersion: v1
    kind: Secret
    metadata:
      name: <component-secret>
      namespace: <component-namespace>
    data:
      azure_subscription_id: <base64_encoded_azure_subscription_id>
      azure_client_id: <base64_encoded_azure_client_id>
      azure_client_secret: <base64_encoded_azure_client_secret>
      azure_tenant_id: <base64_encoded_azure_tenant_id>
      azure_resource_prefix: <base64_encoded_azure_resource_prefix>
      azure_resourcegroup: <base64_encoded_azure_resourcegroup>
      azure_region: <base64_encoded_azure_region>
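
    The values in the data stanza must be base64 encoded. For example, you can produce each value with the base64 utility (a sketch; <azure_subscription_id> is a placeholder for your plain-text subscription ID):

      $ echo -n '<azure_subscription_id>' | base64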

    The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation.

    • If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail.

    • If you are using any of these features, you must create secrets for the corresponding objects.

    • To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command:

      $ grep "release.openshift.io/feature-set" *
      Example output
      0000_30_capi-operator_00_credentials-request.yaml:  release.openshift.io/feature-set: TechPreviewNoUpgrade

    Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.

Configuring the cluster to use an internal CA

If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA.

Prerequisites
  • Create the install-config.yaml file and specify the certificate trust bundle in .pem format.

  • Create the cluster manifests.

Procedure
  1. From the directory in which the installation program creates files, go to the manifests directory.

  2. Add user-ca-bundle to the spec.trustedCA.name field.

    Example cluster-proxy-01-config.yaml file
    apiVersion: config.openshift.io/v1
    kind: Proxy
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      trustedCA:
        name: user-ca-bundle
    status: {}
  3. Optional: Back up the manifests/cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster.

Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites
  • Configure an account with the cloud platform that hosts your cluster.

  • Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

  • Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure
  • Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ (1)
        --log-level=info (2)
    
    1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2 To view different installation details, specify warn, debug, or error instead of info.

    If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.

  • Credential information also outputs to <installation_directory>/.openshift_install.log.

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the architecture from the Product Variant drop-down list.

  3. Select the appropriate version from the Version drop-down list.

  4. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.

  5. Unpack the archive:

    $ tar xvf <file>
  6. Place the oc binary in a directory that is on your PATH.

    To check your PATH, execute the following command:

    $ echo $PATH
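
    For example, one common choice is to move the binary to /usr/local/bin (assuming that directory is on your PATH and you have sudo access):

    $ chmod +x oc
    $ sudo mv oc /usr/local/bin/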
Verification
  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version from the Version drop-down list.

  3. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.

  4. Unzip the archive with a ZIP program.

  5. Move the oc binary to a directory that is on your PATH.

    To check your PATH, open the command prompt and execute the following command:

    C:\> path
Verification
  • After you install the OpenShift CLI, it is available using the oc command:

    C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure
  1. Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.

  2. Select the appropriate version from the Version drop-down list.

  3. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.

    For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry.

  4. Unpack and unzip the archive.

  5. Move the oc binary to a directory on your PATH.

    To check your PATH, open a terminal and execute the following command:

    $ echo $PATH
Verification
  • After you install the OpenShift CLI, it is available using the oc command:

    $ oc <command>

Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
  • You deployed an OpenShift Container Platform cluster.

  • You installed the oc CLI.

Procedure
  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami
    Example output
    system:admin

Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites
  • You have access to the installation host.

  • You completed a cluster installation and all cluster Operators are available.

Procedure
  1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

    $ cat <installation_directory>/auth/kubeadmin-password

    Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

  2. List the OpenShift Container Platform web console route:

    $ oc get routes -n openshift-console | grep 'console-openshift'

    Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

    Example output
    console     console-openshift-console.apps.<cluster_name>.<base_domain>            console     https   reencrypt/Redirect   None
  3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.

After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
