Installing a cluster into an existing VNet

About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.17, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.

By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

  • Subnets

  • Route tables

  • VNets

  • Network Security Groups

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
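For example, you might create a suitable VNet and its two subnets with the Azure CLI before you run the installation program. This is a minimal sketch only; the resource group, VNet, and subnet names are placeholders, and the CIDR ranges are examples:

$ az network vnet create --resource-group <vnet_resource_group> --name <vnet_name> \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name <control_plane_subnet> --subnet-prefixes 10.0.0.0/24
$ az network vnet subnet create --resource-group <vnet_resource_group> --vnet-name <vnet_name> \
    --name <compute_subnet> --address-prefixes 10.0.1.0/24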

The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.

Your VNet must meet the following characteristics:

  • The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

  • The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.

You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
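A minimal install-config.yaml fragment that reflects these requirements might look like the following sketch. The resource group, VNet, and subnet names are placeholders, and 10.0.0.0/16 is an example machine CIDR that the VNet address space must contain:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
platform:
  azure:
    networkResourceGroupName: <vnet_resource_group>
    virtualNetwork: <vnet_name>
    controlPlaneSubnet: <control_plane_subnet>
    computeSubnet: <compute_subnet>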

By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

  • All the specified subnets exist.

  • There are two private subnets, one for the control plane machines and one for the compute machines.

  • The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

Table 1. Required ports
Port     Description                                                                              Control plane    Compute

80       Allows HTTP traffic                                                                                       x
443      Allows HTTPS traffic                                                                                      x
6443     Allows communication to the control plane machines                                       x
22623    Allows internal communication to the machine config server for provisioning machines     x

  1. If you are using Azure Firewall to restrict internet access, you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed. For more information, see "Configuring your firewall" in "Additional resources".
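For example, a rule that allows API traffic on port 6443 from Table 1 might be added to an existing network security group with the Azure CLI. This is a sketch only; the resource group, network security group name, rule name, and priority are placeholders:

$ az network nsg rule create --resource-group <vnet_resource_group> --nsg-name <nsg_name> \
    --name allow-openshift-api --priority 101 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 6443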

Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.

To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.

Cluster components do not modify the user-provided network security groups. Instead, a pseudo-network security group is created for the Kubernetes controllers to update, so that their changes do not impact the rest of the environment.

Table 2. Ports used for all-machine to all-machine communications
Protocol   Port          Description

ICMP       N/A           Network reachability tests
TCP        1936          Metrics
TCP        9000-9999     Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099.
TCP        10250-10259   The default ports that Kubernetes reserves
UDP        4789          VXLAN
UDP        6081          Geneve
UDP        9000-9999     Host level services, including the node exporter on ports 9100-9101.
UDP        500           IPsec IKE packets
UDP        4500          IPsec NAT-T packets
UDP        123           Network Time Protocol (NTP) on UDP port 123. If you configure an external NTP time server, you must open UDP port 123.
TCP/UDP    30000-32767   Kubernetes node port
ESP        N/A           IPsec Encapsulating Security Payload (ESP)

Table 3. Ports used for control plane machine to control plane machine communications
Protocol   Port        Description

TCP        2379-2380   etcd server and peer ports

Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.

The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites
  • You have the OpenShift Container Platform installation program and the pull secret for your cluster.

  • You have an Azure subscription ID and tenant ID.

  • If you are installing the cluster using a service principal, you have its application ID and password.

  • If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.

  • If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:

    • You have its client ID.

    • You have assigned it to the virtual machine that you will run the installation program from.

Procedure
  1. Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.

    Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
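    For example, on a Linux host you might delete the file by running the following command:

    $ rm ~/.azure/osServicePrincipal.json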

  2. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.

      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select azure as the platform to target.

        If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.

      3. Enter the following Azure parameter values for your subscription:

        • azure subscription id: Enter the subscription ID to use for the cluster.

        • azure tenant id: Enter the tenant ID.

      4. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:

        • If you are using a service principal, enter its application ID.

        • If you are using a system-assigned managed identity, leave this value blank.

        • If you are using a user-assigned managed identity, specify its client ID.

      5. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:

        • If you are using a service principal, enter its password.

        • If you are using a system-assigned managed identity, leave this value blank.

        • If you are using a user-assigned managed identity, leave this value blank.

      6. Select the region to deploy the cluster to.

      7. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.

      8. Enter a descriptive name for your cluster.

        All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

  3. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

  4. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
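    For example, you might copy the file to a location outside the installation directory, where <backup_directory> is a placeholder for any directory of your choosing:

    $ cp <installation_directory>/install-config.yaml <backup_directory>/install-config.yaml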

If the installation program did not detect an osServicePrincipal.json configuration file from a previous installation, it creates one and stores it in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.

Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 4. Minimum resource requirements
Machine         Operating System                 vCPU [1]   Virtual RAM   Storage   Input/Output Per Second (IOPS) [2]

Bootstrap       RHCOS                            4          16 GB         100 GB    300
Control plane   RHCOS                            4          16 GB         100 GB    300
Compute         RHCOS, RHEL 8.6 and later [3]    2          8 GB          100 GB    300

  1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.

  2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.

  3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.

As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:

  • x86-64 architecture requires x86-64-v2 ISA

  • ARM64 architecture requires ARMv8.0-A ISA

  • IBM Power architecture requires Power 9 ISA

  • s390x architecture requires z14 ISA

For more information, see RHEL Architectures.

You are required to use Azure virtual machines that have the premiumIO parameter set to true.
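One way to check whether a virtual machine size supports premium IO is to query its capabilities with the Azure CLI. This is a sketch only; the region and size are examples, and a result of True indicates support:

$ az vm list-skus --location centralus --size Standard_D8s_v3 \
    --query "[0].capabilities[?name=='PremiumIO'].value" --output tsv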

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.

Additional resources

Tested instance types for Azure

The following Microsoft Azure instance types have been tested with OpenShift Container Platform.

Machine types based on 64-bit x86 architecture
  • standardBasv2Family

  • standardBSFamily

  • standardBsv2Family

  • standardDADSv5Family

  • standardDASv4Family

  • standardDASv5Family

  • standardDCACCV5Family

  • standardDCADCCV5Family

  • standardDCADSv5Family

  • standardDCASv5Family

  • standardDCSv3Family

  • standardDCSv2Family

  • standardDDCSv3Family

  • standardDDSv4Family

  • standardDDSv5Family

  • standardDLDSv5Family

  • standardDLSv5Family

  • standardDSFamily

  • standardDSv2Family

  • standardDSv2PromoFamily

  • standardDSv3Family

  • standardDSv4Family

  • standardDSv5Family

  • standardEADSv5Family

  • standardEASv4Family

  • standardEASv5Family

  • standardEBDSv5Family

  • standardEBSv5Family

  • standardECACCV5Family

  • standardECADCCV5Family

  • standardECADSv5Family

  • standardECASv5Family

  • standardEDSv4Family

  • standardEDSv5Family

  • standardEIADSv5Family

  • standardEIASv4Family

  • standardEIASv5Family

  • standardEIBDSv5Family

  • standardEIBSv5Family

  • standardEIDSv5Family

  • standardEISv3Family

  • standardEISv5Family

  • standardESv3Family

  • standardESv4Family

  • standardESv5Family

  • standardFXMDVSFamily

  • standardFSFamily

  • standardFSv2Family

  • standardGSFamily

  • standardHBrsv2Family

  • standardHBSFamily

  • standardHBv4Family

  • standardHCSFamily

  • standardHXFamily

  • standardLASv3Family

  • standardLSFamily

  • standardLSv2Family

  • standardLSv3Family

  • standardMDSHighMemoryv3Family

  • standardMDSMediumMemoryv2Family

  • standardMDSMediumMemoryv3Family

  • standardMIDSHighMemoryv3Family

  • standardMIDSMediumMemoryv2Family

  • standardMISHighMemoryv3Family

  • standardMISMediumMemoryv2Family

  • standardMSFamily

  • standardMSHighMemoryv3Family

  • standardMSMediumMemoryv2Family

  • standardMSMediumMemoryv3Family

  • StandardNCADSA100v4Family

  • Standard NCASv3_T4 Family

  • standardNCSv3Family

  • standardNDSv2Family

  • StandardNGADSV620v1Family

  • standardNPSFamily

  • StandardNVADSA10v5Family

  • standardNVSv3Family

  • standardXEISv4Family

Tested instance types for Azure on 64-bit ARM infrastructures

The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.

Machine types based on 64-bit ARM architecture
  • standardBpsv2Family

  • standardDPSv5Family

  • standardDPDSv5Family

  • standardDPLDSv5Family

  • standardDPLSv5Family

  • standardEPSv5Family

  • standardEPDSv5Family

Enabling trusted launch for Azure VMs

You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.

For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes.

Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites
  • You have created an install-config.yaml file.

Procedure
  • Edit the install-config.yaml file before deploying your cluster:

    • Enable trusted launch only on control plane nodes by adding the following stanza:

      controlPlane:
        platform:
          azure:
            settings:
              securityType: TrustedLaunch
              trustedLaunch:
                uefiSettings:
                  secureBoot: Enabled
                  virtualizedTrustedPlatformModule: Enabled
    • Enable trusted launch only on compute nodes by adding the following stanza:

      compute:
        platform:
          azure:
            settings:
              securityType: TrustedLaunch
              trustedLaunch:
                uefiSettings:
                  secureBoot: Enabled
                  virtualizedTrustedPlatformModule: Enabled
    • Enable trusted launch on all nodes by adding the following stanza:

      platform:
        azure:
          settings:
            securityType: TrustedLaunch
            trustedLaunch:
              uefiSettings:
                secureBoot: Enabled
                virtualizedTrustedPlatformModule: Enabled

Enabling confidential VMs

You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.

Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can use confidential VMs with the following VM sizes:

  • DCasv5-series

  • DCadsv5-series

  • ECasv5-series

  • ECadsv5-series

Confidential VMs are currently not supported on 64-bit ARM architectures.
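For example, to run compute nodes on a DCasv5-series size, you might combine the confidential VM settings shown in the procedure below with an instance type set in the install-config.yaml file. Standard_DC4as_v5 is an illustrative choice only; confirm that the size is available in your region:

compute:
- name: worker
  platform:
    azure:
      type: Standard_DC4as_v5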

Prerequisites
  • You have created an install-config.yaml file.

Procedure
  • Edit the install-config.yaml file before deploying your cluster:

    • Enable confidential VMs only on control plane nodes by adding the following stanza:

      controlPlane:
        platform:
          azure:
            settings:
              securityType: ConfidentialVM
              confidentialVM:
                uefiSettings:
                  secureBoot: Enabled
                  virtualizedTrustedPlatformModule: Enabled
            osDisk:
              securityProfile:
                securityEncryptionType: VMGuestStateOnly
    • Enable confidential VMs only on compute nodes by adding the following stanza:

      compute:
        platform:
          azure:
            settings:
              securityType: ConfidentialVM
              confidentialVM:
                uefiSettings:
                  secureBoot: Enabled
                  virtualizedTrustedPlatformModule: Enabled
            osDisk:
              securityProfile:
                securityEncryptionType: VMGuestStateOnly
    • Enable confidential VMs on all nodes by adding the following stanza:

      platform:
        azure:
          settings:
            securityType: ConfidentialVM
            confidentialVM:
              uefiSettings:
                secureBoot: Enabled
                virtualizedTrustedPlatformModule: Enabled
          osDisk:
            securityProfile:
              securityEncryptionType: VMGuestStateOnly

Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2)
  hyperthreading: Enabled  (3) (4)
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 (5)
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: (2)
- hyperthreading: Enabled  (3) (4)
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 (5)
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: (6)
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster (1)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes (7)
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: (8)
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group (9)
    region: centralus (1)
    resourceGroupName: existing_resource_group (10)
    networkResourceGroupName: vnet_resource_group (11)
    virtualNetwork: vnet (12)
    controlPlaneSubnet: control_plane_subnet (13)
    computeSubnet: compute_subnet (14)
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' (1)
fips: false (15)
sshKey: ssh-ed25519 AAAA... (16)
1 Required. The installation program prompts you for this value.
2 If you do not provide these parameters and values, the installation program provides the default value.
3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
6 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
7 The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
8 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
9 Specify the name of the resource group that contains the DNS zone for your base domain.
10 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
11 If you use an existing VNet, specify the name of the resource group that contains it.
12 If you use an existing VNet, specify its name.
13 If you use an existing VNet, specify the name of the subnet to host the control plane machines.
14 If you use an existing VNet, specify the name of the subnet to host the compute machines.
15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.

When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites
  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
    5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.

    The installation program does not support the proxy readinessEndpoints field.

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.
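After installation, you can inspect the applied proxy settings by viewing this object, for example:

$ oc get proxy/cluster -o yaml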

Additional resources

Alternatives to storing administrator-level secrets in the kube-system project

By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:

Manually creating long-term credentials

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.

Procedure
  1. If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

    Sample configuration file snippet
    apiVersion: v1
    baseDomain: example.com
    credentialsMode: Manual
    # ...
  2. If you have not previously created installation manifest files, do so by running the following command:

    $ openshift-install create manifests --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  3. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  4. Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --included \(1)
      --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
      --to=<path_to_directory_for_credentials_requests> (3)
    1 The --included parameter includes only the manifests that your specific cluster configuration requires.
    2 Specify the location of the install-config.yaml file.
    3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

    This command creates a YAML file for each CredentialsRequest object.

    Sample CredentialsRequest object
    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor
      ...
  5. Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

    Sample CredentialsRequest object with secrets
    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: <component_credentials_request>
      namespace: openshift-cloud-credential-operator
      ...
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AzureProviderSpec
        roleBindings:
        - role: Contributor
          ...
      secretRef:
        name: <component_secret>
        namespace: <component_namespace>
      ...
    Sample Secret object
    apiVersion: v1
    kind: Secret
    metadata:
      name: <component_secret>
      namespace: <component_namespace>
    data:
      azure_subscription_id: <base64_encoded_azure_subscription_id>
      azure_client_id: <base64_encoded_azure_client_id>
      azure_client_secret: <base64_encoded_azure_client_secret>
      azure_tenant_id: <base64_encoded_azure_tenant_id>
      azure_resource_prefix: <base64_encoded_azure_resource_prefix>
      azure_resourcegroup: <base64_encoded_azure_resourcegroup>
      azure_region: <base64_encoded_azure_region>

Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
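One way to check this is to inspect the Upgradeable condition of the cloud-credential cluster Operator, for example:

$ oc get clusteroperator cloud-credential \
    -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}{"\n"}'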

Configuring an Azure cluster to use short-term credentials

To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.

Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites
  • You have access to an OpenShift Container Platform account with cluster administrator access.

  • You have installed the OpenShift CLI (oc).

  • You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions (one way to grant them is sketched after this list):

    Required Azure permissions
    • Microsoft.Resources/subscriptions/resourceGroups/read

    • Microsoft.Resources/subscriptions/resourceGroups/write

    • Microsoft.Resources/subscriptions/resourceGroups/delete

    • Microsoft.Authorization/roleAssignments/read

    • Microsoft.Authorization/roleAssignments/delete

    • Microsoft.Authorization/roleAssignments/write

    • Microsoft.Authorization/roleDefinitions/read

    • Microsoft.Authorization/roleDefinitions/write

    • Microsoft.Authorization/roleDefinitions/delete

    • Microsoft.Storage/storageAccounts/listkeys/action

    • Microsoft.Storage/storageAccounts/delete

    • Microsoft.Storage/storageAccounts/read

    • Microsoft.Storage/storageAccounts/write

    • Microsoft.Storage/storageAccounts/blobServices/containers/write

    • Microsoft.Storage/storageAccounts/blobServices/containers/delete

    • Microsoft.Storage/storageAccounts/blobServices/containers/read

    • Microsoft.ManagedIdentity/userAssignedIdentities/delete

    • Microsoft.ManagedIdentity/userAssignedIdentities/read

    • Microsoft.ManagedIdentity/userAssignedIdentities/write

    • Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read

    • Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write

    • Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete

    • Microsoft.Storage/register/action

    • Microsoft.ManagedIdentity/register/action
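    One way to grant these permissions is to define a custom Azure role that lists them as actions and assign it to the identity that the ccoctl utility uses. The following sketch assumes a role definition file named ccoctl-role.json that contains the permissions above under "Actions"; the file name, role name, identity, and scope are placeholders:

    $ az role definition create --role-definition @ccoctl-role.json
    $ az role assignment create --assignee <client_id> \
        --role "<custom_role_name>" --scope /subscriptions/<subscription_id>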

Procedure
  1. Set a variable for the OpenShift Container Platform release image by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)

    Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

  3. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

    $ oc image extract $CCO_IMAGE \
      --file="/usr/bin/ccoctl.<rhel_version>" \(1)
      -a ~/.pull-secret
    1 For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
    • rhel8: Specify this value for hosts that use RHEL 8.

    • rhel9: Specify this value for hosts that use RHEL 9.

  4. Change the permissions to make ccoctl executable by running the following command:

    $ chmod 775 ccoctl.<rhel_version>
Verification
  • To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

    $ ./ccoctl.rhel9
    Example output
    OpenShift credentials provisioning tool
    
    Usage:
      ccoctl [command]
    
    Available Commands:
      aws          Manage credentials objects for AWS cloud
      azure        Manage credentials objects for Azure
      gcp          Manage credentials objects for Google cloud
      help         Help about any command
      ibmcloud     Manage credentials objects for IBM Cloud
      nutanix      Manage credentials objects for Nutanix
    
    Flags:
      -h, --help   help for ccoctl
    
    Use "ccoctl [command] --help" for more information about a command.

Creating Azure resources with the Cloud Credential Operator utility

You can use the ccoctl azure create-all command to automate the creation of Azure resources.

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

  • Extracted and prepared the ccoctl binary.

  • Access to your Microsoft Azure account by using the Azure CLI.

Procedure
  1. Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

    $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
  2. Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

    $ oc adm release extract \
      --from=$RELEASE_IMAGE \
      --credentials-requests \
      --included \(1)
      --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
      --to=<path_to_directory_for_credentials_requests> (3)
    1 The --included parameter includes only the manifests that your specific cluster configuration requires.
    2 Specify the location of the install-config.yaml file.
    3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

    This command might take a few moments to run.

  3. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:

    $ az login
  4. Use the ccoctl tool to process all CredentialsRequest objects by running the following command:

    $ ccoctl azure create-all \
      --name=<azure_infra_name> \(1)
      --output-dir=<ccoctl_output_dir> \(2)
      --region=<azure_region> \(3)
      --subscription-id=<azure_subscription_id> \(4)
      --credentials-requests-dir=<path_to_credentials_requests_directory> \(5)
      --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \(6)
      --tenant-id=<azure_tenant_id> (7)
    1 Specify the user-defined name for all created Azure resources used for tracking.
    2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
    3 Specify the Azure region in which cloud resources will be created.
    4 Specify the Azure subscription ID to use.
    5 Specify the directory containing the files for the component CredentialsRequest objects.
    6 Specify the name of the resource group containing the cluster’s base domain Azure DNS zone.
    7 Specify the Azure tenant ID to use.

    If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

    To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.

Verification
  • To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

    $ ls <path_to_ccoctl_output_dir>/manifests
    Example output
    azure-ad-pod-identity-webhook-config.yaml
    cluster-authentication-02-config.yaml
    openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
    openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
    openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
    openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
    openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
    openshift-image-registry-installer-cloud-credentials-credentials.yaml
    openshift-ingress-operator-cloud-credentials-credentials.yaml
    openshift-machine-api-azure-cloud-credentials-credentials.yaml

    You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.

Incorporating the Cloud Credential Operator utility manifests

To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.

Prerequisites
  • You have configured an account with the cloud platform that hosts your cluster.

  • You have configured the Cloud Credential Operator utility (ccoctl).

  • You have created the cloud provider resources that are required for your cluster with the ccoctl utility.

Procedure
  1. If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

    Sample configuration file snippet
    apiVersion: v1
    baseDomain: example.com
    credentialsMode: Manual
    # ...
  2. If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown:

    Sample configuration file snippet
    apiVersion: v1
    baseDomain: example.com
    # ...
    platform:
      azure:
        resourceGroupName: <azure_infra_name> (1)
    # ...
    1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command.
  3. If you have not previously created installation manifest files, do so by running the following command:

    $ openshift-install create manifests --dir <installation_directory>

    where <installation_directory> is the directory in which the installation program creates files.

  4. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:

    $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
  5. Copy the tls directory that contains the private key to the installation directory:

    $ cp -a /<path_to_ccoctl_output_dir>/tls .

Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites
  • You have configured an account with the cloud platform that hosts your cluster.

  • You have the OpenShift Container Platform installation program and the pull secret for your cluster.

  • You have an Azure subscription ID and tenant ID.

Procedure
  • Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ (1)
        --log-level=info (2)
    
    1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2 To view different installation details, specify warn, debug, or error instead of info.
Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.

  • Credential information also outputs to <installation_directory>/.openshift_install.log.

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates; example approval commands are shown after this list. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
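If you need to approve pending node-bootstrapper CSRs after such a recovery, you can list and approve them with the OpenShift CLI. The CSR name below is a placeholder:

$ oc get csr
$ oc adm certificate approve <csr_name>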

Additional resources
  • See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

Next steps