Deploying hosted control planes on bare metal

You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OKD cluster where the control planes are hosted. In some contexts, the management cluster is also known as the hosting cluster.

The management cluster is not the same thing as the managed cluster. A managed cluster is a cluster that the hub cluster manages.

The hosted control planes feature is enabled by default.

The multicluster engine Operator supports only the default local-cluster, which is a hub cluster that is managed, and the hub cluster as the management cluster. If you have Red Hat Advanced Cluster Management installed, you can use the managed hub cluster, also known as the local-cluster, as the management cluster.

A hosted cluster is an OKD cluster with its API endpoint and control plane that are hosted on the management cluster. The hosted cluster includes the control plane and its corresponding data plane. You can use the multicluster engine Operator console or the hosted control plane command-line interface (hcp) to create a hosted cluster.

The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".

Preparing to deploy hosted control planes on bare metal

As you prepare to deploy hosted control planes on bare metal, consider the following information:

  • Run the management cluster and workers on the same platform for hosted control planes.

  • All bare metal hosts require a manual start with a Discovery Image ISO that the central infrastructure management provides. You can start the hosts manually or through automation by using Cluster-Baremetal-Operator. After each host starts, it runs an Agent process to discover the host details and complete the installation. An Agent custom resource represents each host.

  • When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using logical volume manager storage".
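
    The following is a minimal sketch of an LVMCluster resource that provides a local storage class for hosted etcd pods, assuming that the LVM Storage Operator is installed in the openshift-storage namespace. The device class name is illustrative; the resulting storage class is typically named lvms-<device_class_name>.

    Example LVMCluster configuration
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: lvmcluster
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
        - name: etcd-class # illustrative device class name
          default: true
          thinPoolConfig:
            name: thin-pool-1
            sizePercent: 90
            overprovisionRatio: 10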

Prerequisites to configure a management cluster

  • You need the multicluster engine for Kubernetes Operator 2.2 and later installed on an OKD cluster. You can install multicluster engine Operator as an Operator from the OKD OperatorHub.

  • The multicluster engine Operator must have at least one managed OKD cluster. The local-cluster is automatically imported in multicluster engine Operator 2.2 and later. For more information about the local-cluster, see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:

    $ oc get managedclusters local-cluster
  • You must add the topology.kubernetes.io/zone label to your bare-metal hosts on your management cluster. Ensure that each host has a unique value for topology.kubernetes.io/zone. Otherwise, all of the hosted control plane pods are scheduled on a single node, causing a single point of failure.

  • To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see Enabling the central infrastructure management service.

  • You need to install the hosted control plane command-line interface.

Bare metal firewall, port, and service requirements

You must meet the firewall, port, and service requirements so that the management cluster, the control plane, and hosted clusters can communicate with each other.

Services run on their default ports. However, if you use the NodePort publishing strategy, services run on the port that is assigned by the NodePort service.
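
A NodePort publishing strategy in the HostedCluster specification looks similar to the following sketch. This is a minimal illustration: the address is the IP address or hostname that clients use to reach the node port, and the port field is optional because a port is assigned automatically when you omit it.

Example NodePort publishing strategy
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: NodePort
      nodePort:
        address: <node_ip_address>
        port: 30443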

Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary. For production deployments, use a load balancer to simplify access through a single IP address.
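
For example, if your hosts use firewalld, rules similar to the following sketch allow only the required ports: 6443 for the API server, 443 for the OAuthServer, Konnectivity, and Ignition routes, and the default NodePort range if you use the NodePort publishing strategy. The zone is illustrative; adapt the zone, ports, and source ranges to your security requirements.

$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
$ sudo firewall-cmd --reload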

If your hub cluster has a proxy configuration, ensure that it can reach the hosted cluster API endpoint by adding all hosted cluster API endpoints to the noProxy field on the Proxy object. For more information, see "Configuring the cluster-wide proxy".
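
For example, the noProxy field on the cluster-wide Proxy object of the hub cluster might include the hosted cluster API endpoint as in the following sketch. The proxy URLs are illustrative, and the noProxy value is a comma-separated list, so add one entry for each hosted cluster API endpoint.

Example Proxy configuration
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<proxy_server>:<port>
  httpsProxy: http://<proxy_server>:<port>
  noProxy: api.<hosted_cluster_name>.<base_domain>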

A hosted control plane exposes the following services on bare metal:

  • APIServer

    • The APIServer service runs on port 6443 by default and requires ingress access for communication between the control plane components.

    • If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.

  • OAuthServer

    • The OAuthServer service runs on port 443 by default when you use the route and ingress to expose the service.

    • If you use the NodePort publishing strategy, use a firewall rule for the OAuthServer service.

  • Konnectivity

    • The Konnectivity service runs on port 443 by default when you use the route and ingress to expose the service.

    • The Konnectivity agent establishes a reverse tunnel to allow the control plane to access the network for the hosted cluster. The agent uses egress to connect to the Konnectivity server. The server is exposed by using either a route on port 443 or a manually assigned NodePort.

    • If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.

    • If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.

  • Ignition

    • The Ignition service runs on port 443 by default when you use the route and ingress to expose the service.

    • If you use the NodePort publishing strategy, use a firewall rule for the Ignition service.

You do not need the following services on bare metal:

  • OVNSbDb

  • OIDC


Bare metal infrastructure requirements

The Agent platform does not create any infrastructure, but it does have the following requirements for infrastructure:

  • Agents: An Agent represents a host that is booted with a discovery image and is ready to be provisioned as an OKD node.

  • DNS: The API and ingress endpoints must be routable.

DNS configurations on bare metal

The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to the destination where the API Server can be reached.

The DNS entry can be as simple as a record that points to one of the nodes in the management cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.

Example DNS configuration
api.example.krnl.es.    IN A 192.168.122.20
api.example.krnl.es.    IN A 192.168.122.21
api.example.krnl.es.    IN A 192.168.122.22
api-int.example.krnl.es.    IN A 192.168.122.20
api-int.example.krnl.es.    IN A 192.168.122.21
api-int.example.krnl.es.    IN A 192.168.122.22
*.apps.example.krnl.es. IN A 192.168.122.23

In the previous example, the *.apps.example.krnl.es record points to either a node in the hosted cluster or a load balancer, if one has been configured.

If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example.

Example DNS configuration for an IPv6 network
api.example.krnl.es.    IN AAAA 2620:52:0:1306::5
api.example.krnl.es.    IN AAAA 2620:52:0:1306::6
api.example.krnl.es.    IN AAAA 2620:52:0:1306::7
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::5
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::6
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::7
*.apps.example.krnl.es. IN AAAA 2620:52:0:1306::10

If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6.

Example DNS configuration for a dual stack network
host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10
host-record=api.hub-dual.dns.base.domain.name,192.168.126.10
address=/apps.hub-dual.dns.base.domain.name/192.168.126.11
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26

host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2
host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2
address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5]
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6]
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7]
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8]
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]

Creating an InfraEnv resource

Before you can create a hosted cluster on bare metal, you need an InfraEnv resource.

Creating an InfraEnv resource and adding nodes

On hosted control planes, the control-plane components run as pods on the management cluster while the data plane runs on dedicated nodes. You can use the Assisted Service to boot your hardware with a discovery ISO that adds your hardware to a hardware inventory. Later, when you create a hosted cluster, the hardware from the inventory is used to provision the data-plane nodes. The object that is used to get the discovery ISO is an InfraEnv resource. You need to create a BareMetalHost object that configures the cluster to boot the bare-metal node from the discovery ISO.

Procedure
  1. Create a namespace to store your hardware inventory by entering the following command:

    $ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig create \
      namespace <namespace_example>

    where:

    <directory_example>

    Is the name of the directory where the kubeconfig file for the management cluster is saved.

    <namespace_example>

    Is the name of the namespace that you are creating; for example, hardware-inventory.

    Example output
    namespace/hardware-inventory created
  2. Copy the pull secret of the management cluster by entering the following command:

    $ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig \
      -n openshift-config get secret pull-secret -o yaml \
      | grep -vE "uid|resourceVersion|creationTimestamp|namespace" \
      | sed "s/openshift-config/<namespace_example>/g" \
      | oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig \
      -n <namespace_example> apply -f -

    where:

    <directory_example>

    Is the name of the directory where the kubeconfig file for the management cluster is saved.

    <namespace_example>

    Is the name of the namespace that you are creating; for example, hardware-inventory.

    Example output
    secret/pull-secret created
  3. Create the InfraEnv resource by adding the following content to a YAML file:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: hosted
      namespace: <namespace_example>
    spec:
      additionalNTPSources:
      - <ip_address>
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: <ssh_public_key>
    # ...
  4. Apply the changes to the YAML file by entering the following command:

    $ oc apply -f <infraenv_config>.yaml

    Replace <infraenv_config> with the name of your file.

  5. Verify that the InfraEnv resource was created by entering the following command:

    $ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig \
      -n <namespace_example> get infraenv hosted
  6. Add bare-metal hosts by following one of two methods:

    • If you do not use the Metal3 Operator, obtain the discovery ISO from the InfraEnv resource and boot the hosts manually by completing the following steps:

      1. Obtain the discovery ISO URL by entering the following commands, and then download the live ISO from the URL that is returned:

        $ oc get infraenv -A
        $ oc get infraenv hosted -n <namespace_example> -o jsonpath='{.status.isoDownloadURL}'
      2. Boot the ISO. The node communicates with the Assisted Service and registers as an agent in the same namespace as the InfraEnv resource.

      3. For each agent, set the installation disk ID and hostname, and approve it to indicate that the agent is ready for use. Enter the following commands:

        $ oc -n <namespace_example> get agents
        Example output
        NAME                                   CLUSTER   APPROVED   ROLE          STAGE
        86f7ac75-4fc4-4b36-8130-40fa12602218                        auto-assign
        e57a637f-745b-496e-971d-1abbf03341ba                        auto-assign
        $ oc -n <namespace_example> \
          patch agent 86f7ac75-4fc4-4b36-8130-40fa12602218 \
          -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-0.example.krnl.es"}}' \
          --type merge
        $ oc -n <namespace_example> \
          patch agent e57a637f-745b-496e-971d-1abbf03341ba \
          -p '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-1.example.krnl.es"}}' \
          --type merge
        $ oc -n <namespace_example> get agents
        Example output
        NAME                                   CLUSTER   APPROVED   ROLE          STAGE
        86f7ac75-4fc4-4b36-8130-40fa12602218             true       auto-assign
        e57a637f-745b-496e-971d-1abbf03341ba             true       auto-assign
    • If you use the Metal3 Operator, you can automate the bare-metal host registration by creating the following objects:

      1. Create a YAML file and add the following content to it:

        apiVersion: v1
        kind: Secret
        metadata:
          name: hosted-worker0-bmc-secret
          namespace: <namespace_example>
        data:
          password: <password>
          username: <username>
        type: Opaque
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: hosted-worker1-bmc-secret
          namespace: <namespace_example>
        data:
          password: <password>
          username: <username>
        type: Opaque
        ---
        apiVersion: v1
        kind: Secret
        metadata:
          name: hosted-worker2-bmc-secret
          namespace: <namespace_example>
        data:
          password: <password>
          username: <username>
        type: Opaque
        ---
        apiVersion: metal3.io/v1alpha1
        kind: BareMetalHost
        metadata:
          name: hosted-worker0
          namespace: <namespace_example>
          labels:
            infraenvs.agent-install.openshift.io: hosted
          annotations:
            inspect.metal3.io: disabled
            bmac.agent-install.openshift.io/hostname: hosted-worker0
        spec:
          automatedCleaningMode: disabled
          bmc:
            disableCertificateVerification: True
            address: <bmc_address>
            credentialsName: hosted-worker0-bmc-secret
          bootMACAddress: aa:aa:aa:aa:02:01
          online: true
        ---
        apiVersion: metal3.io/v1alpha1
        kind: BareMetalHost
        metadata:
          name: hosted-worker1
          namespace: <namespace_example>
          labels:
            infraenvs.agent-install.openshift.io: hosted
          annotations:
            inspect.metal3.io: disabled
            bmac.agent-install.openshift.io/hostname: hosted-worker1
        spec:
          automatedCleaningMode: disabled
          bmc:
            disableCertificateVerification: True
            address: <bmc_address>
            credentialsName: hosted-worker1-bmc-secret
          bootMACAddress: aa:aa:aa:aa:02:02
          online: true
        ---
        apiVersion: metal3.io/v1alpha1
        kind: BareMetalHost
        metadata:
          name: hosted-worker2
          namespace: <namespace_example>
          labels:
            infraenvs.agent-install.openshift.io: hosted
          annotations:
            inspect.metal3.io: disabled
            bmac.agent-install.openshift.io/hostname: hosted-worker2
        spec:
          automatedCleaningMode: disabled
          bmc:
            disableCertificateVerification: True
            address: <bmc_address>
            credentialsName: hosted-worker2-bmc-secret
          bootMACAddress: aa:aa:aa:aa:02:03
          online: true
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: capi-provider-role
          namespace: <namespace_example>
        rules:
        - apiGroups:
          - agent-install.openshift.io
          resources:
          - agents
          verbs:
          - '*'

        where:

        <namespace_example>

        Is the name of your namespace.

        <password>

        Is the base64-encoded password for the BMC user.

        <username>

        Is the base64-encoded user name for the BMC user.

        <bmc_address>

        Is the BMC address for the BareMetalHost object.

        When you apply this YAML file, the following objects are created:

        • Secrets with credentials for the Baseboard Management Controller (BMCs)

        • The BareMetalHost objects

        • A role for the HyperShift Operator to be able to manage the agents

        Notice that the BareMetalHost objects reference the InfraEnv resource through the infraenvs.agent-install.openshift.io: hosted label. This label ensures that the nodes boot with the ISO that is generated from the InfraEnv resource.

      2. Apply the changes to the YAML file by entering the following command:

        $ oc apply -f <bare_metal_host_config>.yaml

        Replace <bare_metal_host_config> with the name of your file.

  7. Enter the following command, and then wait a few minutes for the BareMetalHost objects to move to the Provisioning state:

    $ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig -n <namespace_example> get bmh
    Example output
    NAME             STATE          CONSUMER   ONLINE   ERROR   AGE
    hosted-worker0   provisioning              true             106s
    hosted-worker1   provisioning              true             106s
    hosted-worker2   provisioning              true             106s
  8. Enter the following command to verify that nodes are booting and showing up as agents. This process can take a few minutes, and you might need to enter the command more than once.

    $ oc --kubeconfig ~/<directory_example>/mgmt-kubeconfig -n <namespace_example> get agent
    Example output
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0201             true       auto-assign
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0202             true       auto-assign
    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0203             true       auto-assign

Creating an InfraEnv resource by using the console

To create an InfraEnv resource by using the console, complete the following steps.

Procedure
  1. Open the OKD web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console".

  2. In the console header, ensure that All Clusters is selected.

  3. Click Infrastructure → Host inventory → Create infrastructure environment.

  4. After you create the InfraEnv resource, add bare-metal hosts from within the InfraEnv view by clicking Add hosts and selecting from the available options.


Creating a hosted cluster on bare metal

You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
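
After the hosted control plane is created, you can confirm that the Agent Cluster API provider is running by listing the deployments in the hosted control plane namespace. The following sketch assumes the default naming convention, in which the hosted control plane namespace is named <hosted_cluster_namespace>-<hosted_cluster_name>:

$ oc get deployments -n <hosted_cluster_namespace>-<hosted_cluster_name> | grep -i capi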

Creating a hosted cluster by using the CLI

To create a hosted cluster by using the command-line interface (CLI), complete the following steps.

Prerequisites
  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, the multicluster engine Operator cannot manage the hosted cluster.

  • Do not use clusters as a hosted cluster name.

  • A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.

  • Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs). For one way to check, see the example after this list.

  • By default when you use the hcp create cluster agent command, the hosted cluster is created with node ports. However, the preferred publishing strategy for hosted clusters on bare metal is to expose services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource. For more information, see step 4 in this procedure.

  • Ensure that you meet the requirements described in "Preparing to deploy hosted control planes on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:

    $ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
    $ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
    $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
  • Ensure that you have added bare-metal nodes to a hardware inventory.
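
To check for a default storage class, as mentioned in the earlier prerequisite, you can list the storage classes and, if necessary, mark one as the default. The storage class name in the following sketch is illustrative:

    $ oc get storageclass
    $ oc patch storageclass <storage_class_name> \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'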

Procedure
  1. Create a namespace by entering the following command:

    $ oc create ns <hosted_cluster_namespace>

    Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. Typically, the namespace is created by the HyperShift Operator, but during the hosted cluster creation process on bare metal, a Cluster API provider role is generated that needs the namespace to already exist.

  2. Create the configuration file for your hosted cluster by entering the following command:

    $ hcp create cluster agent \
      --name=<hosted_cluster_name> \(1)
      --pull-secret=<path_to_pull_secret> \(2)
      --agent-namespace=<hosted_control_plane_namespace> \(3)
      --base-domain=<base_domain> \(4)
      --api-server-address=api.<hosted_cluster_name>.<base_domain> \(5)
      --etcd-storage-class=<etcd_storage_class> \(6)
      --ssh-key=<path_to_ssh_key> \(7)
      --namespace=<hosted_cluster_namespace> \(8)
      --control-plane-availability-policy=HighlyAvailable \(9)
      --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>-multi \(10)
      --node-pool-replicas=<node_pool_replica_count> \(11)
      --render \
      --render-sensitive > hosted-cluster-config.yaml (12)
    1 Specify the name of your hosted cluster.
    2 Specify the path to your pull secret, for example, /user/name/pullsecret.
    3 Specify your hosted control plane namespace. To ensure that agents are available in this namespace, enter the oc get agent -n <hosted_control_plane_namespace> command.
    4 Specify your base domain, for example, krnl.es.
    5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Specify the etcd storage class name, for example, lvm-storageclass.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
    10 Specify the supported OKD version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OKD release image digest, see "Extracting the release image digest".
    11 Specify the node pool replica count, for example, 3. You must specify the replica count as 0 or greater to create a node pool with that number of replicas; otherwise, no node pools are created.
    12 The --render and --render-sensitive flags output the generated resources, including sensitive data such as secrets, so that you can redirect them to the hosted-cluster-config.yaml file instead of applying them directly to the cluster.
  3. Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.

    • If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".

    • For production environments, use the LoadBalancer strategy because it provides certificate handling and automatic DNS resolution. To change the service publishing strategy to LoadBalancer, edit the service publishing strategy details in your hosted cluster configuration file:

      ...
      spec:
        services:
        - service: APIServer
          servicePublishingStrategy:
            type: LoadBalancer (1)
        - service: Ignition
          servicePublishingStrategy:
            type: Route
        - service: Konnectivity
          servicePublishingStrategy:
            type: Route
        - service: OAuthServer
          servicePublishingStrategy:
            type: Route
        - service: OIDC
          servicePublishingStrategy:
            type: Route
        sshKey:
          name: <ssh_key>
      ...
      1 Specify LoadBalancer as the API Server type. For all other services, specify Route as the type.
  4. Apply the changes to the hosted cluster configuration file by entering the following command:

    $ oc apply -f hosted-cluster-config.yaml
  5. Monitor the creation of the hosted cluster, node pools, and pods by entering the following commands:

    $ oc get hostedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get nodepool <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get pods -n <hosted_control_plane_namespace>
  6. Confirm that the hosted cluster is ready. The cluster is ready when its status is Available: True, the node pool status shows AllMachinesReady: True, and all cluster Operators are healthy.

  7. Install MetalLB in the hosted cluster:

    1. Extract the kubeconfig file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:

      $ oc get secret \
        <hosted_cluster_name>-admin-kubeconfig \
        -n <hosted_cluster_namespace> \
        -o jsonpath='{.data.kubeconfig}' \
        | base64 -d > \
        kubeconfig-<hosted_cluster_name>.yaml
      $ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_name>.yaml"
    2. Install the MetalLB Operator by creating the install-metallb-operator.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: metallb-system
      ---
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: metallb-operator
        namespace: metallb-system
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator
        namespace: metallb-system
      spec:
        channel: "stable"
        name: metallb-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic
    3. Apply the file by entering the following command:

      $ oc apply -f install-metallb-operator.yaml
    4. Configure the MetalLB IP address pool by creating the deploy-metallb-ipaddresspool.yaml file:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: metallb
        namespace: metallb-system
      spec:
        autoAssign: true
        addresses:
        - 10.11.176.71-10.11.176.75
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement
        namespace: metallb-system
      spec:
        ipAddressPools:
        - metallb
    5. Apply the configuration by entering the following command:

      $ oc apply -f deploy-metallb-ipaddresspool.yaml
    6. Verify that MetalLB is installed by checking the Operator status, the IP address pool, and the L2Advertisement. Enter the following commands:

      $ oc get pods -n metallb-system
      $ oc get ipaddresspool -n metallb-system
      $ oc get l2advertisement -n metallb-system
  8. Configure the load balancer for ingress:

    1. Create the ingress-loadbalancer.yaml file:

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          metallb.universe.tf/address-pool: metallb
        name: metallb-ingress
        namespace: openshift-ingress
      spec:
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
          - name: https
            protocol: TCP
            port: 443
            targetPort: 443
        selector:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        type: LoadBalancer
    2. Apply the configuration by entering the following command:

      $ oc apply -f ingress-loadbalancer.yaml
    3. Verify that the load balancer service works as expected by entering the following command:

      $ oc get svc metallb-ingress -n openshift-ingress
      Example output
      NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
      metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
  9. Configure the DNS to work with the load balancer:

    1. Configure the DNS for the apps domain by pointing the *.apps.<hosted_cluster_name>.<base_domain> wildcard DNS record to the load balancer IP address.

    2. Verify the DNS resolution by entering the following command:

      $ nslookup console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
      Example output
      Server:         10.11.176.1
      Address:        10.11.176.1#53
      
      Name:   console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
      Address: 10.11.176.71
Verification
  1. Check the cluster Operators by entering the following command:

    $ oc get clusteroperators

    Ensure that all Operators show AVAILABLE: True, PROGRESSING: False, and DEGRADED: False.

  2. Check the nodes by entering the following command:

    $ oc get nodes

    Ensure that the status of all nodes is READY.

  3. Test access to the console by entering the following URL in a web browser:

    https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>

Creating a hosted cluster on bare metal by using the console

To create a hosted cluster by using the console, complete the following steps.

Procedure
  1. Open the OKD web console and log in by entering your administrator credentials. For instructions to open the console, see "Accessing the web console".

  2. In the console header, ensure that All Clusters is selected.

  3. Click Infrastructure → Clusters.

  4. Click Create cluster → Host inventory → Hosted control plane.

    The Create cluster page is displayed.

  5. On the Create cluster page, follow the prompts to enter details about the cluster, node pools, networking, and automation.

    As you enter details about the cluster, you might find the following tips useful:

    • If you want to use predefined values to automatically populate fields in the console, you can create a host inventory credential. For more information, see "Creating a credential for an on-premises environment".

    • On the Cluster details page, the pull secret is your OKD pull secret that you use to access OKD resources. If you selected a host inventory credential, the pull secret is automatically populated.

    • On the Node pools page, the namespace contains the hosts for the node pool. If you created a host inventory by using the console, the console creates a dedicated namespace.

    • On the Networking page, you select an API server publishing strategy. The API server for the hosted cluster can be exposed either by using an existing load balancer or as a service of the NodePort type. A DNS entry must exist for the api.<hosted_cluster_name>.<base_domain> setting that points to the destination where the API server can be reached. This entry can be a record that points to one of the nodes in the management cluster or a record that points to a load balancer that redirects incoming traffic to the Ingress pods.

  6. Review your entries and click Create.

    The Hosted cluster view is displayed.

  7. Monitor the deployment of the hosted cluster in the Hosted cluster view.

  8. If you do not see information about the hosted cluster, ensure that All Clusters is selected, then click the cluster name.

  9. Wait until the control plane components are ready. This process can take a few minutes.

  10. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.


Creating a hosted cluster on bare metal by using a mirror registry

You can use a mirror registry to create a hosted cluster on bare metal by specifying the --image-content-sources flag in the hcp create cluster command.

Procedure
  1. Create a YAML file to define Image Content Source Policies (ICSP). See the following example:

    - mirrors:
      - brew.registry.redhat.io
      source: registry.redhat.io
    - mirrors:
      - brew.registry.redhat.io
      source: registry.stage.redhat.io
    - mirrors:
      - brew.registry.redhat.io
      source: registry-proxy.engineering.redhat.com
  2. Save the file as icsp.yaml. This file contains your mirror registries.

  3. To create a hosted cluster by using your mirror registries, run the following command:

    $ hcp create cluster agent \
        --name=<hosted_cluster_name> \(1)
        --pull-secret=<path_to_pull_secret> \(2)
        --agent-namespace=<hosted_control_plane_namespace> \(3)
        --base-domain=<basedomain> \(4)
        --api-server-address=api.<hosted_cluster_name>.<basedomain> \(5)
        --image-content-sources icsp.yaml  \(6)
        --ssh-key  <path_to_ssh_key> \(7)
        --namespace <hosted_cluster_namespace> \(8)
        --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image> (9)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the path to your pull secret, for example, /user/name/pullsecret.
    3 Specify your hosted control plane namespace, for example, clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted-control-plane-namespace> command.
    4 Specify your base domain, for example, krnl.es.
    5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Specify the icsp.yaml file that defines ICSP and your mirror registries.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the supported OKD version that you want to use, for example, 4.17.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OKD release image digest, see "Extracting the OKD release image digest".

Verifying hosted cluster creation

After the deployment process is complete, you can verify that the hosted cluster was created successfully. Follow these steps a few minutes after you create the hosted cluster.

Procedure
  1. Obtain the kubeconfig for your new hosted cluster by entering the extract command:

    $ oc extract -n <hosted-control-plane-namespace> secret/admin-kubeconfig \
      --to=- > kubeconfig-<hosted-cluster-name>
  2. Use the kubeconfig to view the cluster Operators of the hosted cluster. Enter the following command:

    $ oc get co --kubeconfig=kubeconfig-<hosted-cluster-name>
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.10.26   True        False         False      2m38s
    dns                                        4.10.26   True        False         False      2m52s
    image-registry                             4.10.26   True        False         False      2m8s
    ingress                                    4.10.26   True        False         False      22m
  3. You can also view the running pods on your hosted cluster by entering the following command:

    $ oc get pods -A --kubeconfig=kubeconfig-<hosted-cluster-name>
    Example output
    NAMESPACE                                          NAME                                                      READY   STATUS             RESTARTS        AGE
    kube-system                                        konnectivity-agent-khlqv                                  0/1     Running            0               3m52s
    openshift-cluster-node-tuning-operator             tuned-dhw5p                                               1/1     Running            0               109s
    openshift-cluster-storage-operator                 cluster-storage-operator-5f784969f5-vwzgz                 1/1     Running            1 (113s ago)    20m
    openshift-cluster-storage-operator                 csi-snapshot-controller-6b7687b7d9-7nrfw                  1/1     Running            0               3m8s
    openshift-console                                  console-5cbf6c7969-6gk6z                                  1/1     Running            0               119s
    openshift-console                                  downloads-7bcd756565-6wj5j                                1/1     Running            0               4m3s
    openshift-dns-operator                             dns-operator-77d755cd8c-xjfbn                             2/2     Running            0               21m
    openshift-dns                                      dns-default-kfqnh                                         2/2     Running            0               113s

Configuring a custom API server certificate in a hosted cluster

To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer section of your HostedCluster configuration.

You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know the hostname for the Kubernetes API server that you plan to configure.

Prerequisites
  • You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:

    • tls.crt: The certificate

    • tls.key: The private key

  • If your HostedCluster configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with either Private or PublicAndPrivate configurations. In those cases, the SAN conflict is managed by the platform.

  • The certificate must be valid for the external API endpoint.

  • Ensure that the validity period of the certificate aligns with your cluster’s expected life cycle.

Procedure
  1. Create a secret with your custom certificate by entering the following command:

    $ oc create secret tls sample-hosted-kas-custom-cert \
      --cert=path/to/cert.crt \
      --key=path/to/key.key \
      -n <hosted_cluster_namespace>
  2. Update your HostedCluster configuration with the custom certificate details, as shown in the following example:

    spec:
      configuration:
        apiServer:
          servingCerts:
            namedCertificates:
            - names: (1)
              - api-custom-cert-sample-hosted.sample-hosted.example.com
              servingCertificate: (2)
                name: sample-hosted-kas-custom-cert
    1 The list of DNS names that the certificate is valid for.
    2 The name of the secret that contains the custom certificate.
  3. Apply the changes to your HostedCluster configuration by entering the following command:

    $ oc apply -f <hosted_cluster_config>.yaml
Verification
  • Check the API server pods to ensure that the new certificate is mounted.

  • Test the connection to the API server by using the custom domain name.

  • Verify the certificate details in your browser or by using tools such as openssl, as shown in the following example.
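
    For example, the following command prints the subject, issuer, and validity dates of the certificate that the API server serves. The hostname and port are illustrative; use your custom API server hostname and the port that matches your publishing strategy.

    $ openssl s_client -connect api-custom-cert-sample-hosted.sample-hosted.example.com:6443 \
      -servername api-custom-cert-sample-hosted.sample-hosted.example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates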