Deploying hosted control planes on IBM Power

You can deploy hosted control planes by configuring a cluster to function as a hosting cluster. This configuration provides an efficient and scalable solution for managing many clusters. The hosting cluster is an OKD cluster that hosts control planes. The hosting cluster is also known as the management cluster.

The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages.

The multicluster engine Operator supports only the default local-cluster, which is a managed hub cluster, and the hub cluster as the hosting cluster.

To provision hosted control planes on bare-metal infrastructure, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add compute nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".

You must start each IBM Power host with a Discovery image that the central infrastructure management provides. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.
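
For example, after a host boots from the Discovery image and registers, you can list the resulting Agent resources. The namespace in this command is a placeholder for your hosted control plane namespace:

$ oc get agent -n <hosted_control_plane_namespace>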

When you create a hosted cluster with the Agent platform, HyperShift installs the Agent Cluster API provider in the hosted control plane namespace.

Prerequisites to configure hosted control planes on IBM Power

  • You need multicluster engine for Kubernetes Operator version 2.7 or later installed on an OKD cluster. The multicluster engine Operator is automatically installed when you install Red Hat Advanced Cluster Management (RHACM). You can also install the multicluster engine Operator without RHACM as an Operator from the OKD OperatorHub.

  • The multicluster engine Operator must have at least one managed OKD cluster. The local-cluster managed hub cluster is automatically imported in the multicluster engine Operator version 2.7 and later. For more information about local-cluster, see Advanced configuration in the RHACM documentation. You can check the status of your hub cluster by running the following command:

    $ oc get managedclusters local-cluster
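    Output similar to the following example indicates that the hub cluster is available; the age value is illustrative:
    Example output
    NAME            HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
    local-cluster   true                                  True     True        77h
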
  • You need a hosting cluster with at least 3 compute nodes to run the HyperShift Operator.

  • You need to enable the central infrastructure management service. For more information, see "Enabling the central infrastructure management service".

  • You need to install the hosted control planes command-line interface. For more information, see "Installing the hosted control plane command-line interface".

The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable the feature, see "Manually enabling the hosted control planes feature". If you need to disable the feature, see "Disabling the hosted control planes feature".

IBM Power infrastructure requirements

The Agent platform does not create any infrastructure, but it requires the following infrastructure resources:

  • Agents: An Agent represents a host that boots with a Discovery image and that you can provision as an OKD node.

  • DNS: The API and Ingress endpoints must be routable.

DNS configuration for hosted control planes on IBM Power

Clients outside the network must be able to access the API server for the hosted cluster. A DNS entry for api.<hosted_cluster_name>.<basedomain> must exist and point to the destination where the API server is reachable.

The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that runs the hosted control plane.

The entry can also point to a deployed load balancer to redirect incoming traffic to the ingress pods.

See the following example of a DNS configuration:

$ cat /var/named/<example.krnl.es.zone>
Example output
$TTL 900
@ IN  SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. (
      2019062002
      1D 1H 1W 3H )
  IN NS bastion.example.krnl.es.com.
;
;
api                   IN A 1xx.2x.2xx.1xx (1)
api-int               IN A 1xx.2x.2xx.1xx
;
;
*.apps.<hosted_cluster_name>.<basedomain>           IN A 1xx.2x.2xx.1xx
;
;EOF
1 The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.

For IBM Power, add DNS entries that correspond to the IP addresses of the agents.

Example configuration
compute-0              IN A 1xx.2x.2xx.1yy
compute-1              IN A 1xx.2x.2xx.1yy
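
You can optionally verify that the records resolve from a client on the network. This is a minimal check that assumes the dig utility is available and that your DNS server serves the zone shown in the previous example. Each command returns the IP address that you configured for the record; the test host name under the wildcard apps record is arbitrary:

$ dig +short api.<hosted_cluster_name>.<basedomain>
$ dig +short test.apps.<hosted_cluster_name>.<basedomain>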

Creating a hosted cluster by using the CLI

On bare-metal infrastructure, you can create or import a hosted cluster. After you enable the Assisted Installer as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace. The Agent Cluster API provider connects a management cluster that hosts the control plane and a hosted cluster that consists of only the compute nodes.

Prerequisites
  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as any existing managed cluster. Otherwise, the multicluster engine Operator cannot manage the hosted cluster.

  • Do not use the word clusters as a hosted cluster name.

  • You cannot create a hosted cluster in the namespace of a multicluster engine Operator managed cluster.

  • For best security and management practices, create a hosted cluster separate from other hosted clusters.

  • Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).

  • By default, the hcp create cluster agent command creates a hosted cluster with configured node ports. The preferred publishing strategy for hosted clusters on bare metal is to expose services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, and you want to set a publishing strategy for a service other than the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource.

  • Ensure that you meet the requirements described in "Requirements for hosted control planes on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:

    $ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
    $ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
    $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
  • Ensure that you have added bare-metal nodes to a hardware inventory.

Procedure
  1. Create a namespace by entering the following command:

    $ oc create ns <hosted_cluster_namespace>

    Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. Typically, the HyperShift Operator creates this namespace. However, during the hosted cluster creation process on bare-metal infrastructure, a generated Cluster API provider role requires that the namespace already exists.
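
    You can optionally confirm that the namespace exists before you continue:

    $ oc get namespace <hosted_cluster_namespace>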

  2. Create the configuration file for your hosted cluster by entering the following command:

    $ hcp create cluster agent \
      --name=<hosted_cluster_name> \(1)
      --pull-secret=<path_to_pull_secret> \(2)
      --agent-namespace=<hosted_control_plane_namespace> \(3)
      --base-domain=<base_domain> \(4)
      --api-server-address=api.<hosted_cluster_name>.<base_domain> \(5)
      --etcd-storage-class=<etcd_storage_class> \(6)
      --ssh-key=<path_to_ssh_key> \(7)
      --namespace=<hosted_cluster_namespace> \(8)
      --control-plane-availability-policy=HighlyAvailable \(9)
      --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>-multi \(10)
      --node-pool-replicas=<node_pool_replica_count> \(11)
      --render \
      --render-sensitive \
      --ssh-key <home_directory>/<path_to_ssh_key>/<ssh_key> > hosted-cluster-config.yaml (12)
    1 Specify the name of your hosted cluster, such as example.
    2 Specify the path to your pull secret, such as /user/name/pullsecret.
    3 Specify your hosted control plane namespace, such as clusters-example. Ensure that agents are available in this namespace by using the oc get agent -n <hosted_control_plane_namespace> command.
    4 Specify your base domain, such as krnl.es.
    5 The --api-server-address flag defines the IP address that gets used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Specify the etcd storage class name, such as lvm-storageclass.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
    10 Specify the supported OKD version that you want to use, such as 4.19.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OKD release image digest, see Extracting the OKD release image digest.
    11 Specify the node pool replica count, such as 3. You must specify a replica count of 0 or greater to create that number of replicas. Otherwise, no node pools are created.
    12 After the --ssh-key flag, specify the path to the SSH key, such as user/.ssh/id_rsa.
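
    The following invocation is an illustrative sketch that substitutes the example values from the callouts, such as the example cluster name, the krnl.es base domain, and the clusters-example hosted control plane namespace. The clusters namespace is a hypothetical hosted cluster namespace. Adjust every value for your environment:

    $ hcp create cluster agent \
      --name=example \
      --pull-secret=/user/name/pullsecret \
      --agent-namespace=clusters-example \
      --base-domain=krnl.es \
      --api-server-address=api.example.krnl.es \
      --etcd-storage-class=lvm-storageclass \
      --ssh-key=$HOME/.ssh/id_rsa.pub \
      --namespace=clusters \
      --control-plane-availability-policy=HighlyAvailable \
      --release-image=quay.io/openshift-release-dev/ocp-release:4.19.0-multi \
      --node-pool-replicas=3 \
      --render \
      --render-sensitive > hosted-cluster-config.yaml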
  3. Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.

    • If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".

    • For production environments, use the LoadBalancer strategy because this strategy provides certificate handling and automatic DNS resolution. The following example demonstrates setting the service publishing strategy to LoadBalancer in your hosted cluster configuration file:

      # ...
      spec:
        services:
        - service: APIServer
          servicePublishingStrategy:
            type: LoadBalancer (1)
        - service: Ignition
          servicePublishingStrategy:
            type: Route
        - service: Konnectivity
          servicePublishingStrategy:
            type: Route
        - service: OAuthServer
          servicePublishingStrategy:
            type: Route
        - service: OIDC
          servicePublishingStrategy:
            type: Route
        sshKey:
          name: <ssh_key>
      # ...
      1 Specify LoadBalancer as the API Server type. For all other services, specify Route as the type.
  4. Apply the changes to the hosted cluster configuration file by entering the following command:

    $ oc apply -f hosted-cluster-config.yaml
  5. Check for the creation of the hosted cluster, node pools, and pods by entering the following commands:

    $ oc get hostedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get nodepool <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get pods -n <hosted_cluster_namespace>
  6. Confirm that the hosted cluster is ready. The hosted cluster is ready when its status shows Available: True and the node pool status shows AllMachinesReady: True. These statuses indicate that all cluster Operators are healthy.
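
    To check these statuses at a glance, you can view the summary columns for the hosted cluster and node pool resources. The exact columns can vary by HyperShift version, so treat the following commands as a sketch:

    $ oc get hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace>
    $ oc get nodepool -n <hosted_cluster_namespace>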

  7. Install MetalLB in the hosted cluster:

    1. Extract the kubeconfig file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:

      $ oc get secret \
        <hosted_cluster_name>-admin-kubeconfig \
        -n <hosted_cluster_namespace> \
        -o jsonpath='{.data.kubeconfig}' \
        | base64 -d > \
        kubeconfig-<hosted_cluster_name>.yaml
      $ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_name>.yaml"
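
      To confirm that the kubeconfig provides access to the hosted cluster, you can list its nodes; the agents that you provisioned appear as compute nodes:

      $ oc get nodes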
    2. Install the MetalLB Operator by creating the install-metallb-operator.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: metallb-system
      ---
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: metallb-operator
        namespace: metallb-system
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator
        namespace: metallb-system
      spec:
        channel: "stable"
        name: metallb-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic
      # ...
    3. Apply the file by entering the following command:

      $ oc apply -f install-metallb-operator.yaml
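
      Optionally, wait for the Operator installation to finish by checking the ClusterServiceVersion; the installation is complete when the PHASE column shows Succeeded:

      $ oc get csv -n metallb-system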
    4. Configure the MetalLB IP address pool by creating the deploy-metallb-ipaddresspool.yaml file:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: metallb
        namespace: metallb-system
      spec:
        autoAssign: true
        addresses:
        - 10.11.176.71-10.11.176.75
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement
        namespace: metallb-system
      spec:
        ipAddressPools:
        - metallb
      # ...
    5. Apply the configuration by entering the following command:

      $ oc apply -f deploy-metallb-ipaddresspool.yaml
    6. Verify the installation of MetalLB by checking the Operator status, the IP address pool, and the L2Advertisement resource by entering the following commands:

      $ oc get pods -n metallb-system
      $ oc get ipaddresspool -n metallb-system
      $ oc get l2advertisement -n metallb-system
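
      For the IP address pool, output similar to the following example indicates that the resource was created with the example address range; the values are illustrative:

      Example output
      NAME      AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
      metallb   true          false             ["10.11.176.71-10.11.176.75"]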
  8. Configure the load balancer for ingress:

    1. Create the ingress-loadbalancer.yaml file:

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          metallb.universe.tf/address-pool: metallb
        name: metallb-ingress
        namespace: openshift-ingress
      spec:
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
          - name: https
            protocol: TCP
            port: 443
            targetPort: 443
        selector:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        type: LoadBalancer
      # ...
    2. Apply the configuration by entering the following command:

      $ oc apply -f ingress-loadbalancer.yaml
    3. Verify that the load balancer service works as expected by entering the following command:

      $ oc get svc metallb-ingress -n openshift-ingress
      Example output
      NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
      metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
  9. Configure the DNS to work with the load balancer:

    1. Configure the DNS for the apps domain by pointing the *.apps.<hosted_cluster_name>.<base_domain> wildcard DNS record to the load balancer IP address.

    2. Verify the DNS resolution by entering the following command:

      $ nslookup console-openshift-console.apps.<hosted_cluster_name>.<base_domain> <load_balancer_ip_address>
      Example output
      Server:         10.11.176.1
      Address:        10.11.176.1#53
      
      Name:   console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
      Address: 10.11.176.71
Verification
  1. Check the cluster Operators by entering the following command:

    $ oc get clusteroperators

    Ensure that all Operators show AVAILABLE: True, PROGRESSING: False, and DEGRADED: False.

  2. Check the nodes by entering the following command:

    $ oc get nodes
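
    Output similar to the following example appears; the node names, ages, and versions are illustrative:

    Example output
    NAME        STATUS   ROLES    AGE   VERSION
    compute-0   Ready    worker   5h    v1.30.7
    compute-1   Ready    worker   5h    v1.30.7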

    Ensure that each node has the READY status.

  3. Test access to the console by entering the following URL in a web browser:

    https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
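
    Alternatively, you can check the console endpoint from the command line. This is a minimal sketch that assumes the wildcard DNS record resolves to the MetalLB load balancer IP address; an HTTP status code in the response indicates that DNS and ingress are working:

    $ curl -kI https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>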