Deploying hosted control planes on IBM Z

You can deploy hosted control planes by configuring a cluster to function as a management cluster. The management cluster is the OKD cluster where the control planes are hosted. The management cluster is also known as the hosting cluster.

The management cluster is not the managed cluster. A managed cluster is a cluster that the hub cluster manages.

You can convert a managed cluster to a management cluster by using the hypershift add-on to deploy the HyperShift Operator on that cluster. Then, you can start to create the hosted cluster.

The multicluster engine Operator supports only the default local-cluster, which is the hub cluster that is managed as one of its own clusters, and the hub cluster as the management cluster.

To provision hosted control planes on bare metal, you can use the Agent platform. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For more information, see "Enabling the central infrastructure management service".

Each IBM Z system host must be started with the PXE images provided by the central infrastructure management. After each host starts, it runs an Agent process to discover the details of the host and completes the installation. An Agent custom resource represents each host.

When you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.
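
To confirm that the provider is running after you create a hosted cluster, a quick check similar to the following sketch can help; the capi-provider deployment name is an assumption based on common HyperShift deployments and might differ in your environment:

$ oc get deployments -n <hosted_control_plane_namespace> | grep capi-provider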

Prerequisites to configure hosted control planes on IBM Z

  • The multicluster engine for Kubernetes Operator version 2.5 or later must be installed on an OKD cluster. You can install multicluster engine Operator as an Operator from the OKD OperatorHub.

  • The multicluster engine Operator must have at least one managed OKD cluster. The local-cluster is automatically imported in multicluster engine Operator 2.5 and later. For more information about the local-cluster, see Advanced configuration in the Red Hat Advanced Cluster Management documentation. You can check the status of your hub cluster by running the following command:

    $ oc get managedclusters local-cluster
  • You need a hosting cluster with at least three worker nodes to run the HyperShift Operator.

  • You need to enable the central infrastructure management service. For more information, see Enabling the central infrastructure management service.

  • You need to install the hosted control plane command-line interface. For more information, see Installing the hosted control plane command-line interface.

IBM Z infrastructure requirements

The Agent platform does not create any infrastructure, but requires the following resources for infrastructure:

  • Agents: An Agent represents a host that is booted with a discovery image or PXE image and is ready to be provisioned as an OKD node.

  • DNS: The API and Ingress endpoints must be routable.

The hosted control planes feature is enabled by default. If you disabled the feature and want to manually enable it, or if you need to disable the feature, see Enabling or disabling the hosted control planes feature.

DNS configuration for hosted control planes on IBM Z

The API server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to the destination where the API server is reachable.

The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane.

The entry can also point to a load balancer deployed to redirect incoming traffic to the Ingress pods.

See the following example of a DNS configuration:

$ cat /var/named/<example.krnl.es.zone>
Example output
$TTL 900
@ IN  SOA bastion.example.krnl.es.com. hostmaster.example.krnl.es.com. (
      2019062002
      1D 1H 1W 3H )
  IN NS bastion.example.krnl.es.com.
;
;
api                   IN A 1xx.2x.2xx.1xx (1)
api-int               IN A 1xx.2x.2xx.1xx
;
;
*.apps        IN A 1xx.2x.2xx.1xx
;
;EOF
1 The record refers to the IP address of the API load balancer that handles ingress and egress traffic for hosted control planes.

For IBM z/VM, add IP addresses that correspond to the IP addresses of the agents.

compute-0              IN A 1xx.2x.2xx.1yy
compute-1              IN A 1xx.2x.2xx.1yy

Defining a custom DNS name

As a cluster administrator, you can create a hosted cluster with an external API DNS name that is different from the internal endpoint that is used for node bootstraps and control plane communication. You might want to define a different DNS name for the following reasons:

  • To replace the user-facing TLS certificate with one from a public CA without breaking the control plane functions that are bound to the internal root CA

  • To support split-horizon DNS and NAT scenarios

  • To ensure a similar experience to standalone control planes, where you can use functions, such as the "Show Login Command" function, with the correct kubeconfig and DNS configuration

You can define a DNS name either during your initial setup or during day-2 operations, by entering a domain name in the kubeAPIServerDNSName field of a HostedCluster object.

Prerequisites
  • You have a valid TLS certificate that covers the DNS name that you will set in the kubeAPIServerDNSName field.

  • Your DNS name is resolvable, reachable, and points to the correct address.

Procedure
  • In the specification for the HostedCluster object, add the kubeAPIServerDNSName field and the address for the domain and specify which certificate to use, as shown in the following example:

    #...
    spec:
      configuration:
        apiServer:
          servingCerts:
            namedCertificates:
            - names:
              - xxx.example.com
              - yyy.example.com
              servingCertificate:
                name: <my_serving_certificate>
      kubeAPIServerDNSName: <your_custom_address> (1)
    1 The value for the kubeAPIServerDNSName field must be a valid, addressable domain.
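
    The servingCertificate value references a TLS secret by name. If you still need to create that secret, a minimal sketch follows; it assumes that the certificate and key files are available locally and that the secret belongs in the same namespace as the HostedCluster object:

    $ oc create secret tls <my_serving_certificate> \
      --cert=<path_to_certificate> \
      --key=<path_to_key> \
      -n <hosted_cluster_namespace>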

After you define the kubeAPIServerDNSName field and specify the certificate, the Control Plane Operator controllers create a kubeconfig file named custom-admin-kubeconfig, which is stored in the HostedControlPlane namespace. The certificates are generated from the root CA, and the HostedControlPlane namespace manages their expiration and renewal.

The Control Plane Operator reports a new kubeconfig file named CustomKubeconfig in the HostedControlPlane namespace. That file uses the new server that is defined in the kubeAPIServerDNSName field.

The custom kubeconfig file is referenced in the HostedCluster object in the status field as CustomKubeconfig. The CustomKubeConfig field is optional, and can be added only if the kubeAPIServerDNSName field is not empty. When the CustomKubeConfig field is set, it triggers the generation of a secret named <hosted_cluster_name>-custom-admin-kubeconfig in the HostedCluster namespace. You can use the secret to access the HostedCluster API server. If you remove the CustomKubeConfig field during day-2 operations, all related secrets and status references are deleted.
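
For example, to extract the custom kubeconfig from that secret and use it to reach the hosted cluster API server, commands similar to the following sketch can work; the kubeconfig data key is an assumption that mirrors the standard admin kubeconfig secret:

$ oc get secret <hosted_cluster_name>-custom-admin-kubeconfig \
  -n <hosted_cluster_namespace> \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > custom-admin-kubeconfig
$ oc --kubeconfig custom-admin-kubeconfig get nodes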

This process does not directly affect the data plane, so no rollouts are expected to occur. The HostedControlPlane namespace receives the changes from the HyperShift Operator and deletes the corresponding fields.

If you remove the kubeAPIServerDNSName field from the specification for the HostedCluster object, all newly generated secrets and the CustomKubeconfig reference are removed from the cluster and from the status field.

Creating a hosted cluster on bare metal

You can create a hosted cluster or import one. When the Assisted Installer is enabled as an add-on to multicluster engine Operator and you create a hosted cluster with the Agent platform, the HyperShift Operator installs the Agent Cluster API provider in the hosted control plane namespace.

Creating a hosted cluster by using the CLI

To create a hosted cluster by using the command-line interface (CLI), complete the following steps.

Prerequisites
  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, the multicluster engine Operator cannot manage the hosted cluster.

  • Do not use clusters as a hosted cluster name.

  • A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.

  • Verify that you have a default storage class configured for your cluster. Otherwise, you might see pending persistent volume claims (PVCs).

  • By default when you use the hcp create cluster agent command, the hosted cluster is created with node ports. However, the preferred publishing strategy for hosted clusters on bare metal is to expose services through a load balancer. If you create a hosted cluster by using the web console or by using Red Hat Advanced Cluster Management, to set a publishing strategy for a service besides the Kubernetes API server, you must manually specify the servicePublishingStrategy information in the HostedCluster custom resource. For more information, see step 4 in this procedure.

  • Ensure that you meet the requirements described in "Preparing to deploy hosted control planes on bare metal", which includes requirements related to infrastructure, firewalls, ports, and services. For example, those requirements describe how to add the appropriate zone labels to the bare-metal hosts in your management cluster, as shown in the following example commands:

    $ oc label node [compute-node-1] topology.kubernetes.io/zone=zone1
    $ oc label node [compute-node-2] topology.kubernetes.io/zone=zone2
    $ oc label node [compute-node-3] topology.kubernetes.io/zone=zone3
  • Ensure that you have added bare-metal nodes to a hardware inventory.

Procedure
  1. Create a namespace by entering the following command:

    $ oc create ns <hosted_cluster_namespace>

    Replace <hosted_cluster_namespace> with an identifier for your hosted cluster namespace. Typically, the namespace is created by the HyperShift Operator, but during the hosted cluster creation process on bare metal, a Cluster API provider role is generated that needs the namespace to already exist.

  2. Create the configuration file for your hosted cluster by entering the following command:

    $ hcp create cluster agent \
      --name=<hosted_cluster_name> \(1)
      --pull-secret=<path_to_pull_secret> \(2)
      --agent-namespace=<hosted_control_plane_namespace> \(3)
      --base-domain=<base_domain> \(4)
      --api-server-address=api.<hosted_cluster_name>.<base_domain> \(5)
      --etcd-storage-class=<etcd_storage_class> \(6)
      --ssh-key=<path_to_ssh_key> \(7)
      --namespace=<hosted_cluster_namespace> \(8)
      --control-plane-availability-policy=HighlyAvailable \(9)
      --release-image=quay.io/openshift-release-dev/ocp-release:<ocp_release_image>-multi \(10)
      --node-pool-replicas=<node_pool_replica_count> \(11)
      --render \
      --render-sensitive \
      --ssh-key <home_directory>/<path_to_ssh_key>/<ssh_key> > hosted-cluster-config.yaml (12)
    1 Specify the name of your hosted cluster.
    2 Specify the path to your pull secret, for example, /user/name/pullsecret.
    3 Specify your hosted control plane namespace. To ensure that agents are available in this namespace, enter the oc get agent -n <hosted_control_plane_namespace> command.
    4 Specify your base domain, for example, krnl.es.
    5 The --api-server-address flag defines the IP address that is used for the Kubernetes API communication in the hosted cluster. If you do not set the --api-server-address flag, you must log in to connect to the management cluster.
    6 Specify the etcd storage class name, for example, lvm-storageclass.
    7 Specify the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
    8 Specify your hosted cluster namespace.
    9 Specify the availability policy for the hosted control plane components. Supported options are SingleReplica and HighlyAvailable. The default value is HighlyAvailable.
    10 Specify the supported OKD version that you want to use, for example, 4.19.0-multi. If you are using a disconnected environment, replace <ocp_release_image> with the digest image. To extract the OKD release image digest, see "Extracting the release image digest".
    11 Specify the node pool replica count, for example, 3. You must specify a replica count of 0 or greater; otherwise, no node pools are created.
    12 After the --ssh-key flag, specify the path to the SSH key; for example, user/.ssh/id_rsa.
  3. Configure the service publishing strategy. By default, hosted clusters use the NodePort service publishing strategy because node ports are always available without additional infrastructure. However, you can configure the service publishing strategy to use a load balancer.

    • If you are using the default NodePort strategy, configure the DNS to point to the hosted cluster compute nodes, not the management cluster nodes. For more information, see "DNS configurations on bare metal".

    • For production environments, use the LoadBalancer strategy because it provides certificate handling and automatic DNS resolution. To change the service publishing strategy to LoadBalancer, in your hosted cluster configuration file, edit the service publishing strategy details:

      ...
      spec:
        services:
        - service: APIServer
          servicePublishingStrategy:
            type: LoadBalancer (1)
        - service: Ignition
          servicePublishingStrategy:
            type: Route
        - service: Konnectivity
          servicePublishingStrategy:
            type: Route
        - service: OAuthServer
          servicePublishingStrategy:
            type: Route
        - service: OIDC
          servicePublishingStrategy:
            type: Route
        sshKey:
          name: <ssh_key>
      ...
      1 Specify LoadBalancer as the API Server type. For all other services, specify Route as the type.
  4. Apply the changes to the hosted cluster configuration file by entering the following command:

    $ oc apply -f hosted-cluster-config.yaml
  5. Monitor the creation of the hosted cluster, node pools, and pods by entering the following commands:

    $ oc get hostedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get nodepool <nodepool_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.status=="False")]}' | jq .
    $ oc get pods -n <hosted_cluster_namespace>
  6. Confirm that the hosted cluster is ready. The cluster is ready when its status is Available: True, the node pool status shows AllMachinesReady: True, and all cluster Operators are healthy.
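
    For example, a check of the Available condition on the hosted cluster can look similar to the following sketch; it assumes the Available condition type that is reported in the hosted cluster status:

    $ oc get hostedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'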

  7. Install MetalLB in the hosted cluster:

    1. Extract the kubeconfig file from the hosted cluster and set the environment variable for hosted cluster access by entering the following commands:

      $ oc get secret \
        <hosted_cluster_name>-admin-kubeconfig \
        -n <hosted_cluster_namespace> \
        -o jsonpath='{.data.kubeconfig}' \
        | base64 -d > \
        kubeconfig-<hosted_cluster_name>.yaml
      $ export KUBECONFIG="/path/to/kubeconfig-<hosted_cluster_name>.yaml"
    2. Install the MetalLB Operator by creating the install-metallb-operator.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: metallb-system
      ---
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: metallb-operator
        namespace: metallb-system
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator
        namespace: metallb-system
      spec:
        channel: "stable"
        name: metallb-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        installPlanApproval: Automatic
    3. Apply the file by entering the following command:

      $ oc apply -f install-metallb-operator.yaml
    4. Configure the MetalLB IP address pool by creating the deploy-metallb-ipaddresspool.yaml file:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: metallb
        namespace: metallb-system
      spec:
        autoAssign: true
        addresses:
        - 10.11.176.71-10.11.176.75
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: l2advertisement
        namespace: metallb-system
      spec:
        ipAddressPools:
        - metallb
    5. Apply the configuration by entering the following command:

      $ oc apply -f deploy-metallb-ipaddresspool.yaml
    6. Verify that MetalLB is installed by checking the Operator status, the IP address pool, and the L2Advertisement. Enter the following commands:

      $ oc get pods -n metallb-system
      $ oc get ipaddresspool -n metallb-system
      $ oc get l2advertisement -n metallb-system
  8. Configure the load balancer for ingress:

    1. Create the ingress-loadbalancer.yaml file:

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          metallb.universe.tf/address-pool: metallb
        name: metallb-ingress
        namespace: openshift-ingress
      spec:
        ports:
          - name: http
            protocol: TCP
            port: 80
            targetPort: 80
          - name: https
            protocol: TCP
            port: 443
            targetPort: 443
        selector:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
        type: LoadBalancer
    2. Apply the configuration by entering the following command:

      $ oc apply -f ingress-loadbalancer.yaml
    3. Verify that the load balancer service works as expected by entering the following command:

      $ oc get svc metallb-ingress -n openshift-ingress
      Example output
      NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE
      metallb-ingress   LoadBalancer   172.31.127.129   10.11.176.71   80:30961/TCP,443:32090/TCP   16h
  9. Configure the DNS to work with the load balancer:

    1. Configure the DNS for the apps domain by pointing the *.apps.<hosted_cluster_name>.<base_domain> wildcard DNS record to the load balancer IP address.

    2. Verify the DNS resolution by entering the following command:

      $ nslookup console-openshift-console.apps.<hosted_cluster_name>.<base_domain> <load_balancer_ip_address>
      Example output
      Server:         10.11.176.1
      Address:        10.11.176.1#53
      
      Name:   console-openshift-console.apps.my-hosted-cluster.sample-base-domain.com
      Address: 10.11.176.71
Verification
  1. Check the cluster Operators by entering the following command:

    $ oc get clusteroperators

    Ensure that all Operators show AVAILABLE: True, PROGRESSING: False, and DEGRADED: False.

  2. Check the nodes by entering the following command:

    $ oc get nodes

    Ensure that the status of all nodes is READY.

  3. Test access to the console by entering the following URL in a web browser:

    https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>
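
    If a browser is not available on your workstation, a quick reachability check from the command line can look similar to the following sketch; the -k option skips TLS verification, which is useful when the default ingress certificate is self-signed:

    $ curl -k -I https://console-openshift-console.apps.<hosted_cluster_name>.<base_domain>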

Creating an InfraEnv resource for hosted control planes on IBM Z

An InfraEnv is an environment where hosts that are booted with PXE images can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.

Procedure
  1. Create a YAML file to contain the configuration. See the following example:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: <hosted_cluster_name>
      namespace: <hosted_control_plane_namespace>
    spec:
      cpuArchitecture: s390x
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: <ssh_public_key>
  2. Save the file as infraenv-config.yaml.

  3. Apply the configuration by entering the following command:

    $ oc apply -f infraenv-config.yaml
  4. To fetch the URL for downloading the PXE images, such as initrd.img, kernel.img, or rootfs.img, which allow IBM Z machines to join as agents, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
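
    To narrow the output to only the boot artifact URLs, a filter similar to the following sketch can help; the status.bootArtifacts field path is an assumption based on current agent-install API versions:

    $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> \
      -o json | jq '.status.bootArtifacts'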

Adding IBM Z agents to the InfraEnv resource

To attach compute nodes to a hosted control plane, create agents that help you to scale the node pool. Adding agents in an IBM Z environment requires additional steps, which are described in detail in this section.

Unless stated otherwise, these procedures apply to both z/VM and RHEL KVM installations on IBM Z and IBM LinuxONE.

Adding IBM Z KVM as agents

For IBM Z with KVM, run the following command to start your IBM Z environment with the downloaded PXE images from the InfraEnv resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv resource on the management cluster.

Procedure
  1. Run the following command:

    virt-install \
       --name "<vm_name>" \ (1)
       --autostart \
       --ram=16384 \
       --cpu host \
       --vcpus=4 \
       --location "<path_to_kernel_initrd_image>,kernel=kernel.img,initrd=initrd.img" \ (2)
       --disk <qcow_image_path> \ (3)
       --network network:macvtap-net,mac=<mac_address> \ (4)
       --graphics none \
       --noautoconsole \
       --wait=-1 \
       --extra-args "rd.neednet=1 nameserver=<nameserver>   coreos.live.rootfs_url=http://<http_server>/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=console=tty1 console=ttyS1,115200n8" (5)
    
    1 Specify the name of the virtual machine.
    2 Specify the location of the kernel_initrd_image file.
    3 Specify the disk image path.
    4 Specify the MAC address.
    5 Specify the name server for the agents and the HTTP server that hosts the rootfs.img file.
  2. For ISO boot, download the ISO from the InfraEnv resource and boot the nodes by running the following command:

    virt-install \
      --name "<vm_name>" \ (1)
      --autostart \
      --memory=16384 \
      --cpu host \
      --vcpus=4 \
      --network network:macvtap-net,mac=<mac_address> \ (2)
      --cdrom "<path_to_image.iso>" \ (3)
      --disk <qcow_image_path> \
      --graphics none \
      --noautoconsole \
      --os-variant <os_version> \ (4)
      --wait=-1
    1 Specify the name of the virtual machine.
    2 Specify the MAC address.
    3 Specify the location of the image.iso file.
    4 Specify the operating system version that you are using.

Adding IBM Z LPAR as agents

You can add a logical partition (LPAR) on IBM Z or IBM LinuxONE as a compute node to a hosted control plane.

Procedure
  1. Create a boot parameter file for the agents:

    Example parameter file
    rd.neednet=1 cio_ignore=all,!condev \
    console=ttysclp0 \
    ignition.firstboot ignition.platform.id=metal
    coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \(1)
    coreos.inst.persistent-kargs=console=ttysclp0
    ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \(2)
    rd.znet=qeth,<network_adaptor_range>,layer2=1
    rd.<disk_type>=<adapter> \(3)
    zfcp.allow_lun_scan=0
    ai.ip_cfg_override=1 \(4)
    random.trust_cpu=on rd.luks.options=discard
    1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
    2 For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
    3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Fedora CoreOS (FCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where FCOS is to be installed.
    4 Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
  2. Download the .ins and initrd.img.addrsize files from the InfraEnv resource.

    By default, the URL for the .ins and initrd.img.addrsize files is not available in the InfraEnv resource. You must edit the URL to fetch those artifacts.

    1. Update the kernel URL endpoint to include ins-file by running the following command:

      $ curl -k -L -o generic.ins "< url for ins-file >"
      Example URL
      https://…/boot-artifacts/ins-file?arch=s390x&version=4.17.0
    2. Update the initrd URL endpoint to include s390x-initrd-addrsize:

      Example URL
      https://…./s390x-initrd-addrsize?api_key=<api-key>&arch=s390x&version=4.17.0
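      A download command that mirrors the previous sub-step might look like the following sketch; the output file name is an assumption:

      $ curl -k -L -o initrd.img.addrsize "< url for s390x-initrd-addrsize >"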
  3. Transfer the initrd, kernel, generic.ins, and initrd.img.addrsize parameter files to the file server. For more information about how to transfer the files with FTP and boot, see "Installing in an LPAR".

  4. Start the machine.

  5. Repeat the procedure for all other machines in the cluster.


Adding IBM z/VM as agents

If you want to use a static IP address for a z/VM guest, you must configure the NMStateConfig attribute for the z/VM agent so that the IP parameter persists through the second start.
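
A minimal NMStateConfig sketch follows for reference; the resource name, the interface name, the IP values, and the label that ties the resource to the nmStateConfigLabelSelector field in your InfraEnv resource are assumptions for illustration only:

apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: <nmstateconfig_name>
  namespace: <hosted_control_plane_namespace>
  labels:
    infraenv-label: <hosted_cluster_name> # assumed label; it must match the nmStateConfigLabelSelector of your InfraEnv resource
spec:
  config:
    interfaces:
    - name: eth0
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: <ip_address>
          prefix-length: 24
  interfaces:
  - name: eth0
    macAddress: <mac_address>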

Complete the following steps to start your IBM Z environment with the downloaded PXE images from the InfraEnv resource. After the Agents are created, the host communicates with the Assisted Service and registers in the same namespace as the InfraEnv resource on the management cluster.

Procedure
  1. Update the parameter file to add the rootfs_url, network_adaptor, and disk_type values.

    Example parameter file
    rd.neednet=1 cio_ignore=all,!condev \
    console=ttysclp0  \
    ignition.firstboot ignition.platform.id=metal \
    coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \(1)
    coreos.inst.persistent-kargs=console=ttysclp0
    ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \(2)
    rd.znet=qeth,<network_adaptor_range>,layer2=1
    rd.<disk_type>=<adapter> \(3)
    zfcp.allow_lun_scan=0
    ai.ip_cfg_override=1 \(4)
    1 For the coreos.live.rootfs_url artifact, specify the matching rootfs artifact for the kernel and initramfs that you are starting. Only HTTP and HTTPS protocols are supported.
    2 For the ip parameter, manually assign the IP address, as described in Installing a cluster with z/VM on IBM Z and IBM LinuxONE.
    3 For installations on DASD-type disks, use rd.dasd to specify the DASD where Red Hat Enterprise Linux CoreOS (RHCOS) is to be installed. For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed.

    For FCP multipath configurations, provide two disks instead of one.

    Example
    rd.zfcp=<adapter1>,<wwpn1>,<lun1> \
    rd.zfcp=<adapter2>,<wwpn2>,<lun2>
    4 Specify this parameter when you use an Open Systems Adapter (OSA) or HiperSockets.
  2. Move initrd, kernel images, and the parameter file to the guest VM by running the following commands:

    vmur pun -r -u -N kernel.img $INSTALLERKERNELLOCATION/<image name>
    vmur pun -r -u -N generic.parm $PARMFILELOCATION/paramfilename
    vmur pun -r -u -N initrd.img $INSTALLERINITRAMFSLOCATION/<image name>
  3. Run the following command from the guest VM console:

    cp ipl c
  4. To list the agents and their properties, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agents
    Example output
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    50c23cda-cedc-9bbd-bcf1-9b3a5c75804d                        auto-assign
    5e498cd3-542c-e54f-0c58-ed43e28b568a                        auto-assign
  5. Run the following command to approve the agent:

    $ oc -n <hosted_control_plane_namespace> patch agent \
      50c23cda-cedc-9bbd-bcf1-9b3a5c75804d -p \
      '{"spec":{"installation_disk_id":"/dev/sda","approved":true,"hostname":"worker-zvm-0.hostedn.example.com"}}' \(1)
      --type merge
    1 Optionally, you can set the installation disk ID and the host name for the agent in the specification.
  6. Run the following command to verify that the agents are approved:

    $ oc -n <hosted_control_plane_namespace> get agents
    Example output
    NAME                                            CLUSTER     APPROVED   ROLE          STAGE
    50c23cda-cedc-9bbd-bcf1-9b3a5c75804d             true       auto-assign
    5e498cd3-542c-e54f-0c58-ed43e28b568a             true       auto-assign

Scaling the NodePool object for a hosted cluster on IBM Z

The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to the hosted control plane.

When you scale up a node pool, a machine is created. The Cluster API provider finds an Agent that is approved, is passing validations, is not currently in use, and meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions.

Procedure
  1. Run the following command to scale the NodePool object to two nodes:

    $ oc -n <clusters_namespace> scale nodepool <nodepool_name> --replicas 2

    The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OKD nodes. The agents pass through the transition phases in the following order:

    • binding

    • discovering

    • insufficient

    • installing

    • installing-in-progress

    • added-to-existing-cluster

  2. Run the following command to see the status of a specific scaled agent:

    $ oc -n <hosted_control_plane_namespace> get agent \
      -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
    Example output
    BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound
    BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient
  3. Run the following command to see the transition phases:

    $ oc -n <hosted_control_plane_namespace> get agent
    Example output
    NAME                                   CLUSTER           APPROVED       ROLE        STAGE
    50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   hosted-forwarder   true          auto-assign
    5e498cd3-542c-e54f-0c58-ed43e28b568a                      true          auto-assign
    da503cf1-a347-44f2-875c-4960ddb04091   hosted-forwarder   true          auto-assign
  4. Run the following command to generate the kubeconfig file to access the hosted cluster:

    $ hcp create kubeconfig \
      --namespace <clusters_namespace> \
      --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
  5. After the agents reach the added-to-existing-cluster state, verify that you can see the OKD nodes by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
    Example output
    NAME                             STATUS   ROLES    AGE      VERSION
    worker-zvm-0.hostedn.example.com Ready    worker   5m41s    v1.24.0+3882f8f
    worker-zvm-1.hostedn.example.com Ready    worker   6m3s     v1.24.0+3882f8f

    Cluster Operators start to reconcile by adding workloads to the nodes.

  6. Enter the following command to verify that two machines were created when you scaled up the NodePool object:

    $ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io
    Example output
    NAME                                CLUSTER                  NODENAME                           PROVIDERID                                     PHASE     AGE   VERSION
    hosted-forwarder-79558597ff-5tbqp   hosted-forwarder-crqq5   worker-zvm-0.hostedn.example.com   agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   Running   41h   4.15.0
    hosted-forwarder-79558597ff-lfjfk   hosted-forwarder-crqq5   worker-zvm-1.hostedn.example.com   agent://5e498cd3-542c-e54f-0c58-ed43e28b568a   Running   41h   4.15.0
  7. Run the following command to check the cluster version:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co
    Example output
    NAME                                         VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
    clusterversion.config.openshift.io/version   4.15.0-ec.2   True        False         40h     Cluster version is 4.15.0-ec.2
  8. Run the following command to check the cluster operator status:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators

For each component of your cluster, the output shows the following cluster operator statuses: NAME, VERSION, AVAILABLE, PROGRESSING, DEGRADED, SINCE, and MESSAGE.

For an output example, see Initial Operator configuration.
