Deploying hosted control planes on bare metal in a disconnected environment

When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and multicluster engine for Kubernetes Operator work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see Enabling the central infrastructure management service.

Disconnected environment architecture for bare metal

The following diagram illustrates an example architecture of a disconnected environment:

Disconnected architecture diagram

  1. Configure infrastructure services, including the registry certificate deployment with TLS support, web server, and DNS, to ensure that the disconnected deployment works.

  2. Create a config map in the openshift-config namespace. In this example, the config map is named registry-config. The content of the config map is the Registry CA certificate. The data field of the config map must contain the following key/value:

    • Key: <registry_dns_domain_name>..<port>, for example, registry.hypershiftdomain.lab..5000. Ensure that you place .. after the registry DNS domain name when you specify a port.

    • Value: The certificate content

      For more information about creating a config map, see Configuring TLS certificates for a disconnected installation of hosted control planes.
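      For illustration only, a minimal version of this config map might resemble the following sketch. The registry host name, port, and certificate content are placeholders for your environment, and the data key follows the <registry_dns_domain_name>..<port> convention described above:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: registry-config
        namespace: openshift-config
      data:
        registry.hypershiftdomain.lab..5000: |
          -----BEGIN CERTIFICATE-----
          # certificate content of the mirror registry CA
          -----END CERTIFICATE-----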

  3. Modify the images.config.openshift.io custom resource (CR) specification and add a new field named additionalTrustedCA with a value of name: registry-config.

  4. Create a config map that contains two data fields. One field contains the registries.conf file in RAW format, and the other field contains the Registry CA and is named ca-bundle.crt. The config map belongs to the multicluster-engine namespace, and the config map name is referenced in other objects. For an example of a config map, see the following sample configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-registries
      namespace: multicluster-engine
      labels:
        app: assisted-service
    data:
      ca-bundle.crt: |
        -----BEGIN CERTIFICATE-----
        # ...
        -----END CERTIFICATE-----
      registries.conf: |
        unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
    
        [[registry]]
        prefix = ""
        location = "registry.redhat.io/openshift4"
        mirror-by-digest-only = true
    
        [[registry.mirror]]
          location = "registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4"
    
        [[registry]]
        prefix = ""
        location = "registry.redhat.io/rhacm2"
        mirror-by-digest-only = true
    # ...
    # ...
  5. In the multicluster engine Operator namespace, create the MultiClusterEngine CR, which enables both the Agent and hypershift-addon add-ons. The multicluster engine Operator namespace must contain the config maps that modify behavior in a disconnected deployment. The namespace also contains the multicluster-engine, assisted-service, and hypershift-addon-manager pods.
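    As a rough sketch, a MultiClusterEngine CR that enables the HyperShift add-on might look like the following example. The component name shown is an assumption and can vary between multicluster engine Operator versions; consult the multicluster engine Operator documentation for the exact fields:

    apiVersion: multicluster.openshift.io/v1
    kind: MultiClusterEngine
    metadata:
      name: multiclusterengine
    spec:
      overrides:
        components:
        - name: hypershift   # assumed component name for the hosted control planes add-on
          enabled: true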

  6. Create the following objects that are necessary to deploy the hosted cluster:

    • Secrets: Secrets contain the pull secret, SSH key, and etcd encryption key.

    • Config map: The config map contains the CA certificate of the private registry.

    • HostedCluster: The HostedCluster resource defines the configuration of the cluster that the user intends to create.

    • NodePool: The NodePool resource identifies the node pool that references the machines to use for the data plane.

  7. After you create the hosted cluster objects, the HyperShift Operator establishes the HostedControlPlane namespace to accommodate control plane pods. The namespace also hosts components such as Agents, bare metal hosts (BMHs), and the InfraEnv resource. Later, you create the InfraEnv resource, and after ISO creation, you create the BMHs and their secrets that contain baseboard management controller (BMC) credentials.

  8. The Metal3 Operator in the openshift-machine-api namespace inspects the new BMHs. Then, the Metal3 Operator tries to connect to the BMCs to start them by using the configured LiveISO and RootFS values that are specified through the AgentServiceConfig CR in the multicluster engine Operator namespace.

  9. After the worker nodes of the HostedCluster resource are started, an Agent container is started. This agent establishes contact with the Assisted Service, which orchestrates the actions to complete the deployment. Initially, you need to scale the NodePool resource to the number of worker nodes for the HostedCluster resource. The Assisted Service manages the remaining tasks.

  10. At this point, you wait for the deployment process to be completed.

Requirements to deploy hosted control planes on bare metal in a disconnected environment

To configure hosted control planes in a disconnected environment, you must meet the following prerequisites:

  • CPU: The number of CPUs provided determines how many hosted clusters can run concurrently. In general, use 16 CPUs for each node for 3 nodes. For minimal development, you can use 12 CPUs for each node for 3 nodes.

  • Memory: The amount of RAM affects how many hosted clusters can be hosted. Use 48 GB of RAM for each node. For minimal development, 18 GB of RAM might be sufficient.

  • Storage: Use SSD storage for multicluster engine Operator.

    • Management cluster: 250 GB.

    • Registry: The storage needed depends on the number of releases, operators, and images that are hosted. An acceptable number might be 500 GB, preferably separated from the disk that hosts the hosted cluster.

    • Web server: The storage needed depends on the number of ISOs and images that are hosted. An acceptable number might be 500 GB.

  • Production: For a production environment, separate the management cluster, the registry, and the web server on different disks. This example illustrates a possible configuration for production:

    • Registry: 2 TB

    • Management cluster: 500 GB

    • Web server: 2 TB

Extracting the release image digest

You can extract the OKD release image digest by using the tagged image.

Procedure
  • Obtain the image digest by running the following command:

    $ oc adm release info <tagged_openshift_release_image> | grep "Pull From"

    Replace <tagged_openshift_release_image> with the tagged image for the supported OKD version, for example, quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64.

    Example output
    Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe

Configuring the hypervisor for a disconnected installation of hosted control planes

The following information applies to virtual machine environments only.

Procedure
  1. To deploy a virtual management cluster, access the required packages by entering the following command:

    $ sudo dnf install dnsmasq radvd vim golang podman bind-utils net-tools httpd-tools tree htop strace tmux -y
  2. Enable and start the Podman service by entering the following command:

    $ systemctl enable --now podman
  3. To use kcli to deploy the management cluster and other virtual components, install and configure the hypervisor by entering the following commands:

    $ sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
    $ sudo usermod -aG qemu,libvirt $(id -un)
    $ sudo newgrp libvirt
    $ sudo systemctl enable --now libvirtd
    $ sudo dnf -y copr enable karmab/kcli
    $ sudo dnf -y install kcli
    $ sudo kcli create pool -p /var/lib/libvirt/images default
    $ kcli create host kvm -H 127.0.0.1 local
    $ sudo setfacl -m u:$(id -un):rwx /var/lib/libvirt/images
    $ kcli create network  -c 192.168.122.0/24 default
  4. Enable the network manager dispatcher to ensure that virtual machines can resolve the required domains, routes, and registries. To enable the network manager dispatcher, in the /etc/NetworkManager/dispatcher.d/ directory, create a script named forcedns that contains the following content:

    #!/bin/bash
    
    export IP="192.168.126.1" (1)
    export BASE_RESOLV_CONF="/run/NetworkManager/resolv.conf"
    
    if ! grep -q "$IP" /etc/resolv.conf; then
    export TMP_FILE=$(mktemp /etc/forcedns_resolv.conf.XXXXXX)
    cp $BASE_RESOLV_CONF $TMP_FILE
    chmod --reference=$BASE_RESOLV_CONF $TMP_FILE
    sed -i -e "s/dns.base.domain.name//" -e "s/search /& dns.base.domain.name /" -e "0,/nameserver/s/nameserver/& $IP\n&/" $TMP_FILE (2)
    mv $TMP_FILE /etc/resolv.conf
    fi
    echo "ok"
    1 Modify the IP variable to point to the IP address of the hypervisor interface that hosts the OKD management cluster.
    2 Replace dns.base.domain.name with the DNS base domain name.
  5. After you create the file, add permissions by entering the following command:

    $ chmod 755 /etc/NetworkManager/dispatcher.d/forcedns
  6. Run the script and verify that the output returns ok.
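    For example, assuming the script path from the previous step, and running as root if the script needs to modify /etc/resolv.conf:

    $ /etc/NetworkManager/dispatcher.d/forcedns

    Example output
    ok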

  7. Configure ksushy to simulate baseboard management controllers (BMCs) for the virtual machines. Enter the following commands:

    $ sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y
    $ kcli create sushy-service --ssl --ipv6 --port 9000
    $ sudo systemctl daemon-reload
    $ systemctl enable --now ksushy
  8. Test whether the service is correctly functioning by entering the following command:

    $ systemctl status ksushy
  9. If you are working in a development environment, configure the hypervisor system to allow various types of connections through different virtual networks within the environment.

    If you are working in a production environment, you must establish proper rules for the firewalld service and configure SELinux policies to maintain a secure environment.

    • For SELinux, enter the following command:

      $ sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config; setenforce 0
    • For firewalld, enter the following command:

      $ systemctl disable --now firewalld
    • For libvirtd, enter the following commands:

      $ systemctl restart libvirtd
      $ systemctl enable --now libvirtd

DNS configurations on bare metal

The API Server for the hosted cluster is exposed as a NodePort service. A DNS entry must exist for api.<hosted_cluster_name>.<base_domain> that points to the destination where the API Server can be reached.

The DNS entry can be as simple as a record that points to one of the nodes in the managed cluster that is running the hosted control plane. The entry can also point to a load balancer that is deployed to redirect incoming traffic to the ingress pods.

Example DNS configuration
api.example.krnl.es.    IN A 192.168.122.20
api.example.krnl.es.    IN A 192.168.122.21
api.example.krnl.es.    IN A 192.168.122.22
api-int.example.krnl.es.    IN A 192.168.122.20
api-int.example.krnl.es.    IN A 192.168.122.21
api-int.example.krnl.es.    IN A 192.168.122.22
`*`.apps.example.krnl.es. IN A 192.168.122.23

If you are configuring DNS for a disconnected environment on an IPv6 network, the configuration looks like the following example.

Example DNS configuration for an IPv6 network
api.example.krnl.es.    IN AAAA 2620:52:0:1306::5
api.example.krnl.es.    IN AAAA 2620:52:0:1306::6
api.example.krnl.es.    IN AAAA 2620:52:0:1306::7
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::5
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::6
api-int.example.krnl.es.    IN AAAA 2620:52:0:1306::7
`*`.apps.example.krnl.es. IN AAAA 2620:52:0:1306::10

If you are configuring DNS for a disconnected environment on a dual stack network, be sure to include DNS entries for both IPv4 and IPv6.

Example DNS configuration for a dual stack network
host-record=api-int.hub-dual.dns.base.domain.name,192.168.126.10
host-record=api.hub-dual.dns.base.domain.name,192.168.126.10
address=/apps.hub-dual.dns.base.domain.name/192.168.126.11
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,192.168.126.20
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,192.168.126.21
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,192.168.126.22
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,192.168.126.25
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,192.168.126.26

host-record=api-int.hub-dual.dns.base.domain.name,2620:52:0:1306::2
host-record=api.hub-dual.dns.base.domain.name,2620:52:0:1306::2
address=/apps.hub-dual.dns.base.domain.name/2620:52:0:1306::3
dhcp-host=aa:aa:aa:aa:10:01,ocp-master-0,[2620:52:0:1306::5]
dhcp-host=aa:aa:aa:aa:10:02,ocp-master-1,[2620:52:0:1306::6]
dhcp-host=aa:aa:aa:aa:10:03,ocp-master-2,[2620:52:0:1306::7]
dhcp-host=aa:aa:aa:aa:10:06,ocp-installer,[2620:52:0:1306::8]
dhcp-host=aa:aa:aa:aa:10:07,ocp-bootstrap,[2620:52:0:1306::9]

Deploying a registry for hosted control planes in a disconnected environment

For development environments, deploy a small, self-hosted registry by using a Podman container. For production environments, deploy an enterprise-hosted registry, such as Red Hat Quay, Nexus, or Artifactory.

Procedure

To deploy a small registry by using Podman, complete the following steps:

  1. As a privileged user, access the ${HOME} directory and create the following script:

    #!/usr/bin/env bash
    
    set -euo pipefail
    
    PRIMARY_NIC=$(ls -1 /sys/class/net | grep -v podman | head -1)
    export PATH=/root/bin:$PATH
    export PULL_SECRET="/root/baremetal/hub/openshift_pull.json" (1)
    
    if [[ ! -f $PULL_SECRET ]]; then
      echo "Pull Secret not found, exiting..."
      exit 1
    fi
    
    dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel
    export IP=$(ip -o addr show $PRIMARY_NIC | head -1 | awk '{print $4}' | cut -d'/' -f1)
    REGISTRY_NAME=registry.$(hostname --long)
    REGISTRY_USER=dummy
    REGISTRY_PASSWORD=dummy
    KEY=$(echo -n $REGISTRY_USER:$REGISTRY_PASSWORD | base64)
    echo "{\"auths\": {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\", \"email\": \"sample-email@domain.ltd\"}}}" > /root/disconnected_pull.json
    mv ${PULL_SECRET} /root/openshift_pull.json.old
    jq ".auths += {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\",\"email\": \"sample-email@domain.ltd\"}}" < /root/openshift_pull.json.old > $PULL_SECRET
    mkdir -p /opt/registry/{auth,certs,data,conf}
    cat <<EOF > /opt/registry/conf/config.yml
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
      delete:
        enabled: true
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
    compatibility:
      schema1:
        enabled: true
    EOF
    openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj "/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=$REGISTRY_NAME" -addext "subjectAltName=DNS:$REGISTRY_NAME"
    cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
    update-ca-trust extract
    htpasswd -bBc /opt/registry/auth/htpasswd $REGISTRY_USER $REGISTRY_PASSWORD
    podman create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest
    [ "$?" == "0" ] || !!
    systemctl enable --now registry
    1 Replace the location of the PULL_SECRET with the appropriate location for your setup.
  2. Name the script file registry.sh and save it. When you run the script, it pulls in the following information:

    • The registry name, based on the hypervisor hostname

    • The necessary credentials and user access details

  3. Adjust permissions by adding the execution flag as follows:

    $ chmod u+x ${HOME}/registry.sh
  4. To run the script without any parameters, enter the following command:

    $ ${HOME}/registry.sh

    The script starts the server. The script uses a systemd service for management purposes.

  5. If you need to manage the registry service that the script enables, you can use the following commands:

    $ systemctl status registry
    $ systemctl start registry
    $ systemctl stop registry

The root folder for the registry is in the /opt/registry directory and contains the following subdirectories:

  • certs contains the TLS certificates.

  • auth contains the credentials.

  • data contains the registry images.

  • conf contains the registry configuration.

Setting up a management cluster for hosted control planes in a disconnected environment

To set up an OKD management cluster, you can use dev-scripts, or if you are based on virtual machines, you can use the kcli tool. The following instructions are specific to the kcli tool.

Procedure
  1. Ensure that the right networks are prepared for use in the hypervisor. The networks will host both the management and hosted clusters. Enter the following kcli command:

    $ kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual

    where:

    • -c specifies the CIDR for the network.

    • -P dhcp=false configures the network to disable the DHCP, which is handled by the dnsmasq that you configured.

    • -P dns=false configures the network to disable the DNS, which is also handled by the dnsmasq that you configured.

    • --domain sets the domain to search.

    • dns.base.domain.name is the DNS base domain name.

    • dual is the name of the network that you are creating.

  2. After the network is created, review the following output:

    [root@hypershiftbm ~]# kcli list network
    Listing Networks...
    +---------+--------+---------------------+-------+------------------+------+
    | Network |  Type  |         Cidr        |  Dhcp |      Domain      | Mode |
    +---------+--------+---------------------+-------+------------------+------+
    | default | routed |   192.168.122.0/24  |  True |     default      | nat  |
    | ipv4    | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat  |
    | ipv4    | routed |   192.168.125.0/24  | False | dns.base.domain.name | nat  |
    | ipv6    | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat  |
    +---------+--------+---------------------+-------+------------------+------+
    [root@hypershiftbm ~]# kcli info network ipv6
    Providing information about network ipv6...
    cidr: 2620:52:0:1306::/64
    dhcp: false
    domain: dns.base.domain.name
    mode: nat
    plan: kvirt
    type: routed
  3. Ensure that the pull secret and kcli plan files are in place so that you can deploy the OKD management cluster:

    1. Confirm that the pull secret is in the same folder as the kcli plan, and that the pull secret file is named openshift_pull.json.

    2. Add the kcli plan, which contains the OKD definition, in the mgmt-compact-hub-dual.yaml file. Ensure that you update the file contents to match your environment:

      plan: hub-dual
      force: true
      version: stable
      tag: "4.x.y-x86_64" (1)
      cluster: "hub-dual"
      dualstack: true
      domain: dns.base.domain.name
      api_ip: 192.168.126.10
      ingress_ip: 192.168.126.11
      service_networks:
      - 172.30.0.0/16
      - fd02::/112
      cluster_networks:
      - 10.132.0.0/14
      - fd01::/48
      disconnected_url: registry.dns.base.domain.name:5000
      disconnected_update: true
      disconnected_user: dummy
      disconnected_password: dummy
      disconnected_operators_version: v4.14
      disconnected_operators:
      - name: metallb-operator
      - name: lvms-operator
        channels:
        - name: stable-4.14
      disconnected_extra_images:
      - quay.io/user-name/trbsht:latest
      - quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3
      - registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
      dualstack: true
      disk_size: 200
      extra_disks: [200]
      memory: 48000
      numcpus: 16
      ctlplanes: 3
      workers: 0
      manifests: extra-manifests
      metal3: true
      network: dual
      users_dev: developer
      users_devpassword: developer
      users_admin: admin
      users_adminpassword: admin
      metallb_pool: dual-virtual-network
      metallb_ranges:
      - 192.168.126.150-192.168.126.190
      metallb_autoassign: true
      apps:
      - users
      - lvms-operator
      - metallb-operator
      vmrules:
      - hub-bootstrap:
          nets:
          - name: ipv6
            mac: aa:aa:aa:aa:10:07
      - hub-ctlplane-0:
          nets:
          - name: ipv6
            mac: aa:aa:aa:aa:10:01
      - hub-ctlplane-1:
          nets:
          - name: ipv6
            mac: aa:aa:aa:aa:10:02
      - hub-ctlplane-2:
          nets:
          - name: ipv6
            mac: aa:aa:aa:aa:10:03
      1 Replace 4.x.y with the supported OKD version you want to use.
  4. To provision the management cluster, enter the following command:

    $ kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml
Next steps

Next, configure the web server.

Configuring the web server for hosted control planes in a disconnected environment

You need to configure an additional web server to host the Fedora CoreOS (FCOS) images that are associated with the OKD release that you are deploying as a hosted cluster.

Procedure

To configure the web server, complete the following steps:

  1. Extract the openshift-install binary from the OKD release that you want to use by entering the following command:

    $ oc adm -a ${LOCAL_SECRET_JSON} release extract --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"
  2. Run the following script. The script creates a folder in the /opt/srv directory. The folder contains the FCOS images to provision the worker nodes.

    #!/bin/bash
    
    WEBSRV_FOLDER=/opt/srv
    ROOTFS_IMG_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')" (1)
    LIVE_ISO_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')" (2)
    
    mkdir -p ${WEBSRV_FOLDER}/images
    curl -Lk ${ROOTFS_IMG_URL} -o ${WEBSRV_FOLDER}/images/${ROOTFS_IMG_URL##*/}
    curl -Lk ${LIVE_ISO_URL} -o ${WEBSRV_FOLDER}/images/${LIVE_ISO_URL##*/}
    chmod -R 755 ${WEBSRV_FOLDER}/*
    
    ## Run Webserver
    podman ps --noheading | grep -q websrv-ai
    if [[ $? == 0 ]];then
        echo "Launching Registry pod..."
        /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080
    fi
    1 You can find the ROOTFS_IMG_URL value on the OpenShift CI Release page.
    2 You can find the LIVE_ISO_URL value on the OpenShift CI Release page.

After the download is completed, a container runs to host the images on a web server. The container uses a variation of the official HTTPd image, which also enables it to work with IPv6 networks.
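To confirm that the web server is serving the downloaded artifacts, you can query it from the hypervisor. This check is a sketch that assumes port 8080, which the container image listens on and which the osImages URLs later in this document also use; replace dns.base.domain.name with the DNS base domain name:

$ curl http://registry.dns.base.domain.name:8080/images/

The response should reference the ISO and RootFS files that the script downloaded.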

Configuring image mirroring for hosted control planes in a disconnected environment

Image mirroring is the process of fetching images from external registries, such as registry.redhat.io or quay.io, and storing them in your private registry.

In the following procedures, the oc-mirror tool is used, which is a binary that uses the ImageSetConfiguration object. In the file, you can specify the following information:

  • The OKD versions to mirror. The versions are in quay.io.

  • The additional Operators to mirror. Select packages individually.

  • The extra images that you want to add to the repository.

Prerequisites

Ensure that the registry server is running before you start the mirroring process.
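For example, you can verify that the registry responds by querying its v2 API. This check is a sketch that assumes the user, password, port, and host name from the registry deployment script earlier in this document:

$ curl -u dummy:dummy https://registry.$(hostname --long):5000/v2/_catalog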

Procedure

To configure image mirroring, complete the following steps:

  1. Ensure that your ${HOME}/.docker/config.json file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to.
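    A minimal file might look like the following sketch. The auth values are base64-encoded <user>:<password> strings and are placeholders here; the registry host names are examples from this document:

    {
      "auths": {
        "quay.io": {
          "auth": "<base64_credentials>"
        },
        "registry.redhat.io": {
          "auth": "<base64_credentials>"
        },
        "registry.dns.base.domain.name:5000": {
          "auth": "<base64_credentials>"
        }
      }
    }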

  2. By using the following example, create an ImageSetConfiguration object to use for mirroring. Replace values as needed to match your environment:

    apiVersion: mirror.openshift.io/v1alpha2
    kind: ImageSetConfiguration
    storageConfig:
      registry:
        imageURL: registry.<dns.base.domain.name>:5000/openshift/release/metadata:latest (1)
    mirror:
      platform:
        channels:
        - name: candidate-4.17
          minVersion: 4.x.y-build  (2)
          maxVersion: 4.x.y-build (2)
          type: ocp
        kubeVirtContainer: true (3)
        graph: true
      additionalImages:
      - name: quay.io/karmab/origin-keepalived-ipfailover:latest
      - name: quay.io/karmab/kubectl:latest
      - name: quay.io/karmab/haproxy:latest
      - name: quay.io/karmab/mdns-publisher:latest
      - name: quay.io/karmab/origin-coredns:latest
      - name: quay.io/karmab/curl:latest
      - name: quay.io/karmab/kcli:latest
      - name: quay.io/user-name/trbsht:latest
      - name: quay.io/user-name/hypershift:BMSelfManage-v4.17
      - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
        packages:
        - name: lvms-operator
        - name: local-storage-operator
        - name: odf-csi-addons-operator
        - name: odf-operator
        - name: mcg-operator
        - name: ocs-operator
        - name: metallb-operator
        - name: kubevirt-hyperconverged (4)
    1 Replace <dns.base.domain.name> with the DNS base domain name.
    2 Replace 4.x.y-build with the supported OKD version you want to use.
    3 Set this optional flag to true if you want to also mirror the container disk image for the Fedora CoreOS (FCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
    4 For deployments that use the KubeVirt provider, include this line.
  3. Start the mirroring process by entering the following command:

    $ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}

    After the mirroring process is finished, you have a new folder named oc-mirror-workspace/results-XXXXXX/, which contains the IDMS and the catalog sources to apply on the hosted cluster.

  4. Mirror the nightly or CI versions of OKD by configuring the imagesetconfig.yaml file as follows:

    apiVersion: mirror.openshift.io/v2alpha1
    kind: ImageSetConfiguration
    mirror:
      platform:
        graph: true
        release: registry.ci.openshift.org/ocp/release:4.x.y-build (1)
        kubeVirtContainer: true (2)
    # ...
    1 Replace 4.x.y-build with the supported OKD version you want to use.
    2 Set this optional flag to true if you want to also mirror the container disk image for the Fedora CoreOS (FCOS) boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
  5. Apply the changes to the file by entering the following command:

    $ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
  6. Mirror the latest multicluster engine Operator images by following the steps in Install on disconnected networks.

Applying objects in the management cluster

After the mirroring process is complete, you need to apply two objects in the management cluster:

  • ImageContentSourcePolicy (ICSP) or ImageDigestMirrorSet (IDMS)

  • Catalog sources

When you use the oc-mirror tool, the output artifacts are in a folder named oc-mirror-workspace/results-XXXXXX/.

The ICSP or IDMS initiates a MachineConfig change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as READY, you need to apply the newly generated catalog sources.
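For example, you can watch the rollout and the node status before you apply the catalog sources:

$ oc get mcp
$ oc get nodes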

The catalog sources initiate actions in the openshift-marketplace Operator, such as downloading the catalog image and processing it to retrieve all the PackageManifests that are included in that image.

Procedure
  1. To check the new sources, run the following command by using the new CatalogSource as a source:

    $ oc get packagemanifest
  2. To apply the artifacts, complete the following steps:

    1. Create the ICSP or IDMS artifacts by entering the following command:

      $ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml
    2. Wait for the nodes to become ready, and then enter the following command:

      $ oc apply -f catalogSource-XXXXXXXX-index.yaml
  3. Mirror the OLM catalogs and configure the hosted cluster to point to the mirror.

    When you use the management (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster.

    1. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the hypershift.openshift.io/olm-catalogs-is-registry-overrides annotation to the HostedCluster resource. The format is "sr1=dr1,sr2=dr2", where the source registry string is a key and the destination registry is a value.

    2. To bypass the OLM catalog image stream mechanism, use the following four annotations on the HostedCluster resource to directly specify the addresses of the four images to use for OLM Operator catalogs, as shown in the sketch after this list:

      • hypershift.openshift.io/certified-operators-catalog-image

      • hypershift.openshift.io/community-operators-catalog-image

      • hypershift.openshift.io/redhat-marketplace-catalog-image

      • hypershift.openshift.io/redhat-operators-catalog-image

In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates.
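The following sketch shows how those annotations might look in the HostedCluster metadata. The mirrored image references are placeholders for your environment:

apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
  annotations:
    hypershift.openshift.io/certified-operators-catalog-image: registry.<dns.base.domain.name>:5000/openshift/certified-operator-index:v4.17
    hypershift.openshift.io/community-operators-catalog-image: registry.<dns.base.domain.name>:5000/openshift/community-operator-index:v4.17
    hypershift.openshift.io/redhat-marketplace-catalog-image: registry.<dns.base.domain.name>:5000/openshift/redhat-marketplace-index:v4.17
    hypershift.openshift.io/redhat-operators-catalog-image: registry.<dns.base.domain.name>:5000/openshift/redhat-operator-index:v4.17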

Next steps

Deploy the multicluster engine Operator by completing the steps in Deploying multicluster engine Operator for a disconnected installation of hosted control planes.

Deploying multicluster engine Operator for a disconnected installation of hosted control planes

The multicluster engine for Kubernetes Operator plays a crucial role in deploying clusters across providers. If you do not have multicluster engine Operator installed, review the multicluster engine Operator documentation to understand the prerequisites and steps to install it.

Deploying AgentServiceConfig resources

The AgentServiceConfig custom resource is an essential component of the Assisted Service add-on that is part of multicluster engine Operator. It is responsible for bare metal cluster deployment. When the add-on is enabled, you deploy the AgentServiceConfig resource to configure the add-on.

In addition to configuring the AgentServiceConfig resource, you need to include additional config maps to ensure that multicluster engine Operator functions properly in a disconnected environment.

Procedure
  1. Configure the custom registries by adding the following config map, which contains the disconnected details to customize the deployment:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-registries
      namespace: multicluster-engine
      labels:
        app: assisted-service
    data:
      ca-bundle.crt: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      registries.conf: |
        unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
    
        [[registry]]
        prefix = ""
        location = "registry.redhat.io/openshift4"
        mirror-by-digest-only = true
    
        [[registry.mirror]]
          location = "registry.dns.base.domain.name:5000/openshift4" (1)
    
        [[registry]]
        prefix = ""
        location = "registry.redhat.io/rhacm2"
        mirror-by-digest-only = true
        # ...
        # ...
    1 Replace dns.base.domain.name with the DNS base domain name.

    The object contains two fields:

    • Custom CAs: This field contains the Certificate Authorities (CAs) that are loaded into the various processes of the deployment.

    • Registries: The registries.conf field contains information about images and namespaces that need to be consumed from a mirror registry rather than the original source registry.

  2. Configure the Assisted Service by adding the AgentServiceConfig object, as shown in the following example:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: AgentServiceConfig
    metadata:
      annotations:
        unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config (1)
      name: agent
      namespace: multicluster-engine
    spec:
      mirrorRegistryRef:
        name: custom-registries (2)
      databaseStorage:
        storageClassName: lvms-vg1
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
      filesystemStorage:
        storageClassName: lvms-vg1
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
      osImages: (3)
      - cpuArchitecture: x86_64 (4)
        openshiftVersion: "4.14"
        rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img (5)
        url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso
        version: 414.92.202308281054-0
      - cpuArchitecture: x86_64
       openshiftVersion: "4.15"
       rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img
       url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso
       version: 415.92.202403270524-0
    1 The metadata.annotations["unsupported.agent-install.openshift.io/assisted-service-configmap"] annotation references the config map name that the Operator consumes to customize behavior.
    2 The spec.mirrorRegistryRef.name field points to the config map that contains disconnected registry information that the Assisted Service Operator consumes. This config map adds those resources during the deployment process.
    3 The spec.osImages field contains different versions available for deployment by this Operator. This field is mandatory. This example assumes that you already downloaded the RootFS and LiveISO files.
    4 Add a cpuArchitecture subsection for every OKD release that you want to deploy. In this example, cpuArchitecture subsections are included for 4.14 and 4.15.
    5 In the rootFSUrl and url fields, replace dns.base.domain.name with the DNS base domain name.
  3. Deploy all of the objects by concatenating them into a single file and applying them to the management cluster. To do so, enter the following command:

    $ oc apply -f agentServiceConfig.yaml

    The command triggers two pods.

    Example output
    assisted-image-service-0                               1/1     Running   2             11d (1)
    assisted-service-668b49548-9m7xw                       2/2     Running   5             11d (2)
    
    1 The assisted-image-service pod is responsible for creating the Red Hat Enterprise Linux CoreOS (RHCOS) boot image template, which is customized for each cluster that you deploy.
    2 The assisted-service refers to the Operator.
Next steps

Configure TLS certificates by completing the steps in Configuring TLS certificates for a disconnected installation of hosted control planes.

Configuring TLS certificates for a disconnected installation of hosted control planes

To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster.

Adding the registry CA to the management cluster

To add the registry CA to the management cluster, complete the following steps.

Procedure
  1. Create a config map that resembles the following example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <config_map_name> (1)
      namespace: <config_map_namespace> (2)
    data: (3)
      <registry_name>..<port>: | (4)
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      <registry_name>..<port>: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      <registry_name>..<port>: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    1 Specify the name of the config map.
    2 Specify the namespace for the config map.
    3 In the data field, specify the registry names and the registry certificate content. Replace <port> with the port where the registry server is running; for example, 5000.
    4 Ensure that the data in the config map is defined by using | only instead of other methods, such as | -. If you use other methods, issues can occur when the pod reads the certificates.
  2. Patch the cluster-wide image.config.openshift.io object to include the following specification:

    spec:
      additionalTrustedCA:
        name: registry-config

    As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the OKD payload for hosted cluster deployments.

    The process to patch the object might take several minutes to be completed.
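    For example, you can apply that specification with a merge patch. This is a sketch that assumes the config map from the previous step is named registry-config:

    $ oc patch image.config.openshift.io/cluster --type merge -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'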

Adding the registry CA to the worker nodes for the hosted cluster

In order for the data plane workers in the hosted cluster to be able to retrieve images from the private registry, you need to add the registry CA to the worker nodes.

Procedure
  1. In the HostedCluster object, add the following specification to the hc.spec.additionalTrustBundle field:

    spec:
      additionalTrustBundle:
        name: user-ca-bundle (1)
    1 The user-ca-bundle entry is a config map that you create in the next step.
  2. In the same namespace where the HostedCluster object is created, create the user-ca-bundle config map. The config map resembles the following example:

    apiVersion: v1
    data:
      ca-bundle.crt: |
        // Registry1 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
        // Registry2 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
        // Registry3 CA
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    
    kind: ConfigMap
    metadata:
      name: user-ca-bundle
      namespace: <hosted_cluster_namespace> (1)
    1 Specify the namespace where the HostedCluster object is created.

Creating a hosted cluster on bare metal

A hosted cluster is an OKD cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane.

Deploying hosted cluster objects

Typically, the HyperShift Operator creates the HostedControlPlane namespace. However, in this case, you want to include all the objects before the HyperShift Operator begins to reconcile the HostedCluster object. Then, when the Operator starts the reconciliation process, it can find all of the objects in place.

Procedure
  1. Create a YAML file with the following information about the namespaces:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      creationTimestamp: null
      name: <hosted_cluster_namespace>-<hosted_cluster_name> (1)
    spec: {}
    status: {}
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      creationTimestamp: null
      name: <hosted_cluster_namespace> (2)
    spec: {}
    status: {}
    1 Replace <hosted_cluster_name> with your hosted cluster.
    2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
  2. Create a YAML file with the following information about the config maps and secrets to include in the HostedCluster deployment:

    ---
    apiVersion: v1
    data:
      ca-bundle.crt: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
    kind: ConfigMap
    metadata:
      name: user-ca-bundle
      namespace: <hosted_cluster_namespace> (1)
    ---
    apiVersion: v1
    data:
      .dockerconfigjson: xxxxxxxxx
    kind: Secret
    metadata:
      creationTimestamp: null
      name: <hosted_cluster_name>-pull-secret (2)
      namespace: <hosted_cluster_namespace> (1)
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: sshkey-cluster-<hosted_cluster_name> (2)
      namespace: <hosted_cluster_namespace> (1)
    stringData:
      id_rsa.pub: ssh-rsa xxxxxxxxx
    ---
    apiVersion: v1
    data:
      key: nTPtVBet03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y=
    kind: Secret
    metadata:
      creationTimestamp: null
      name: <hosted_cluster_name>-etcd-encryption-key (2)
      namespace: <hosted_cluster_namespace> (1)
    type: Opaque
    1 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    2 Replace <hosted_cluster_name> with your hosted cluster.
  3. Create a YAML file that contains the RBAC roles so that Assisted Service agents can be in the same HostedControlPlane namespace as the hosted control plane and still be managed by the cluster API:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      creationTimestamp: null
      name: capi-provider-role
      namespace: <hosted_cluster_namespace>-<hosted_cluster_name>  (1) (2)
    rules:
    - apiGroups:
      - agent-install.openshift.io
      resources:
      - agents
      verbs:
      - '*'
    1 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    2 Replace <hosted_cluster_name> with your hosted cluster.
  4. Create a YAML file with information about the HostedCluster object, replacing values as necessary:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: <hosted_cluster_name> (1)
      namespace: <hosted_cluster_namespace> (2)
    spec:
      additionalTrustBundle:
        name: "user-ca-bundle"
      olmCatalogPlacement: guest
      imageContentSources: (3)
      - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
        mirrors:
        - registry.<dns.base.domain.name>:5000/openshift/release (4)
      - source: quay.io/openshift-release-dev/ocp-release
        mirrors:
        - registry.<dns.base.domain.name>:5000/openshift/release-images (4)
      - mirrors:
      ...
      ...
      autoscaling: {}
      controllerAvailabilityPolicy: SingleReplica
      dns:
        baseDomain: <dns.base.domain.name> (4)
      etcd:
        managed:
          storage:
            persistentVolume:
              size: 8Gi
            restoreSnapshotURL: null
            type: PersistentVolume
        managementType: Managed
      fips: false
      networking:
        clusterNetwork:
        - cidr: 10.132.0.0/14
        - cidr: fd01::/48
        networkType: OVNKubernetes
        serviceNetwork:
        - cidr: 172.31.0.0/16
        - cidr: fd02::/112
      platform:
        agent:
          agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name>  (1) (2)
        type: Agent
      pullSecret:
        name: <hosted_cluster_name>-pull-secret (1)
      release:
        image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64  (4) (5)
      secretEncryption:
        aescbc:
          activeKey:
            name: <hosted_cluster_name>-etcd-encryption-key (1)
        type: aescbc
      services:
      - service: APIServer
        servicePublishingStrategy:
          type: LoadBalancer
      - service: OAuthServer
        servicePublishingStrategy:
          type: Route
      - service: OIDC
        servicePublishingStrategy:
          type: Route
      - service: Konnectivity
        servicePublishingStrategy:
          type: Route
      - service: Ignition
        servicePublishingStrategy:
          type: Route
      sshKey:
        name: sshkey-cluster-<hosted_cluster_name> (1)
    status:
      controlPlaneEndpoint:
        host: ""
        port: 0
    1 Replace <hosted_cluster_name> with your hosted cluster.
    2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    3 The imageContentSources section contains mirror references for user workloads within the hosted cluster.
    4 Replace <dns.base.domain.name> with the DNS base domain name.
    5 Replace 4.x.y with the supported OKD version you want to use.
  5. Add an annotation in the HostedCluster object that points to the HyperShift Operator release in the OKD release:

    1. Obtain the image payload by entering the following command:

      $ oc adm release info registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:4.x.y-x86_64 | grep hypershift

      where <dns.base.domain.name> is the DNS base domain name and 4.x.y is the supported OKD version you want to use.

      Example output
      hypershift        sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
    2. By using the OKD Images namespace, check the digest by entering the following command:

      $ podman pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8

      where <dns.base.domain.name> is the DNS base domain name.

      Example output
      podman pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
      Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8...
      Getting image source signatures
      Copying blob d8190195889e skipped: already exists
      Copying blob c71d2589fba7 skipped: already exists
      Copying blob d4dc6e74b6ce skipped: already exists
      Copying blob 97da74cc6d8f skipped: already exists
      Copying blob b70007a560c9 done
      Copying config 3a62961e6e done
      Writing manifest to image destination
      Storing signatures
      3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde

      The release image that is set in the HostedCluster object must use the digest rather than the tag; for example, quay.io/openshift-release-dev/ocp-release@sha256:e3ba11bd1e5e8ea5a0b36a75791c90f29afb0fdbe4125be4e48f69c76a5c47a0.

  6. Create all of the objects that you defined in the YAML files by concatenating them into a file and applying them against the management cluster. To do so, enter the following command:

    $ oc apply -f 01-4.14-hosted_cluster-nodeport.yaml
    Example output
    NAME                                                  READY   STATUS    RESTARTS   AGE
    capi-provider-5b57dbd6d5-pxlqc                        1/1     Running   0          3m57s
    catalog-operator-9694884dd-m7zzv                      2/2     Running   0          93s
    cluster-api-f98b9467c-9hfrq                           1/1     Running   0          3m57s
    cluster-autoscaler-d7f95dd5-d8m5d                     1/1     Running   0          93s
    cluster-image-registry-operator-5ff5944b4b-648ht      1/2     Running   0          93s
    cluster-network-operator-77b896ddc-wpkq8              1/1     Running   0          94s
    cluster-node-tuning-operator-84956cd484-4hfgf         1/1     Running   0          94s
    cluster-policy-controller-5fd8595d97-rhbwf            1/1     Running   0          95s
    cluster-storage-operator-54dcf584b5-xrnts             1/1     Running   0          93s
    cluster-version-operator-9c554b999-l22s7              1/1     Running   0          95s
    control-plane-operator-6fdc9c569-t7hr4                1/1     Running   0          3m57s
    csi-snapshot-controller-785c6dc77c-8ljmr              1/1     Running   0          77s
    csi-snapshot-controller-operator-7c6674bc5b-d9dtp     1/1     Running   0          93s
    csi-snapshot-webhook-5b8584875f-2492j                 1/1     Running   0          77s
    dns-operator-6874b577f-9tc6b                          1/1     Running   0          94s
    etcd-0                                                3/3     Running   0          3m39s
    hosted-cluster-config-operator-f5cf5c464-4nmbh        1/1     Running   0          93s
    ignition-server-6b689748fc-zdqzk                      1/1     Running   0          95s
    ignition-server-proxy-54d4bb9b9b-6zkg7                1/1     Running   0          95s
    ingress-operator-6548dc758b-f9gtg                     1/2     Running   0          94s
    konnectivity-agent-7767cdc6f5-tw782                   1/1     Running   0          95s
    kube-apiserver-7b5799b6c8-9f5bp                       4/4     Running   0          3m7s
    kube-controller-manager-5465bc4dd6-zpdlk              1/1     Running   0          44s
    kube-scheduler-5dd5f78b94-bbbck                       1/1     Running   0          2m36s
    machine-approver-846c69f56-jxvfr                      1/1     Running   0          92s
    oauth-openshift-79c7bf44bf-j975g                      2/2     Running   0          62s
    olm-operator-767f9584c-4lcl2                          2/2     Running   0          93s
    openshift-apiserver-5d469778c6-pl8tj                  3/3     Running   0          2m36s
    openshift-controller-manager-6475fdff58-hl4f7         1/1     Running   0          95s
    openshift-oauth-apiserver-dbbc5cc5f-98574             2/2     Running   0          95s
    openshift-route-controller-manager-5f6997b48f-s9vdc   1/1     Running   0          95s
    packageserver-67c87d4d4f-kl7qh                        2/2     Running   0          93s

    When the hosted cluster is available, the output looks like the following example.

    Example output
    NAMESPACE   NAME         VERSION   KUBECONFIG                PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
    clusters    hosted-dual            hosted-admin-kubeconfig   Partial    True          False         The hosted control plane is available

Creating a NodePool object for the hosted cluster

A NodePool is a scalable set of worker nodes that is associated with a hosted cluster. NodePool machine architectures remain consistent within a specific pool and are independent of the machine architecture of the control plane.

Procedure
  1. Create a YAML file with the following information about the NodePool object, replacing values as necessary:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: NodePool
    metadata:
      creationTimestamp: null
      name: <hosted_cluster_name> (1)
      namespace: <hosted_cluster_namespace> (2)
    spec:
      arch: amd64
      clusterName: <hosted_cluster_name>
      management:
        autoRepair: false (3)
        upgradeType: InPlace (4)
      nodeDrainTimeout: 0s
      platform:
        type: Agent
      release:
        image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 (5)
      replicas: 2 (6)
    status:
      replicas: 2
    1 Replace <hosted_cluster_name> with your hosted cluster.
    2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    3 The autoRepair field is set to false because the node will not be re-created if it is removed.
    4 The upgradeType is set to InPlace, which indicates that the same bare metal node is reused during an upgrade.
    5 All of the nodes included in this NodePool are based on the following OKD version: 4.x.y-x86_64. Replace the <dns.base.domain.name> value with your DNS base domain name and the 4.x.y value with the supported OKD version you want to use.
    6 You can set the replicas value to 2 to create two node pool replicas in your hosted cluster.
  2. Create the NodePool object by entering the following command:

    $ oc apply -f 02-nodepool.yaml
    Example output
    NAMESPACE   NAME          CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION                              UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    clusters    hosted-dual   hosted    0                               False         False        4.x.y-x86_64

Creating an InfraEnv resource for the hosted cluster

The InfraEnv resource is an Assisted Service object that includes essential details, such as the pullSecretRef and the sshAuthorizedKey. Those details are used to create the Red Hat Enterprise Linux CoreOS (RHCOS) boot image that is customized for the hosted cluster.

You can host more than one InfraEnv resource, and each one can adopt certain types of hosts. For example, you might want to dedicate an InfraEnv resource to the hosts in your server farm that have greater RAM capacity.

Procedure
  1. Create a YAML file with the following information about the InfraEnv resource, replacing values as necessary:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: <hosted_cluster_name>
      namespace: <hosted_cluster_namespace>-<hosted_cluster_name>  (1) (2)
    spec:
      pullSecretRef: (3)
        name: pull-secret
      sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2eAAAADAQABAAABgQDk7ICaUe+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9eKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpe1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYweeDzmnKLMW1VtCe6nJzegWCufACTbxpNS7GvKtoHT/OVzw8AreXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1Amey8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= (4)
    1 Replace <hosted_cluster_name> with your hosted cluster.
    2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    3 The pullSecretRef refers to the secret in the same namespace as the InfraEnv resource that contains the pull secret.
    4 The sshAuthorizedKey represents the SSH public key that is placed in the boot image. The SSH key allows access to the worker nodes as the core user.
  2. Create the InfraEnv resource by entering the following command:

    $ oc apply -f 03-infraenv.yaml
    Example output
    NAMESPACE              NAME     ISO CREATED AT
    clusters-hosted-dual   hosted   2023-09-11T15:14:10Z

Creating worker nodes for the hosted cluster

If you are working on a bare metal platform, creating worker nodes is crucial to ensure that the details in the BareMetalHost are correctly configured.
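The following is a rough sketch of a BareMetalHost object and its BMC credentials secret for one worker. The names, BMC address, and MAC address are placeholders, and the exact spec fields depend on your hardware:

---
apiVersion: v1
kind: Secret
metadata:
  name: <hosted_cluster_name>-worker0-bmc-secret
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name>
data:
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: <hosted_cluster_name>-worker0
  namespace: <hosted_cluster_namespace>-<hosted_cluster_name>
  labels:
    infraenvs.agent-install.openshift.io: <hosted_cluster_name>
  annotations:
    inspect.metal3.io: disabled
spec:
  automatedCleaningMode: disabled
  bmc:
    address: redfish-virtualmedia://<bmc_address>/redfish/v1/Systems/<system_id>
    credentialsName: <hosted_cluster_name>-worker0-bmc-secret
    disableCertificateVerification: true
  bootMACAddress: <nic_mac_address>
  online: true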

If you are working with virtual machines, you can complete the following steps to create empty worker nodes for the Metal3 Operator to consume. To do so, you use the kcli tool.

Procedure
  1. If this is not your first attempt to create worker nodes, you must first delete your previous setup. To do so, delete the plan by entering the following command:

    $ kcli delete plan <hosted_cluster_name> (1)
    1 Replace <hosted_cluster_name> with the name of your hosted cluster.
    1. When you are prompted to confirm whether you want to delete the plan, type y.

    2. Confirm that you see a message stating that the plan was deleted.

  2. Create the virtual machines by entering the following commands:

    1. Enter the following command to create the first virtual machine:

      $ kcli create vm \
        -P start=False \(1)
        -P uefi_legacy=true \(2)
        -P plan=<hosted_cluster_name> \(3)
        -P memory=8192 -P numcpus=16 \(4)
        -P disks=[200,200] \(5)
        -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:01\"}"] \(6)
        -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 \
        -P name=<hosted_cluster_name>-worker0 (7)
      1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
      2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
      3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
      4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
      5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
      6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:02:13"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
      7 Replace <hosted_cluster_name> with the name of your hosted cluster.
    2. Enter the following command to create the second virtual machine:

      $ kcli create vm \
        -P start=False \ (1)
        -P uefi_legacy=true \ (2)
        -P plan=<hosted_cluster_name> \ (3)
        -P memory=8192 -P numcpus=16 \ (4)
        -P disks=[200,200] \ (5)
        -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:02\"}"] \ (6)
        -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 \
        -P name=<hosted_cluster_name>-worker1 (7)
      1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
      2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
      3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
      4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
      5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
      6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:02"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
      7 Replace <hosted_cluster_name> with the name of your hosted cluster.
    3. Enter the following command to create the third virtual machine:

      $ kcli create vm \
        -P start=False \ (1)
        -P uefi_legacy=true \ (2)
        -P plan=<hosted_cluster_name> \ (3)
        -P memory=8192 -P numcpus=16 \ (4)
        -P disks=[200,200] \ (5)
        -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:03\"}"] \ (6)
        -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 \
        -P name=<hosted_cluster_name>-worker2 (7)
      1 Include start=False if you do not want the virtual machine (VM) to automatically start upon creation.
      2 Include uefi_legacy=true to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
      3 Replace <hosted_cluster_name> with the name of your hosted cluster. The plan=<hosted_cluster_name> statement indicates the plan name, which identifies a group of machines as a cluster.
      4 Include the memory=8192 and numcpus=16 parameters to specify the resources for the VM, including the RAM and CPU.
      5 Include disks=[200,200] to indicate that you are creating two thin-provisioned disks in the VM.
      6 Include nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:03"}] to provide network details, including the network name to connect to, the type of network (ipv4, ipv6, or dual), and the MAC address of the primary interface.
      7 Replace <hosted_cluster_name> with the name of your hosted cluster.
  3. Enter the following command to restart the ksushy tool so that it detects the VMs that you added:

    $ systemctl restart ksushy
    Example output
    +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
    |         Name        | Status |         Ip        |                       Source                       |     Plan    | Profile |
    +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
    |    hosted-worker0   |  down  |                   |                                                    | hosted-dual |  kvirt  |
    |    hosted-worker1   |  down  |                   |                                                    | hosted-dual |  kvirt  |
    |    hosted-worker2   |  down  |                   |                                                    | hosted-dual |  kvirt  |
    +---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
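
    The listing shows the three worker VMs in the down state, waiting to be consumed. If you want to reproduce a listing in this format at any time, you can use the kcli list command; this assumes that the kcli tool is installed on the hypervisor host:

    $ kcli list vm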

Creating bare metal hosts for the hosted cluster

A bare metal host is an openshift-machine-api object that encompasses physical and logical details so that the Metal3 Operator can identify it. Those details are associated with other Assisted Service objects, known as agents.

Prerequisites

Before you create the bare metal host and destination nodes, you must have the destination machines ready.

Procedure

To create a bare metal host, complete the following steps:

  1. Create a YAML file with the following information:

    Because at least one secret holds the bare metal host credentials, you need to create at least two objects for each worker node: the secret and the BareMetalHost object.

    apiVersion: v1
    kind: Secret
    metadata:
      name: <hosted_cluster_name>-worker0-bmc-secret (1)
      namespace: <hosted_cluster_namespace>-<hosted_cluster_name> (2)
    data:
      password: YWRtaW4= (3)
      username: YWRtaW4= (4)
    type: Opaque
    ---
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: <hosted_cluster_name>-worker0
      namespace: <hosted_cluster_namespace>-<hosted_cluster_name> (2)
      labels:
        infraenvs.agent-install.openshift.io: <hosted_cluster_name> (5)
      annotations:
        inspect.metal3.io: disabled
        bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 (6)
    spec:
      automatedCleaningMode: disabled (7)
      bmc:
        disableCertificateVerification: true (8)
        address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 (9)
        credentialsName: <hosted_cluster_name>-worker0-bmc-secret (10)
      bootMACAddress: aa:aa:aa:aa:02:11 (11)
      online: true (12)
    1 Replace <hosted_cluster_name> with the name of your hosted cluster.
    2 Replace <hosted_cluster_name> with the name of your hosted cluster. Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace.
    3 Specify the password of the baseboard management controller (BMC) in Base64 format.
    4 Specify the user name of the BMC in Base64 format.
    5 Replace <hosted_cluster_name> with the name of your hosted cluster. The infraenvs.agent-install.openshift.io label serves as the link between the Assisted Installer and the BareMetalHost objects.
    6 Replace <hosted_cluster_name> with the name of your hosted cluster. The bmac.agent-install.openshift.io/hostname annotation represents the node name that is adopted during deployment.
    7 The automatedCleaningMode field prevents the node from being erased by the Metal3 Operator.
    8 The disableCertificateVerification field is set to true to bypass certificate validation from the client.
    9 Replace <hosted_cluster_name> with the name of your hosted cluster. The address field denotes the BMC address of the worker node.
    10 Replace <hosted_cluster_name> with the name of your hosted cluster. The credentialsName field points to the secret where the user name and password credentials are stored.
    11 The bootMACAddress field indicates the interface MAC address that the node boots from.
    12 The online field defines the state of the node after the BareMetalHost object is created.
  2. Deploy the BareMetalHost object by entering the following command:

    $ oc apply -f 04-bmh.yaml

    During the process, you can view the following output:

    • This output indicates that the process is trying to reach the nodes:

      Example output
      NAMESPACE         NAME             STATE         CONSUMER   ONLINE   ERROR   AGE
      clusters-hosted   hosted-worker0   registering              true             2s
      clusters-hosted   hosted-worker1   registering              true             2s
      clusters-hosted   hosted-worker2   registering              true             2s
    • This output indicates that the nodes are starting:

      Example output
      NAMESPACE         NAME             STATE          CONSUMER   ONLINE   ERROR   AGE
      clusters-hosted   hosted-worker0   provisioning              true             16s
      clusters-hosted   hosted-worker1   provisioning              true             16s
      clusters-hosted   hosted-worker2   provisioning              true             16s
    • This output indicates that the nodes started successfully:

      Example output
      NAMESPACE         NAME             STATE         CONSUMER   ONLINE   ERROR   AGE
      clusters-hosted   hosted-worker0   provisioned              true             67s
      clusters-hosted   hosted-worker1   provisioned              true             67s
      clusters-hosted   hosted-worker2   provisioned              true             67s
  3. After the nodes start, notice the agents in the namespace, as shown in this example (to list the agents yourself, see the commands that follow this procedure):

    Example output
    NAMESPACE         NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411             true       auto-assign
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412             true       auto-assign
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413             true       auto-assign

    The agents represent nodes that are available for installation. To assign the nodes to a hosted cluster, scale up the node pool.
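
    To review the bare metal host states and the agents yourself, you can run commands similar to the following. This is a minimal sketch that assumes the namespace and resource names used in this example:

    $ oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get bmh -w
    $ oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get agents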

Scaling up the node pool

After you create the bare metal hosts, their statuses change from Registering to Provisioning to Provisioned. The nodes boot with the live ISO of the agent and a default pod that is named agent. That agent is responsible for receiving instructions from the Assisted Service Operator to install the OKD payload.
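
Before you scale up the node pool, you can optionally confirm the current number of replicas. The following command is a minimal sketch that assumes the namespace and node pool names used in this example:

  $ oc -n <hosted_cluster_namespace> get nodepool <hosted_cluster_name>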

Procedure
  1. To scale up the node pool, enter the following command:

    $ oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> --replicas 3

    where:

    • <hosted_cluster_namespace> is the name of the hosted cluster namespace.

    • <hosted_cluster_name> is the name of the hosted cluster.

  2. After the scaling process is complete, notice that the agents are assigned to a hosted cluster:

    Example output
    NAMESPACE         NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411   hosted    true       auto-assign
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412   hosted    true       auto-assign
    clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413   hosted    true       auto-assign
  3. Also notice that the node pool replicas are set:

    Example output
    NAMESPACE   NAME     CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION       UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    clusters    hosted   hosted    3                               False         False        4.x.y-x86_64                                     Minimum availability requires 3 replicas, current 0 available

    Replace 4.x.y with the supported OKD version that you want to use.

  4. Wait until the nodes join the cluster. During the process, the agents provide updates on their stage and status.
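
    To monitor the progress, you can watch the agents and the node pool until the nodes finish joining. These commands are a minimal sketch that assumes the namespace and resource names used in this example:

    $ oc -n <hosted_cluster_namespace>-<hosted_cluster_name> get agents -w
    $ oc -n <hosted_cluster_namespace> get nodepool <hosted_cluster_name> -w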