Preparing to install with the Agent-based Installer

About the Agent-based Installer

The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image.

The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment.

Understanding Agent-based Installer

As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments.

The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts.

The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests:

Preferred:

  • install-config.yaml

  • agent-config.yaml

or

Optional: ZTP manifests

  • cluster-manifests/cluster-deployment.yaml

  • cluster-manifests/agent-cluster-install.yaml

  • cluster-manifests/pull-secret.yaml

  • cluster-manifests/infraenv.yaml

  • cluster-manifests/cluster-image-set.yaml

  • cluster-manifests/nmstateconfig.yaml

  • mirror/registries.conf

  • mirror/ca-bundle.crt
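
After you place the manifests in an assets directory, you generate the ISO by running the subcommand against that directory. The following is a minimal sketch using the preferred manifests; the ocp-install directory name is an example:

  $ mkdir ocp-install
  $ cp install-config.yaml agent-config.yaml ocp-install/
  $ openshift-install agent create image --dir ocp-install

The command consumes the manifests and writes the bootable agent ISO into the assets directory.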

Agent-based Installer workflow

One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed.

Figure 1. Node installation workflow
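
You can track the progress of this workflow from the machine where you generated the ISO. The following is a minimal sketch, assuming ocp-install is the assets directory that was used to create the image:

  $ openshift-install agent wait-for bootstrap-complete --dir ocp-install
  $ openshift-install agent wait-for install-complete --dir ocp-install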

You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies:

  • A single-node OpenShift Container Platform cluster (SNO): A node that is both a master and a worker.

  • A three-node OpenShift Container Platform cluster: A compact cluster that has three master nodes that are also worker nodes.

  • A highly available OpenShift Container Platform cluster (HA): Three master nodes with any number of worker nodes.
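
The topology is determined by the replica counts in install-config.yaml. For example, the following fragment, a sketch of only the relevant fields from the full sample later in this section, describes a compact three-node cluster:

  controlPlane:
    name: master
    replicas: 3
  compute:
  - name: worker
    replicas: 0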

Recommended cluster resources for the following topologies:

Table 1. Recommended cluster resources

Topology              Number of master nodes   Number of worker nodes   vCPU      Memory        Storage
Single-node cluster   1                        0                        8 vCPUs   16GB of RAM   120GB
Compact cluster       3                        0 or 1                   8 vCPUs   16GB of RAM   120GB
HA cluster            3                        2 and above              8 vCPUs   16GB of RAM   120GB

The following platforms are supported:

  • baremetal

  • vsphere

  • none

    The none option is supported for only single-node OpenShift clusters with an OVNKubernetes network type.

About FIPS compliance

For many OpenShift Container Platform customers, some level of regulatory readiness or compliance is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes.

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. The use of FIPS validated or Modules In Process cryptographic libraries is supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.
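
For example, you can confirm that the RHEL machine that runs the installation program is operating in FIPS mode before you generate the agent ISO. The following is a minimal check using the fips-mode-setup tool available on RHEL 8 and later; the output line shows the expected result:

  $ fips-mode-setup --check
  FIPS mode is enabled.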

Configuring FIPS through the Agent-based Installer

During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines.

You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml:

  1. You must set the value of the fips field to True in the install-config.yaml file:

    Sample install-config.yaml file
    apiVersion: v1
    baseDomain: test.example.com
    metadata:
      name: sno-cluster
    fips: True
  2. Optional: If you are using the ZTP manifests, you must set the value of fips to true in the agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file:

    Sample agent-cluster-install.yaml file
    apiVersion: extensions.hive.openshift.io/v1beta1
    kind: AgentClusterInstall
    metadata:
      annotations:
        agent-install.openshift.io/install-config-overrides: '{"fips": true}'
      name: sno-cluster
      namespace: sno-cluster-test

About networking

The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically.

In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds.

DHCP

Preferred method: install-config.yaml and agent-config.yaml

You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank:

Sample agent-config.yaml file
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: sno-cluster
rendezvousIP: 192.168.111.80 (1)
1 The IP address for the rendezvous host.

Static networking

  1. Preferred method: install-config.yaml and agent-config.yaml

    Sample agent-config.yaml file
      cat > agent-config.yaml << EOF
      apiVersion: v1alpha1
      kind: AgentConfig
      metadata:
        name: sno-cluster
      rendezvousIP: 192.168.111.80 (1)
      hosts:
        - hostname: master-0
          interfaces:
            - name: eno1
              macAddress: 00:ef:44:21:e6:a5 (2)
          networkConfig:
            interfaces:
              - name: eno1
                type: ethernet
                state: up
                mac-address: 00:ef:44:21:e6:a5
                ipv4:
                  enabled: true
                  address:
                    - ip: 192.168.111.80 (3)
                      prefix-length: 23 (4)
                  dhcp: false
            dns-resolver:
              config:
                server:
                  - 192.168.111.1 (5)
            routes:
              config:
                - destination: 0.0.0.0/0
                  next-hop-address: 192.168.111.1 (6)
                  next-hop-interface: eno1
                  table-id: 254
      EOF
    1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields.
    2 The MAC address of an interface on the host, used to determine which host to apply the configuration to.
    3 The static IP address of the target bare metal host.
    4 The static IP address’s subnet prefix for the target bare metal host.
    5 The DNS server for the target bare metal host.
    6 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface.
  2. Optional method: ZTP manifests

    The optional method of the ZTP custom resources comprises six custom resources. You can configure static IP addresses in the nmstateconfig.yaml file.

    apiVersion: agent-install.openshift.io/v1beta1
    kind: NMStateConfig
    metadata:
      name: master-0
      namespace: openshift-machine-api
      labels:
        cluster0-nmstate-label-name: cluster0-nmstate-label-value
    spec:
      config:
        interfaces:
          - name: eth0
            type: ethernet
            state: up
            mac-address: 52:54:01:aa:aa:a1
            ipv4:
              enabled: true
              address:
                - ip: 192.168.122.2 (1)
                  prefix-length: 23 (2)
              dhcp: false
        dns-resolver:
          config:
            server:
              - 192.168.122.1 (3)
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 192.168.122.1 (4)
              next-hop-interface: eth0
              table-id: 254
      interfaces:
        - name: eth0
          macAddress: 52:54:01:aa:aa:a1 (5)
    1 The static IP address of the target bare metal host.
    2 The static IP address’s subnet prefix for the target bare metal host.
    3 The DNS server for the target bare metal host.
    4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface.
    5 The MAC address of an interface on the host, used to determine which host to apply the configuration to.

The rendezvous IP is chosen from the static IP addresses specified in the config fields.

Example: Bonds and VLAN interface node network configuration

The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces.

  apiVersion: v1alpha1
  kind: AgentConfig
  rendezvousIP: 10.10.10.14
  hosts:
    - hostname: master0
      role: master
      interfaces:
       - name: enp0s4
         macAddress: 00:21:50:90:c0:10
       - name: enp0s5
         macAddress: 00:21:50:90:c0:20
      networkConfig:
        interfaces:
          - name: bond0.300 (1)
            type: vlan (2)
            state: up
            vlan:
              base-iface: bond0
              id: 300
            ipv4:
              enabled: true
              address:
                - ip: 10.10.10.14
                  prefix-length: 24
              dhcp: false
          - name: bond0 (1)
            type: bond (3)
            state: up
            mac-address: 00:21:50:90:c0:10 (4)
            ipv4:
              enabled: false
            ipv6:
              enabled: false
            link-aggregation:
              mode: active-backup (5)
              options:
                miimon: "150" (6)
              port:
               - enp0s4
               - enp0s5
        dns-resolver: (7)
          config:
            server:
              - 10.10.10.11
              - 10.10.10.12
        routes:
          config:
            - destination: 0.0.0.0/0
              next-hop-address: 10.10.10.10 (8)
              next-hop-interface: bond0.300 (9)
              table-id: 254
1 Name of the interface.
2 The type of interface. This example creates a VLAN.
3 The type of interface. This example creates a bond.
4 The MAC address of the interface.
5 The mode attribute specifies the bonding mode.
6 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds.
7 Optional: Specifies the search and server settings for the DNS server.
8 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface.
9 Next hop interface for the node traffic.

Sample install-config.yaml file for bare metal

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- name: worker
  replicas: 0 (3)
controlPlane: (2)
  name: master
  replicas: 1 (4)
metadata:
  name: sno-cluster (5)
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 (6)
    hostPrefix: 23 (7)
  networkType: OVNKubernetes (8)
  serviceNetwork: (9)
  - 172.30.0.0/16
platform:
  none: {} (10)
fips: false (11)
pullSecret: '{"auths": ...}' (12)
sshKey: 'ssh-ed25519 AAAA...' (13)
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO.

If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines.

4 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
5 The cluster name that you specified in your DNS records.
6 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic.

The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range.

7 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.
8 The cluster network plugin to install. The supported values are OVNKubernetes (default value) and OpenShiftSDN.
9 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic.
10 You must set the platform to none for a single-node cluster. You can set the platform to either vsphere or baremetal for multi-node clusters.

If you set the platform to vsphere or baremetal, you can configure IP address endpoints for cluster nodes in three ways:

  • IPv4

  • IPv6

  • IPv4 and IPv6 in parallel (dual-stack)

Example of dual-stack networking
networking:
  clusterNetwork:
    - cidr: 172.21.0.0/16
      hostPrefix: 23
    - cidr: fd02::/48
      hostPrefix: 64
  machineNetwork:
    - cidr: 192.168.11.0/16
    - cidr: 2001:DB8::/32
  serviceNetwork:
    - 172.22.0.0/16
    - fd03::/112
  networkType: OVNKubernetes
platform:
  baremetal:
    apiVIPs:
    - 192.168.11.3
    - 2001:DB8::4
    ingressVIPs:
    - 192.168.11.4
    - 2001:DB8::5
11 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64, ppc64le, and s390x architectures.

12 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
13 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
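
For example, you can create a dedicated key pair and load it into an ssh-agent process before you write the install-config.yaml file. This is a minimal sketch; the key file name is an example:

  $ ssh-keygen -t ed25519 -N '' -f ~/.ssh/agent_ed25519
  $ eval "$(ssh-agent -s)"
  $ ssh-add ~/.ssh/agent_ed25519

You then set the sshKey field to the contents of ~/.ssh/agent_ed25519.pub.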

Validation checks before agent ISO creation

The Agent-based Installer performs validation checks on the user-defined YAML files before the ISO is created. Once the validations pass, the agent ISO is created.

install-config.yaml
  • The baremetal, vsphere, and none platforms are supported.

  • If none is used as a platform, the number of control plane replicas must be 1 and the total number of worker replicas must be 0.

  • The networkType parameter must be OVNKubernetes in the case of none platform.

  • apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms.

  • Some host-specific fields in the bare metal platform configuration that have equivalents in agent-config.yaml file are ignored. A warning message is logged if these fields are set.

agent-config.yaml
  • Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address.

  • At least one interface must be defined for each host.

  • World Wide Name (WWN) vendor extensions are not supported in root device hints.

  • The role parameter in the host object must have a value of either master or worker.

ZTP manifests

agent-cluster-install.yaml
  • For IPv6, the only supported value for the networkType parameter is OVNKubernetes. The OpenShiftSDN value can be used only for IPv4.

cluster-image-set.yaml
  • The ReleaseImage parameter must match the release defined in the installer.
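
As a sketch, a minimal cluster-image-set.yaml pins the release that the generated ISO deploys. The release image pullspec shown here is an example and must match the release of your openshift-install binary:

  apiVersion: hive.openshift.io/v1
  kind: ClusterImageSet
  metadata:
    name: openshift-4.12
  spec:
    releaseImage: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64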

About root device hints

The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
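
For example, the following fragment combines two hints so that the installer selects only a non-rotational disk with a capacity of at least 120 gigabytes. This is a sketch; any combination of the subfields in Table 2 follows the same all-hints-must-match rule:

  rootDeviceHints:
    minSizeGigabytes: 120
    rotational: false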

Table 2. Subfields

deviceName
    A string containing a Linux device name like /dev/vda. The hint must match the actual value exactly.

hctl
    A string containing a SCSI bus address like 0:0:0:0. The hint must match the actual value exactly.

model
    A string containing a vendor-specific device identifier. The hint can be a substring of the actual value.

vendor
    A string containing the name of the vendor or manufacturer of the device. The hint can be a substring of the actual value.

serialNumber
    A string containing the device serial number. The hint must match the actual value exactly.

minSizeGigabytes
    An integer representing the minimum size of the device in gigabytes.

wwn
    A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION, then you must use this value to specify the wwn subfield.

rotational
    A boolean indicating whether the device should be a rotating disk (true) or not (false).

Example usage
     - name: master-0
       role: master
       rootDeviceHints:
         deviceName: "/dev/sda"