Preparing the hub cluster for ZTP

To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts.
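
For example, you can mirror the release images with the oc adm release mirror command. The following is a minimal sketch only; the registry host mirror.example.com:5000, the ocp4 repository, and the pull secret path are placeholder values:

    # Mirror the OpenShift 4.14.1 release payload into a disconnected registry (placeholder values)
    $ oc adm release mirror -a ~/pull-secret.json \
        --from=quay.io/openshift-release-dev/ocp-release:4.14.1-x86_64 \
        --to=mirror.example.com:5000/ocp4 \
        --to-release-image=mirror.example.com:5000/ocp4:4.14.1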

Telco RAN DU 4.14 validated software components

The Red Hat telco RAN DU 4.14 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters.

Table 1. Telco RAN DU managed cluster validated software components

  Component                 Software version
  Managed cluster version   4.14
  Cluster Logging Operator  5.7
  Local Storage Operator    4.14
  PTP Operator              4.14
  SRIOV Operator            4.14
  Node Tuning Operator      4.14
  Logging Operator          4.14
  SRIOV-FEC Operator        2.7

Table 2. Hub cluster validated software components

  Component                                    Software version
  Hub cluster version                          4.14
  GitOps ZTP plugin                            4.14
  Red Hat Advanced Cluster Management (RHACM)  2.9, 2.10
  Red Hat OpenShift GitOps                     1.9, 1.10
  Topology Aware Lifecycle Manager (TALM)      4.14

Recommended hub cluster specifications and managed cluster limits for GitOps ZTP

With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment.

In real-world situations, the scaling limit for the number of clusters that you can manage varies depending on several factors affecting the hub cluster. For example:

Hub cluster resources

Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate.

Hub cluster storage

The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage.

Network bandwidth and latency

Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters.

Managed cluster size and complexity

The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster.

Number of managed policies

The number of policies managed by the hub cluster, multiplied by the number of managed clusters bound to those policies, is an important factor in determining how many clusters can be managed.

Monitoring and management workloads

RHACM continuously monitors and manages the managed clusters. The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters.

RHACM version and configuration

Different versions of RHACM can have varying performance characteristics and resource requirements. Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster.

Use the following representative configuration and network specifications to develop your own hub cluster and network specifications.

The following guidelines are based on internal lab benchmark testing only and do not represent complete bare-metal host specifications.

Table 3. Representative three-node hub cluster machine specifications

  Requirement                              Description
  OpenShift Container Platform             version 4.13
  RHACM                                    version 2.7
  Topology Aware Lifecycle Manager (TALM)  version 4.13
  Server hardware                          3 x Dell PowerEdge R650 rack servers
  NVMe hard disks                          • 50 GB disk for /var/lib/etcd
                                           • 2.9 TB disk for /var/lib/containers
  SSD hard disks                           • 1 SSD split into 15 × 200 GB thin-provisioned logical volumes provisioned as PV CRs
                                           • 1 SSD serving as an extra large PV resource
  Number of applied DU profile policies    5

The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing.

Table 4. Simulated lab environment network specifications

  Specification                  Description
  Round-trip time (RTT) latency  50 ms
  Packet loss                    0.02% packet loss
  Network bandwidth limit        20 Mbps
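
If you want to approximate these conditions when testing your own environment, Linux traffic control can simulate them. The following is an illustrative sketch only, assuming a hypothetical egress interface eth0; delaying one side's egress by 50 ms adds roughly 50 ms to the round trip:

    # Simulate ~50 ms RTT, 0.02% packet loss, and a 20 Mbps bandwidth cap on eth0
    $ sudo tc qdisc add dev eth0 root netem delay 50ms loss 0.02% rate 20mbit

    # Remove the simulated impairment when finished
    $ sudo tc qdisc del dev eth0 root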

Installing GitOps ZTP in a disconnected environment

Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters.

Prerequisites
  • You have installed the OpenShift Container Platform CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have configured a disconnected mirror registry for use in the cluster.

    The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry.

Procedure
  1. Install RHACM on the hub cluster.

  2. Install Red Hat OpenShift GitOps and TALM on the hub cluster.

Adding RHCOS ISO and RootFS images to the disconnected mirror host

Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images.

Prerequisites
  • Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.

The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type.
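
If you are unsure which image names are published for your minor version, you can browse the dependency directory on the mirror. A quick illustrative check, assuming the directory listing remains available over HTTPS:

    # List the RHCOS artifact names published for OpenShift Container Platform 4.14.1
    $ curl -s https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/4.14.1/ | grep -o 'rhcos-[^"<]*' | sort -u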

Procedure
  1. Log in to the mirror host.

  2. Obtain the RHCOS ISO and RootFS images from mirror.openshift.com, for example:

    1. Export the required image names and OpenShift Container Platform version as environment variables:

      $ export ISO_IMAGE_NAME=<iso_image_name> (1)
      $ export ROOTFS_IMAGE_NAME=<rootfs_image_name> (2)
      $ export OCP_VERSION=<ocp_version> (3)
      1 ISO image name, for example, rhcos-4.14.1-x86_64-live.x86_64.iso
      2 RootFS image name, for example, rhcos-4.14.1-x86_64-live-rootfs.x86_64.img
      3 OpenShift Container Platform version, for example, 4.14.1
    2. Download the required images:

      $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/${OCP_VERSION}/${ISO_IMAGE_NAME} -O /var/www/html/${ISO_IMAGE_NAME}
      $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.14/${OCP_VERSION}/${ROOTFS_IMAGE_NAME} -O /var/www/html/${ROOTFS_IMAGE_NAME}
Verification steps
  • Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example:

    $ wget http://$(hostname)/${ISO_IMAGE_NAME}
    Example output
    Saving to: rhcos-4.14.1-x86_64-live.x86_64.iso
    rhcos-4.14.1-x86_64-live.x86_64.iso-  11%[====>    ]  10.01M  4.71MB/s

Enabling the assisted service

Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on the RHACM hub cluster. After that, you need to configure the Provisioning resource to watch all namespaces and to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server.
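
Before proceeding, you can confirm that the assisted service is running. A quick check, assuming the default multicluster-engine namespace:

    $ oc get pods -n multicluster-engine | grep assisted-service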

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the hub cluster as a user with cluster-admin privileges.

  • You have RHACM with MultiClusterHub enabled.

Procedure
  1. Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the central infrastructure management service.
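
    For example, watching all namespaces can be enabled with a patch similar to the following sketch; the Provisioning CR is typically named provisioning-configuration:

    $ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true}}'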

  2. Update the AgentServiceConfig CR by running the following command:

    $ oc edit AgentServiceConfig
  3. Add the following entry to the items.spec.osImages field in the CR:

    - cpuArchitecture: x86_64
      openshiftVersion: "4.14"
      rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img
      url: https://<host>/<path>/rhcos-live.x86_64.iso

    where:

    <host>

    Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server.

    <path>

    Is the path to the image on the target mirror registry.

    Save and quit the editor to apply the changes.
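
    You can confirm that the new entry is present. A quick check, assuming the default resource name agent:

    $ oc get agentserviceconfig agent -o jsonpath='{.spec.osImages}'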

Configuring the hub cluster to use a disconnected mirror registry

You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment.

Prerequisites
  • You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.8 installed.

  • You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository.

If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
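
One way to inspect the certificate chain that the HTTP server presents is with openssl. An illustrative sketch, using the hypothetical host mirror-http-server.example.com:

    # Display the server certificate chain to confirm it is signed by a trusted authority
    $ openssl s_client -connect mirror-http-server.example.com:443 -showcerts </dev/null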

Procedure
  1. Create a ConfigMap containing the mirror registry config:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: assisted-installer-mirror-config
      namespace: multicluster-engine (1)
      labels:
        app: assisted-service
    data:
      ca-bundle.crt: | (2)
        -----BEGIN CERTIFICATE-----
        <certificate_contents>
        -----END CERTIFICATE-----
    
      registries.conf: | (3)
        unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
    
        [[registry]]
           prefix = ""
           location = "quay.io/example-repository" (4)
           mirror-by-digest-only = true
    
           [[registry.mirror]]
           location = "mirror1.registry.corp.com:5000/example-repository" (5)
    1 The ConfigMap namespace must be set to multicluster-engine.
    2 The mirror registry’s certificate that is used when creating the mirror registry.
    3 The configuration file for the mirror registry. The mirror registry configuration adds mirror information to the /etc/containers/registries.conf file in the discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry.
    4 The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section.
    5 The registries defined in the registries.conf file must be scoped by repository, not by registry. In this example, both the quay.io/example-repository and the mirror1.registry.corp.com:5000/example-repository repositories are scoped by the example-repository repository.

    This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below:

    Example output
    apiVersion: agent-install.openshift.io/v1beta1
    kind: AgentServiceConfig
    metadata:
      name: agent
      namespace: multicluster-engine (1)
    spec:
      databaseStorage:
        volumeName: <db_pv_name>
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: <db_storage_size>
      filesystemStorage:
        volumeName: <fs_pv_name>
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: <fs_storage_size>
      mirrorRegistryRef:
        name: assisted-installer-mirror-config (2)
      osImages:
        - openshiftVersion: <ocp_version>
          url: <iso_url> (3)
    1 Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace
    2 Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR
    3 Set the URL for the ISO hosted on the httpd server
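
    Save the ConfigMap to a file and apply it to the hub cluster, for example (the file name mirror-config.yaml is a placeholder):

    $ oc apply -f mirror-config.yaml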

A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network.

Configuring the hub cluster to use unauthenticated registries

You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries do not require authentication to access and download images.

Prerequisites
  • You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster.

  • You have installed the OpenShift Container Platform CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have configured an unauthenticated registry for use with the hub cluster.

Procedure
  1. Update the AgentServiceConfig custom resource (CR) by running the following command:

    $ oc edit AgentServiceConfig agent
  2. Add the unauthenticatedRegistries field in the CR:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: AgentServiceConfig
    metadata:
      name: agent
    spec:
      unauthenticatedRegistries:
      - example.registry.com
      - example.registry2.com
      ...

    Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation.

Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries. Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default values with the specified value. The PUBLIC_CONTAINER_REGISTRIES defaults are quay.io and registry.svc.ci.openshift.org.

Verification

Verify that you can access the newly added registry from the hub cluster by running the following commands:

  1. Open a debug shell prompt to the hub cluster:

    $ oc debug node/<node_name>
  2. Test access to the unauthenticated registry by running the following command:

    sh-4.4# podman login -u kubeadmin -p $(oc whoami -t) <unauthenticated_registry>

    where:

    <unauthenticated_registry>

    Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000.

    Example output
    Login Succeeded!

Configuring the hub cluster with ArgoCD

You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP).

Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs.

Prerequisites
  • You have an OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed.

  • You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure.

Procedure
  1. Prepare the ArgoCD pipeline configuration:

    1. Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository".

    2. Configure access to the repository using the ArgoCD UI. Under Settings configure the following:

      • Repositories - Add the repository connection information and credentials. The URL must end in .git, for example, https://repo.example.com/repo.git.

      • Certificates - Add the public certificate for the repository, if needed.

    3. Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml, based on your Git repository (see the example snippet after this list):

      • Update the URL to point to the Git repository. The URL ends with .git, for example, https://repo.example.com/repo.git.

      • The targetRevision indicates which Git repository branch to monitor.

      • path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively.
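
      For reference, the fields to update live under spec.source in each Application CR. The following is a minimal sketch with placeholder values:

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      spec:
        source:
          repoURL: https://repo.example.com/repo.git (1)
          targetRevision: main (2)
          path: siteconfig (3)
      1 Your Git repository URL, ending in .git.
      2 The Git branch to monitor.
      3 The path to the SiteConfig or PolicyGenTemplate CRs, respectively.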

  2. To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment.

    1. Select the multicluster-operators-subscription image that matches your RHACM version.

      Table 5. multicluster-operators-subscription image versions

      OpenShift Container Platform versions: 4.14, 4.15, 4.16
        RHACM version: 2.8, 2.9
        MCE version: 2.8, 2.9
        MCE RHEL version: RHEL 8
        MCE images:
          • registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v2.8
          • registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v2.9

      OpenShift Container Platform versions: 4.14, 4.15, 4.16
        RHACM version: 2.10
        MCE version: 2.10
        MCE RHEL version: RHEL 9
        MCE image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10

      The version of the multicluster-operators-subscription image should match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images.

    2. Add the following configuration to the out/argocd/deployment/argocd-openshift-gitops-patch.json file:

      {
        "args": [
          "-c",
          "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" (1)
        ],
        "command": [
          "/bin/bash"
        ],
        "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10",  (2) (3)
        "name": "policy-generator-install",
        "imagePullPolicy": "Always",
        "volumeMounts": [
          {
            "mountPath": "/.config",
            "name": "kustomize"
          }
        ]
      }
      1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version.
      2 Match the multicluster-operators-subscription image to the RHACM version.
      3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment.
    3. Patch the ArgoCD instance. Run the following command:

      $ oc patch argocd openshift-gitops \
      -n openshift-gitops --type=merge \
      --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json
  3. In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed cluster pods that are responsible for this add-on. Run the following command:

    $ oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json
  4. Apply the pipeline configuration to your hub cluster by running the following command:

    $ oc apply -k out/argocd/deployment
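
Verification

As a quick check, list the ArgoCD applications created by the pipeline configuration; you should see applications corresponding to the clusters and policies configurations:

    $ oc get applications.argoproj.io -n openshift-gitops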

Preparing the GitOps ZTP site configuration repository

Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data.

Prerequisites
  • You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs).

  • You have deployed the managed clusters using GitOps ZTP.

Procedure
  1. Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs.

    Keep SiteConfig and PolicyGenTemplate CRs in separate directories. Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory.
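
    For example, a minimal kustomization.yaml in each directory might look like the following sketch; the file names are placeholders, and the SiteConfig and PolicyGenTemplate CRs are listed under generators:

    # siteconfig/kustomization.yaml
    generators:
    - example-sno.yaml

    # policygentemplates/kustomization.yaml
    generators:
    - common-ranGen.yaml
    resources:
    - ns.yaml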

  2. Export the argocd directory from the ztp-site-generate container image using the following commands:

    $ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14
    $ mkdir -p ./out
    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.14 extract /home/ztp --tar | tar x -C ./out
  3. Check that the out directory contains the following subdirectories:

    • out/extra-manifest contains the source CR files that SiteConfig uses to generate the extra manifests ConfigMap.

    • out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.

    • out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.

    • out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration.

  4. Copy the out/source-crs folder and contents to the PolicyGenTemplate directory.

  5. The out/extra-manifest directory contains the reference manifests for a RAN DU cluster. Copy the out/extra-manifest directory into the SiteConfig folder. This directory should contain CRs from the ztp-site-generate container only. Do not add user-provided CRs here. If you want to work with user-provided CRs you must create another directory for that content. For example:

    example/
      ├── policygentemplates
      │   ├── kustomization.yaml
      │   └── source-crs/
      └── siteconfig
            ├── extra-manifests
            └── kustomization.yaml
  6. Commit the directory structure and the kustomization.yaml files and push to your Git repository. The initial push to Git should include the kustomization.yaml files.
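
    For example, an illustrative initial commit and push (the branch name main is an assumption):

    $ git add siteconfig/ policygentemplates/
    $ git commit -m "Add initial ZTP site configuration and kustomization files"
    $ git push origin main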

You can use the directory structure under out/argocd/example as a reference for the structure and content of your Git repository. That structure includes SiteConfig and PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using.

For all cluster types, you must:

  • Add the source-crs subdirectory to the policygentemplate directory.

  • Add the extra-manifests directory to the siteconfig directory.

The following example describes a set of CRs for a network of single-node clusters:

example/
  ├── policygentemplates
  │   ├── common-ranGen.yaml
  │   ├── example-sno-site.yaml
  │   ├── group-du-sno-ranGen.yaml
  │   ├── group-du-sno-validator-ranGen.yaml
  │   ├── kustomization.yaml
  │   ├── source-crs/
  │   └── ns.yaml
  └── siteconfig
        ├── example-sno.yaml
        ├── extra-manifests/ (1)
        ├── custom-manifests/ (2)
        ├── KlusterletAddonConfigOverride.yaml
        └── kustomization.yaml
1 Contains reference manifests from the ztp-container.
2 Contains custom manifests.

Preparing the GitOps ZTP site configuration repository for version independence

You can use GitOps ZTP to manage source custom resources (CRs) for managed clusters that are running different versions of OpenShift Container Platform. This means that the version of OpenShift Container Platform running on the hub cluster can be independent of the version running on the managed clusters.

Procedure
  1. Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs.

  2. Within the PolicyGenTemplate directory, create a directory for each OpenShift Container Platform version you want to make available. For each version, create the following resources:

    • kustomization.yaml file that explicitly includes the files in that directory

    • source-crs directory to contain reference CR configuration files from the ztp-site-generate container

      If you want to work with user-provided CRs, you must create a separate directory for them.

  3. In the /siteconfig directory, create a subdirectory for each OpenShift Container Platform version you want to make available. For each version, create at least one directory for reference CRs to be copied from the container. There is no restriction on the naming of directories or on the number of reference directories. If you want to work with custom manifests, you must create a separate directory for them.

    The following example describes a structure using user-provided manifests and CRs for different versions of OpenShift Container Platform:

    ├── policygentemplates
    │   ├── kustomization.yaml (1)
    │   ├── version_4.13 (2)
    │   │   ├── common-ranGen.yaml
    │   │   ├── group-du-sno-ranGen.yaml
    │   │   ├── group-du-sno-validator-ranGen.yaml
    │   │   ├── helix56-v413.yaml
    │   │   ├── kustomization.yaml (3)
    │   │   ├── ns.yaml
    │   │   └── source-crs/ (4)
    │   │      └── reference-crs/ (5)
    │   │      └── custom-crs/ (6)
    │   └── version_4.14 (2)
    │       ├── common-ranGen.yaml
    │       ├── group-du-sno-ranGen.yaml
    │       ├── group-du-sno-validator-ranGen.yaml
    │       ├── helix56-v414.yaml
    │       ├── kustomization.yaml (3)
    │       ├── ns.yaml
    │       └── source-crs/ (4)
    │         └── reference-crs/ (5)
    │         └── custom-crs/ (6)
    └── siteconfig
        ├── kustomization.yaml
        ├── version_4.13
        │   ├── helix56-v413.yaml
        │   ├── kustomization.yaml
        │   ├── extra-manifest/ (7)
        │   └── custom-manifest/ (8)
        └── version_4.14
            ├── helix57-v414.yaml
            ├── kustomization.yaml
            ├── extra-manifest/ (7)
            └── custom-manifest/ (8)
    1 Create a top-level kustomization YAML file.
    2 Create the version-specific directories within the custom /policygentemplates directory.
    3 Create a kustomization.yaml file for each version.
    4 Create a source-crs directory for each version to contain reference CRs from the ztp-site-generate container.
    5 Create the reference-crs directory for policy CRs that are extracted from the ZTP container.
    6 Optional: Create a custom-crs directory for user-provided CRs.
    7 Create a directory within the custom /siteconfig directory to contain extra manifests from the ztp-site-generate container.
    8 Create a folder to hold user-provided manifests.

    In the previous example, each version subdirectory in the custom /siteconfig directory contains two further subdirectories, one containing the reference manifests copied from the container, the other for custom manifests that you provide. The names assigned to those directories are examples. If you use user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing user-provided CRs.

  4. Edit the SiteConfig CR to include the search paths of any directories you have created. The first directory that is listed under extraManifests.searchPaths must be the directory containing the reference manifests. Consider the order in which the directories are listed. In cases where directories contain files with the same name, the file in the final directory takes precedence.

    Example SiteConfig CR
    extraManifests:
        searchPaths:
        - extra-manifest/ (1)
        - custom-manifest/ (2)
    1 The directory containing the reference manifests must be listed first under extraManifests.searchPaths.
    2 If you are using user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing those user-provided CRs.
  5. Edit the top-level kustomization.yaml file to control which OpenShift Container Platform versions are active. The following is an example of a kustomization.yaml file at the top level:

    resources:
    - version_4.13 (1)
    #- version_4.14 (2)
    1 Activate version 4.13.
    2 Use comments to deactivate a version.