Advanced managed cluster configuration with SiteConfig resources

You can use SiteConfig custom resources (CRs) to deploy custom functionality and configurations in your managed clusters at installation time.

Customizing extra installation manifests in the GitOps ZTP pipeline

You can define a set of extra manifests for inclusion in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline. These manifests are linked to the SiteConfig custom resources (CRs) and are applied to the cluster during installation. Including MachineConfig CRs at install time makes the installation process more efficient.

Prerequisites
  • Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
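
    For reference, the following is a minimal sketch of an Argo CD Application CR that registers such a repository as a source. The application name, namespace, repository URL, and path are illustrative placeholders, not required values:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: clusters
      namespace: openshift-gitops
    spec:
      project: default
      source:
        repoURL: https://example.com/ztp-site-configs.git  # your site configuration repository
        targetRevision: main
        path: siteconfig                                   # directory that holds the SiteConfig CRs
      destination:
        server: https://kubernetes.default.svc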

Procedure
  1. Create a set of extra manifest CRs that the GitOps ZTP pipeline uses to customize the cluster installs.

  2. In your custom /siteconfig directory, create a /custom-manifest subdirectory for your extra manifests. The following example illustrates a sample /siteconfig directory with a /custom-manifest folder:

    siteconfig
    ├── site1-sno-du.yaml
    ├── site2-standard-du.yaml
    ├── extra-manifest/
    └── custom-manifest
        └── 01-example-machine-config.yaml

    The subdirectory names /custom-manifest and /extra-manifest used throughout are example names only. There is no requirement to use these names and no restriction on how you name these subdirectories. In this example /extra-manifest refers to the Git subdirectory that stores the contents of /extra-manifest from the ztp-site-generate container.

  3. Add your custom extra manifest CRs to the siteconfig/custom-manifest directory.
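
    For example, a custom extra manifest such as the 01-example-machine-config.yaml file shown in the directory tree above might be a MachineConfig CR. The following is an illustrative sketch only; the file path, role label, and file contents are placeholders:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 01-example-machine-config
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - path: /etc/example.conf       # placeholder file written to each matching node
            mode: 420                     # decimal for octal 0644
            overwrite: true
            contents:
              source: data:text/plain;charset=utf-8;base64,ZXhhbXBsZQo=  # "example"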

  4. In your SiteConfig CR, enter the directory name in the extraManifests.searchPaths field, for example:

    clusters:
    - clusterName: "example-sno"
      networkType: "OVNKubernetes"
      extraManifests:
        searchPaths:
          - extra-manifest/ (1)
          - custom-manifest/ (2)
    1 Folder for manifests copied from the ztp-site-generate container.
    2 Folder for custom manifests.
  5. Save the SiteConfig CR and the /extra-manifest and /custom-manifest directories, and push them to the site configuration repository.

During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the /custom-manifest directory to the default set of extra manifests stored in extra-manifest/.

As of version 4.14, the extraManifestPath field is deprecated.

While extraManifestPath is still supported, we recommend that you use extraManifests.searchPaths. If you define extraManifests.searchPaths in the SiteConfig file, the GitOps ZTP pipeline does not fetch manifests from the ztp-site-generate container during site installation.

If you define both extraManifestPath and extraManifests.searchPaths in the SiteConfig CR, the setting defined for extraManifests.searchPaths takes precedence.
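
For example, a SiteConfig CR that uses the deprecated field can be migrated as follows. This is a sketch; the directory names match the earlier example:

    clusters:
    - clusterName: "example-sno"
      # Deprecated as of version 4.14:
      # extraManifestPath: "custom-manifest/"
      # Recommended replacement:
      extraManifests:
        searchPaths:
          - extra-manifest/
          - custom-manifest/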

It is strongly recommended that you extract the contents of /extra-manifest from the ztp-site-generate container and push them to the Git repository.
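
For example, you can extract the reference manifests with commands similar to the following. The image tag and output directory are illustrative:

    $ mkdir -p ./out
    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15 extract /home/ztp --tar | tar x -C ./out

You can then commit the extracted extra-manifest directory to the Git subdirectory that your extraManifests.searchPaths entry references.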

Filtering custom resources using SiteConfig filters

By using filters, you can easily customize SiteConfig custom resources (CRs) to include or exclude other CRs for use in the installation phase of the GitOps Zero Touch Provisioning (ZTP) pipeline.

You can specify an inclusionDefault value of include or exclude for the SiteConfig CR, along with a list of the specific extraManifest RAN CRs that you want to include or exclude. Setting inclusionDefault to include makes the GitOps ZTP pipeline apply all the files in /source-crs/extra-manifest during installation. Setting inclusionDefault to exclude does the opposite.

You can exclude individual CRs from the /source-crs/extra-manifest folder that are otherwise included by default. The following example configures a custom single-node OpenShift SiteConfig CR to exclude the /source-crs/extra-manifest/03-sctp-machine-config-worker.yaml CR at installation time.

Some additional optional filtering scenarios are also described.

Prerequisites
  • You configured the hub cluster for generating the required installation and policy CRs.

  • You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.

Procedure
  1. To prevent the GitOps ZTP pipeline from applying the 03-sctp-machine-config-worker.yaml CR file, apply the following YAML in the SiteConfig CR:

    apiVersion: ran.openshift.io/v1
    kind: SiteConfig
    metadata:
      name: "site1-sno-du"
      namespace: "site1-sno-du"
    spec:
      baseDomain: "example.com"
      pullSecretRef:
        name: "assisted-deployment-pull-secret"
      clusterImageSetNameRef: "openshift-4.15"
      sshPublicKey: "<ssh_public_key>"
      clusters:
      - clusterName: "site1-sno-du"
        extraManifests:
          filter:
            exclude:
              - 03-sctp-machine-config-worker.yaml

    The GitOps ZTP pipeline skips the 03-sctp-machine-config-worker.yaml CR during installation. All other CRs in /source-crs/extra-manifest are applied.

  2. Save the SiteConfig CR and push the changes to the site configuration repository.

    The GitOps ZTP pipeline monitors and adjusts what CRs it applies based on the SiteConfig filter instructions.

  3. Optional: To prevent the GitOps ZTP pipeline from applying all the /source-crs/extra-manifest CRs during cluster installation, apply the following YAML in the SiteConfig CR:

    - clusterName: "site1-sno-du"
      extraManifests:
        filter:
          inclusionDefault: exclude
  4. Optional: To exclude all the /source-crs/extra-manifest RAN CRs and instead include a custom CR file during installation, edit the custom SiteConfig CR to set the custom manifests folder and the include file, for example:

    clusters:
    - clusterName: "site1-sno-du"
      extraManifestPath: "<custom_manifest_folder>" (1)
      extraManifests:
        filter:
          inclusionDefault: exclude  (2)
          include:
            - custom-sctp-machine-config-worker.yaml
    1 Replace <custom_manifest_folder> with the name of the folder that contains the custom installation CRs, for example, user-custom-manifest/.
    2 Set inclusionDefault to exclude to prevent the GitOps ZTP pipeline from applying the files in /source-crs/extra-manifest during installation.

    The following example illustrates the custom folder structure:

    siteconfig
    ├── site1-sno-du.yaml
    └── user-custom-manifest
        └── custom-sctp-machine-config-worker.yaml

Deleting a node by using the SiteConfig CR

By using a SiteConfig custom resource (CR), you can delete and reprovision a node. This method is more efficient than manually deleting the node.

Prerequisites
  • You have configured the hub cluster to generate the required installation and policy CRs.

  • You have created a Git repository in which you can manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as the source repository for the Argo CD application.

Procedure
  1. Update the SiteConfig CR to include the bmac.agent-install.openshift.io/remove-agent-and-node-on-delete=true annotation and push the changes to the Git repository:

    apiVersion: ran.openshift.io/v1
    kind: SiteConfig
    metadata:
      name: "cnfdf20"
      namespace: "cnfdf20"
    spec:
      clusters:
      - nodes:
        - hostName: node6
          role: "worker"
          crAnnotations:
            add:
              BareMetalHost:
                bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: "true"
    # ...
  2. Verify that the BareMetalHost object is annotated by running the following command:

    $ oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations["bmac.agent-install.openshift.io/remove-agent-and-node-on-delete"]'
    Example output
    true
  3. Suppress the generation of the BareMetalHost CR by updating the SiteConfig CR to include the crSuppression.BareMetalHost annotation:

    apiVersion: ran.openshift.io/v1
    kind: SiteConfig
    metadata:
      name: "cnfdf20"
      namespace: "cnfdf20"
    spec:
      clusters:
      - nodes:
        - hostName: node6
          role: "worker"
          crSuppression:
          - BareMetalHost
    # ...
  4. Push the changes to the Git repository and wait for deprovisioning to start. The status of the BareMetalHost CR should change to deprovisioning. Wait for the BareMetalHost to finish deprovisioning, and be fully deleted.
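
    To watch for the state change, you can query the provisioning state of the BareMetalHost CR directly. This jsonpath query is a sketch that uses the same placeholders as the earlier verification command:

    $ oc get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.status.provisioning.state}'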

Verification
  1. Verify that the BareMetalHost and Agent CRs for the worker node have been deleted from the hub cluster by running the following commands:

    $ oc get bmh -n <cluster-ns>
    $ oc get agent -n <cluster-ns>
  2. Verify that the node record has been deleted from the spoke cluster by running the following command:

    $ oc get nodes

    If you are working with secrets, deleting a secret too early can cause a problem because Argo CD needs the secret to complete resynchronization after the deletion. Delete the secret only after the node cleanup, when the current Argo CD synchronization is complete.

Next steps

To reprovision a node, delete the changes that you previously added to the SiteConfig CR, push the changes to the Git repository, and wait for the synchronization to complete. This regenerates the BareMetalHost CR of the worker node and triggers the reinstallation of the node.
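
For example, after you remove the crAnnotations and crSuppression entries that were added earlier, the node entry returns to a minimal form similar to the following sketch:

    apiVersion: ran.openshift.io/v1
    kind: SiteConfig
    metadata:
      name: "cnfdf20"
      namespace: "cnfdf20"
    spec:
      clusters:
      - nodes:
        - hostName: node6
          role: "worker"
    # ...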