Hosted control planes overview

You can deploy OKD clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With hosted control planes for OKD, you create control planes as pods on a hosting cluster without the need for dedicated virtual or physical machines for each control plane.

Introduction to hosted control planes (Technology Preview)

You can use hosted control planes for OKD to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications.

You can enable hosted control planes as a Technology Preview feature by using the multicluster engine for Kubernetes Operator version 2.0 or later, on Amazon Web Services (AWS), on bare metal by using the Agent provider, or on OKD Virtualization.
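
For example, on a management cluster where the multicluster engine Operator is installed, you might enable the Technology Preview component by patching the MultiClusterEngine resource. This is a minimal sketch; the component name hypershift-preview is an assumption and can differ between multicluster engine versions:

    # Enable the hosted control planes Technology Preview component
    # (the component name "hypershift-preview" is an assumption)
    oc patch mce multiclusterengine --type=merge -p \
      '{"spec":{"overrides":{"components":[{"name":"hypershift-preview","enabled":true}]}}}'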

Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Architecture of hosted control planes

OKD is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run.

The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster’s control plane, machine management APIs, and other components that contribute to the state of a cluster.

Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload.
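
Because the control plane runs as ordinary Kubernetes workloads, you can inspect it with standard tooling. As a hypothetical illustration, assuming a hosted cluster named example whose control plane pods run in a clusters-example namespace on the hosting cluster:

    # List the control plane pods (such as etcd and kube-apiserver) for the
    # hosted cluster; the "clusters-example" namespace name is an assumption
    oc get pods -n clusters-example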

Diagram that compares the hosted control plane model against OpenShift with a coupled control plane and workers

Benefits of hosted control planes

With hosted control planes for OKD, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits.

  • The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure.

  • With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable.

  • Because the control planes consist of pods that are launched on OKD, control planes start quickly. The same operational principles, such as monitoring, logging, and auto-scaling, apply to control planes as to other workloads.

  • From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant’s cloud provider account, isolating usage to the tenant.

  • From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity.

Relationship between hosted control planes, multicluster engine Operator, and RHACM

You can configure hosted control planes by using the multicluster engine for Kubernetes Operator. The multicluster engine is an integral part of Red Hat Advanced Cluster Management (RHACM) and is enabled by default with RHACM. The multicluster engine Operator cluster lifecycle defines the process of creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers.

The multicluster engine Operator is the cluster lifecycle Operator that provides cluster management capabilities for OKD and RHACM hub clusters. The multicluster engine Operator enhances cluster fleet management and supports OKD cluster lifecycle management across clouds and data centers.

Figure 1. Cluster life cycle and foundation

You can use the multicluster engine Operator with OKD as a standalone cluster manager or as part of a RHACM hub cluster.

A management cluster is also known as the hosting cluster.

Figure 2. RHACM and the multicluster engine Operator introduction diagram

Versioning for hosted control planes

With each major, minor, or patch version release of OKD, two components of hosted control planes are released:

  • HyperShift Operator

  • Command-line interface (CLI)

The HyperShift Operator manages the lifecycle of hosted clusters that are represented by HostedCluster API resources. The HyperShift Operator is released with each OKD release. After the HyperShift Operator is installed, it creates a config map called supported-versions in the hypershift namespace, as shown in the following example. The config map describes the HostedCluster versions that can be deployed.

    apiVersion: v1
    data:
      supported-versions: '{"versions":["4.13","4.12","4.11"]}'
    kind: ConfigMap
    metadata:
      labels:
        hypershift.openshift.io/supported-versions: "true"
      name: supported-versions
      namespace: hypershift
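
For example, you can read the supported versions directly from the config map. The jsonpath expression shown is one possible way to extract the field:

    # Print the HostedCluster versions that this HyperShift Operator can deploy
    oc get configmap supported-versions -n hypershift \
      -o jsonpath='{.data.supported-versions}'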

The CLI is a helper utility for development purposes. The CLI is released as part of each HyperShift Operator release. No compatibility policies for the CLI are guaranteed.
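
As a hedged illustration, a hosted cluster creation command with the CLI might look like the following sketch. The flag values are placeholders, and flag names can change between releases because no compatibility is guaranteed:

    # Create a hosted cluster on AWS; all values shown are placeholders
    hypershift create cluster aws \
      --name example \
      --pull-secret /path/to/pull-secret.json \
      --aws-creds ~/.aws/credentials \
      --region us-east-1 \
      --node-pool-replicas 2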

The API, hypershift.openshift.io, provides a way to create and manage lightweight, flexible, heterogeneous OKD clusters at scale. The API exposes two user-facing resources: HostedCluster and NodePool. A HostedCluster resource encapsulates the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes that is attached to a HostedCluster resource.
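
The following minimal sketch shows how the two resources relate. The names, the clusters namespace, and the v1beta1 API version are assumptions, and a working spec requires additional platform-specific fields:

    # A HostedCluster resource: encapsulates the control plane configuration
    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: example
      namespace: clusters
    spec:
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.13.0-x86_64  # placeholder release image
      pullSecret:
        name: example-pull-secret  # references a secret in the same namespace
      platform:
        type: AWS
    ---
    # A NodePool resource: a scalable set of workers attached to the HostedCluster
    apiVersion: hypershift.openshift.io/v1beta1
    kind: NodePool
    metadata:
      name: example-workers
      namespace: clusters
    spec:
      clusterName: example  # attaches this pool to the HostedCluster above
      replicas: 2
      platform:
        type: AWS

Because a NodePool is a standard scalable resource, you can adjust the worker count with a command such as oc scale nodepool example-workers -n clusters --replicas=3.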

The API version policy generally aligns with the policy for Kubernetes API versioning.