Architecture overview

OpenShift Container Platform is a cloud-based Kubernetes container platform. Because it is built on Kubernetes, it shares the same underlying technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture.

About installation and updates

As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster on either of the following infrastructure types, as illustrated after this list:

  • Installer-provisioned infrastructure clusters

  • User-provisioned infrastructure clusters
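
For example, an installer-provisioned installation is driven by an install-config.yaml file. The following is a trimmed sketch for an AWS cluster; the base domain, cluster name, region, pull secret, and SSH key are placeholder values, not working credentials:

    apiVersion: v1
    baseDomain: example.com              # placeholder base domain
    metadata:
      name: demo-cluster                 # placeholder cluster name
    compute:
    - name: worker
      replicas: 3                        # number of worker machines to provision
    controlPlane:
      name: master
      replicas: 3                        # three control plane machines for high availability
    platform:
      aws:
        region: us-east-1                # the installer provisions infrastructure in this region
    pullSecret: '{"auths": ...}'         # placeholder; obtain from the Red Hat OpenShift Cluster Manager
    sshKey: ssh-ed25519 AAAA...          # placeholder public key for node access

With user-provisioned infrastructure, you supply the machines and networking yourself, and the installation program generates the assets that those machines boot from.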

About the control plane

The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines that are organized by the resources that they handle, such as control plane components or user workloads. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types.
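
As a sketch of how roles and MCPs fit together, the following hypothetical machine config pool selects machines that carry a custom infra role label; the pool and label names are illustrative, not part of the default cluster:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: infra                                          # illustrative pool name
    spec:
      machineConfigSelector:
        matchExpressions:
        - key: machineconfiguration.openshift.io/role
          operator: In
          values: [worker, infra]                          # apply both worker and infra machine configs
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: ""                # target nodes labeled with the infra role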

You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services:

  • Perform health checks

  • Provide ways to watch applications

  • Manage over-the-air updates

  • Ensure that applications stay in the specified state
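
For example, add-on Operators are typically installed from OperatorHub by creating a Subscription resource, which Operator Lifecycle Manager then uses to deploy and update the Operator. The following is a minimal sketch; the Operator name and channel are illustrative:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: example-operator                   # illustrative Operator name
      namespace: openshift-operators
    spec:
      channel: stable                          # update channel to track for over-the-air updates
      name: example-operator                   # package name as listed in OperatorHub
      source: redhat-operators                 # catalog source that provides the package
      sourceNamespace: openshift-marketplace
      installPlanApproval: Automatic           # apply Operator updates automatically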

About containerized applications for developers

As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example:

  • Use various build-tool, base-image, and registry options to build a simple container application.

  • Use supporting components such as OperatorHub and templates to develop your application.

  • Package and deploy your application as an Operator.
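
For example, to build a simple container application from source inside the cluster, you might define a build configuration such as the following sketch; the Git repository, builder image, and image stream names are placeholders:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: example-app                              # illustrative application name
    spec:
      source:
        git:
          uri: https://github.com/example/app.git    # placeholder source repository
      strategy:
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: nodejs:14-ubi8                     # placeholder builder base image
            namespace: openshift
      output:
        to:
          kind: ImageStreamTag
          name: example-app:latest                   # push the built image to this image stream tag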

You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application.
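
As a minimal sketch of these basic units, the following manifest defines a set of pods and a service that gives them a stable internal address; the names, labels, image, and ports are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app                         # illustrative workload name
    spec:
      replicas: 2                               # run two pod instances
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app                    # pods carry this label so the service can find them
        spec:
          containers:
          - name: web
            image: quay.io/example/app:latest   # placeholder container image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      selector:
        app: example-app                        # group pods by label
      ports:
      - port: 80                                # stable service port
        targetPort: 8080                        # container port that traffic is forwarded to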

About Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition

As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks:

  • Learn about the next generation of single-purpose container operating system technology.

  • Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS).

  • Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS):

    • Installer-provisioned deployment

    • User-provisioned deployment

The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines.

You can learn how Ignition works, review the process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, view Ignition configuration files, and change the Ignition configuration after an installation.
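
For example, post-installation changes to the configuration that Ignition manages are typically made through a MachineConfig object rather than by editing Ignition files directly. The following sketch writes a single illustrative file to worker nodes; the file path and contents are placeholders, and the Ignition spec version shown assumes OpenShift Container Platform 4.6:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 99-worker-example-file                       # illustrative name; the 99- prefix orders it after defaults
      labels:
        machineconfiguration.openshift.io/role: worker   # apply to machines in the worker pool
    spec:
      config:
        ignition:
          version: 3.1.0                                 # Ignition spec version (assumed for this release)
        storage:
          files:
          - path: /etc/example.conf                      # placeholder file path
            mode: 0644
            contents:
              source: data:,example%20setting%0A         # file contents encoded as a data URL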

About CI/CD methodology

You can use the continuous integration/continuous delivery (CI/CD) methodology to automate all the stages of application development. You can also use GitOps methodology to create repeatable and predictable processes for managing and recreating OpenShift Container Platform clusters and applications.

About ArgoCD

You can use ArgoCD, a declarative GitOps continuous delivery tool, to manage cluster resources.
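
As a sketch of the declarative model, an ArgoCD Application object points at a Git repository and a target cluster and namespace; the repository URL, path, and namespaces below are illustrative:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app                                              # illustrative application name
      namespace: argocd                                              # namespace where ArgoCD runs (assumption)
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-config.git        # placeholder Git repository
        targetRevision: main
        path: environments/prod                                      # placeholder path to the manifests
      destination:
        server: https://kubernetes.default.svc                       # deploy to the local cluster
        namespace: example-app
      syncPolicy:
        automated:
          prune: true                                                # remove resources that are no longer in Git
          selfHeal: true                                             # revert drift from the state declared in Git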

About admission plug-ins

You can use admission plug-ins to regulate how OpenShift Container Platform functions. After a resource request is authenticated and authorized, admission plug-ins intercept the request to the master API to validate it and to ensure that scaling policies are adhered to. Admission plug-ins are used to enforce security policies, resource limitations, or configuration requirements.
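
Beyond the default plug-ins, admission behavior can be extended with webhook admission plug-ins. The following is a minimal sketch of a validating webhook registration; the webhook name, service, and rules are illustrative:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: example-policy-webhook                 # illustrative webhook configuration name
    webhooks:
    - name: pods.example.com                       # illustrative fully qualified webhook name
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail                          # reject requests if the webhook is unreachable
      clientConfig:
        service:
          name: example-policy-service             # placeholder service that serves the webhook
          namespace: example-policy
          path: /validate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]                        # validate pod creation requests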