Welcome to the official OpenShift Container Platform 4.12 documentation, where you can learn about OpenShift Container Platform and start exploring its features.
To navigate the OpenShift Container Platform 4.12 documentation, you can use one of the following methods:
Use the left navigation bar to browse the documentation.
Select the task that interests you from the contents of this Welcome page.
Start with Architecture and Security and compliance. Then, see the release notes.
Explore these OpenShift Container Platform installation tasks.
OpenShift Container Platform installation overview: You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The OpenShift Container Platform installation program provides the flexibility to deploy OpenShift Container Platform on a range of different platforms.
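As a minimal illustration of the installer-provisioned flow, the following commands are a sketch only; the installation directory is a placeholder and the prompts and required configuration vary by platform:

```console
$ openshift-install create install-config --dir <installation_directory>
$ openshift-install create cluster --dir <installation_directory> --log-level=info
```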
Install a cluster on Alibaba: You can install OpenShift Container Platform on Alibaba Cloud on installer-provisioned infrastructure. This is currently a Technology Preview feature only.
Install a cluster on AWS: You have many installation options when you deploy a cluster on Amazon Web Services (AWS). You can deploy clusters with default settings or custom AWS settings. You can also deploy a cluster on AWS infrastructure that you provisioned yourself. You can modify the provided AWS CloudFormation templates to meet your needs.
Install a cluster on Azure: You can deploy clusters with default settings, custom Azure settings, or custom networking settings in Microsoft Azure. You can also provision OpenShift Container Platform into an Azure Virtual Network or use Azure Resource Manager Templates to provision your own infrastructure.
Install a cluster on Azure Stack Hub: You can install OpenShift Container Platform on Azure Stack Hub on installer-provisioned infrastructure.
Install a cluster on GCP: You can deploy clusters with default settings or custom GCP settings on Google Cloud Platform (GCP). You can also perform a GCP installation where you provision your own infrastructure.
Install a cluster on IBM Cloud VPC: You can install OpenShift Container Platform on IBM Cloud VPC on installer-provisioned infrastructure.
Install a cluster on IBM Power: You can install OpenShift Container Platform on IBM Power on user-provisioned infrastructure.
Install a cluster on VMware vSphere: You can install OpenShift Container Platform on supported versions of vSphere.
Install a cluster on VMware Cloud: You can install OpenShift Container Platform on supported versions of VMware Cloud (VMC) on AWS.
Install a cluster with z/VM on IBM Z and IBM® LinuxONE: You can install OpenShift Container Platform with z/VM on IBM Z and IBM® LinuxONE on user-provisioned infrastructure.
Install a cluster with RHEL KVM on IBM Z and IBM® LinuxONE: You can install OpenShift Container Platform with RHEL KVM on IBM Z and IBM® LinuxONE on user-provisioned infrastructure.
Install an installer-provisioned cluster on bare metal: You can install OpenShift Container Platform on bare metal with an installer-provisioned architecture.
Install a user-provisioned cluster on bare metal: If none of the available platform and cloud provider deployment options meet your needs, you can install OpenShift Container Platform on user-provisioned bare metal infrastructure.
Install a cluster on Red Hat OpenStack Platform (RHOSP): You can install a cluster on RHOSP with customizations, with network customizations, or on a restricted network on installer-provisioned infrastructure.
You can install a cluster on RHOSP with customizations or with network customizations on user-provisioned infrastructure.
Install a cluster on Red Hat Virtualization (RHV): You can deploy clusters on Red Hat Virtualization (RHV) with a quick install or an install with customizations.
Install a cluster in a restricted network: If your cluster uses user-provisioned infrastructure on AWS, GCP, vSphere, IBM Z and IBM® LinuxONE with z/VM, IBM Z and IBM® LinuxONE with RHEL KVM, IBM Power, or bare metal and does not have full access to the internet, mirror the OpenShift Container Platform installation images and then install the cluster in a restricted network.
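As a rough sketch of the mirroring step (the mirror registry host, repository, and release version are placeholders):

```console
$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:<release_version>-x86_64 \
    --to=<mirror_registry>/<repository> \
    --to-release-image=<mirror_registry>/<repository>:<release_version>
```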
Install a cluster in an existing network: If you use an existing Virtual Private Cloud (VPC) in AWS or GCP or an existing VNet on Azure, you can install a cluster.
Install a private cluster: If your cluster does not require external internet access, you can install a private cluster on AWS, Azure, GCP, or IBM Cloud VPC. Internet access is still required to access the cloud APIs and installation media.
Check installation logs: Access installation logs to evaluate issues that occur during OpenShift Container Platform installation.
Access OpenShift Container Platform: Use credentials output at the end of the installation process to log in to the OpenShift Container Platform cluster from the command line or web console.
Install Red Hat OpenShift Data Foundation: You can install Red Hat OpenShift Data Foundation as an Operator to provide highly integrated and simplified persistent storage management for containers.
Install a cluster on Nutanix: You can install a cluster on your Nutanix instance that uses installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.
Red Hat Enterprise Linux CoreOS (RHCOS) image layering allows you to add new images on top of the base RHCOS image. This layering does not modify the base RHCOS image. Instead, it creates a custom layered image that includes all RHCOS functionality and adds additional functionality to specific nodes in the cluster.
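A minimal sketch of the layering workflow, assuming a hypothetical package name; the base image pull spec must match the RHCOS image of your cluster's release:

```dockerfile
# Containerfile: add content on top of the RHCOS base image without modifying the base image
FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<rhcos_base_digest>
RUN rpm-ostree install <package_name> && \
    rpm-ostree cleanup -m && \
    ostree container commit
```

You then push the resulting layered image to a registry and reference it from the machine config for the nodes that need the additional functionality.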
Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you:
Understand OpenShift Container Platform development: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.
Work with projects: Create projects from the OpenShift Container Platform web console or OpenShift CLI (oc) to organize and share the software you develop.
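For example, from the command line (the project name and descriptions are placeholders):

```console
$ oc new-project my-project --display-name="My Project" --description="Sandbox for my team"
$ oc project my-project
```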
Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications.
Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base.
Connect your workloads to backing services: The Service Binding Operator enables application developers to easily bind workloads with Operator-managed backing services by automatically collecting and sharing binding data with the workloads. The Service Binding Operator improves the development lifecycle with a consistent and declarative service binding method that prevents discrepancies in cluster environments.
Use the developer CLI tool (odo): The odo CLI tool lets developers create single or multi-component applications easily and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications.
Create CI/CD Pipelines: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture.
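A minimal Tekton Task as a sketch (the task name, step name, and image are illustrative); OpenShift Pipelines builds its pipelines from resources like this:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/usr/bin/env bash
        echo "Hello from a Tekton step"
```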
Manage your infrastructure and application configurations: GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles.
Deploy Helm charts: Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes the OpenShift Container Platform resources.
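For example, adding the Red Hat Helm chart repository and installing a chart into the current project (the release and chart names are placeholders):

```console
$ helm repo add openshift-helm-charts https://charts.openshift.io/
$ helm install my-release openshift-helm-charts/<chart_name>
```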
Understand image builds: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (from places like Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds.
Create container images: A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python.
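For example, a source-to-image build from a Git repository (the repository URL and application name are placeholders):

```console
$ oc new-app nodejs~https://github.com/<your_org>/<your_repo>.git --name=my-nodejs-app
$ oc get imagestreams
```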
Create deployments: Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI (oc). Learn rolling, recreate, and custom deployment strategies.
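A minimal Deployment manifest as a sketch (the application name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: quay.io/<your_org>/hello-app:latest
          ports:
            - containerPort: 8080
```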
Create templates: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built.
Understand Operators: Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.12. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects.
Develop Operators: Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.12. Learn the workflow for building, testing, and deploying Operators. Then, create your own Operators based on Ansible or Helm, or configure built-in Prometheus monitoring using the Operator SDK.
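For example, scaffolding an Ansible-based Operator with the Operator SDK (the domain, group, kind, and image are placeholders):

```console
$ operator-sdk init --plugins=ansible --domain=example.com
$ operator-sdk create api --group=cache --version=v1 --kind=Memcached --generate-role
$ make docker-build docker-push IMG=<registry>/<user>/memcached-operator:v0.0.1
```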
REST API reference: Learn about OpenShift Container Platform application programming interface endpoints.
As a cluster administrator for OpenShift Container Platform, this documentation helps you:
Understand OpenShift Container Platform management: Learn about components of the OpenShift Container Platform 4.12 control plane. See how OpenShift Container Platform control plane and compute nodes are managed and updated through the Machine API and Operators.
Enable cluster capabilities that were disabled prior to installation: Cluster administrators can enable cluster capabilities that were disabled prior to installation. For more information, see Enabling cluster capabilities.
Manage machines: Manage compute and control plane machines in your cluster with machine sets, by deploying health checks, and applying autoscaling.
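For example, scaling a compute machine set from the command line (the machine set name is cluster-specific):

```console
$ oc get machinesets -n openshift-machine-api
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=3
```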
Manage container registries: Each OpenShift Container Platform cluster includes a built-in container registry for storing its images. You can also configure a separate Red Hat Quay registry to use with OpenShift Container Platform. The Quay.io web site provides a public container registry that stores OpenShift Container Platform containers and Operators.
Manage users and groups: Add users and groups with different levels of permissions to use or modify clusters.
Manage authentication: Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers.
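As a sketch, an OAuth configuration that adds an htpasswd identity provider; it assumes a secret named htpass-secret already exists in the openshift-config namespace:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: my_htpasswd_provider
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret
```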
Manage ingress, API server, and service certificates: OpenShift Container Platform creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. You might need to change, add, or rotate these certificates.
Manage networking: The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic.
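For example, a network policy that permits ingress only from pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
```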
Manage storage: OpenShift Container Platform allows cluster administrators to configure persistent storage using Red Hat OpenShift Data Foundation, AWS Elastic Block Store, NFS, iSCSI, Container Storage Interface (CSI), and more. You can expand persistent volumes, configure dynamic provisioning, and use CSI to configure, clone, and use snapshots of persistent storage.
Manage Operators: Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters. After you install them, you can run, upgrade, back up, or otherwise manage the Operator on your cluster.
Use custom resource definitions (CRDs) to modify the cluster: Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs.
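A minimal CustomResourceDefinition as a sketch (the group and kind are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```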
Set resource quotas: Choose from CPU, memory, and other system resources to set quotas.
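For example, a quota that caps compute resources in a project (the values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```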
Prune and reclaim resources: Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs.
Scale and tune clusters: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.
Update a cluster: Use the Cluster Version Operator (CVO) to upgrade your OpenShift Container Platform cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from either the OpenShift Container Platform web console or the OpenShift CLI (oc).
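For example, checking for and applying an available update from the command line (the target version is a placeholder):

```console
$ oc adm upgrade
$ oc adm upgrade --to=<target_version>
```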
Understanding the OpenShift Update Service: Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments.
Improving cluster stability in high latency environments using worker latency profiles: If your network has latency issues, you can use one of three worker latency profiles to help ensure that your control plane does not accidentally evict pods in case it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster.
Work with OpenShift Logging: Learn about OpenShift Logging and configure different OpenShift Logging types, such as Elasticsearch, Fluentd, and Kibana.
Red Hat OpenShift distributed tracing platform: Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use the distributed tracing platform for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications.
Red Hat build of OpenTelemetry: Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software’s performance and behavior. Use open source backends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate.
Network Observability: Observe network traffic for OpenShift Container Platform clusters by using eBPF technology to create and enrich network flows. You can view dashboards, customize alerts, and analyze network flow information for further insight and troubleshooting.
In-cluster monitoring: Learn to configure the monitoring stack. After configuring monitoring, use the web console to access monitoring dashboards. In addition to infrastructure metrics, you can also scrape and view metrics for your own services.
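For example, monitoring for user-defined projects is enabled through the cluster monitoring config map; a minimal sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```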
Remote health monitoring: OpenShift Container Platform collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring.