Within OpenShift Dedicated, Kubernetes manages containerized applications across a set of containers or hosts and provides mechanisms for deployment, maintenance, and application-scaling. The container runtime packages, instantiates, and runs containerized applications. A Kubernetes cluster consists of one or more masters and a set of nodes.
You can optionally configure your masters for high availability (HA) to ensure that the cluster has no single point of failure.
OpenShift Dedicated 3 uses Kubernetes 1.11 and Docker 1.13.1.
The master is the host or hosts that contain the control plane components, including the API server, controller manager server, and etcd. The master manages nodes in its Kubernetes cluster and schedules pods to run on those nodes.
Component | Description
---|---
API Server | The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also assigns pods to nodes and synchronizes pod information with service configuration.
etcd | etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the desired state. etcd can be optionally configured for high availability, typically deployed with 2n+1 peer services.
Controller Manager Server | The controller manager server watches etcd for changes to replication controller objects and then uses the API to enforce the desired state. Several such processes create a cluster with one active leader at a time.
haproxy | Optional, used when configuring highly-available masters with the native method to balance load between API master endpoints.
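To make the relationship between these components concrete, the following is a minimal replication controller definition of the kind the API server validates and the controller manager server reconciles. The name, labels, and image are placeholders chosen for illustration, not values from OpenShift Dedicated itself:

apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc                  # hypothetical name, for illustration only
spec:
  replicas: 2                       # desired state that the controller manager server enforces
  selector:
    app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: example/app:latest   # placeholder image

The API server persists an object like this in etcd, and the controller manager server creates or removes pods until the observed replica count matches spec.replicas.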
The core control plane components, the API server and the controller manager, run as static pods operated by the kubelet.
For masters that have etcd co-located on the same host, etcd is also moved to static pods. RPM-based etcd is still supported on etcd hosts that are not also masters.
In addition, the node components openshift-sdn and openvswitch are now run using a DaemonSet instead of a systemd service.
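As a rough sketch of what running a node component through a DaemonSet looks like, the following shows the general shape of such an object. The name, namespace, labels, and image are assumptions for illustration and do not reproduce the actual manifests that OpenShift Dedicated ships:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ovs                 # hypothetical name
  namespace: openshift-sdn          # assumed namespace
spec:
  selector:
    matchLabels:
      app: example-ovs
  template:
    metadata:
      labels:
        app: example-ovs
    spec:
      hostNetwork: true             # node networking components typically run on the host network
      containers:
      - name: openvswitch
        image: example/openvswitch:latest   # placeholder image

Because a DaemonSet schedules one pod on every matching node, the component runs cluster-wide without a systemd unit on each host.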
When using the native HA method with haproxy, master components have the following availability:

Role | Style | Notes
---|---|---
etcd | Active-active | Fully redundant deployment with load balancing.
API Server | Active-active | Managed by haproxy.
Controller Manager Server | Active-passive | One instance is elected as a cluster leader at a time.
haproxy | Active-passive | Balances load between API master endpoints.
A node provides the runtime environments for containers. Each node in a Kubernetes cluster has the required services to be managed by the master. Nodes also have the required services to run pods, including the container runtime, a kubelet, and a service proxy.
OpenShift Dedicated creates nodes from a cloud provider, physical systems, or virtual systems. Kubernetes interacts with node objects that are a representation of those nodes. The master uses the information from node objects to validate nodes with health checks. A node is ignored until it passes the health checks, and the master continues checking nodes until they are valid. The Kubernetes documentation has more information on node statuses and management.
Each node has a kubelet that updates the node as specified by a container manifest, which is a YAML file that describes a pod. The kubelet uses a set of manifests to ensure that its containers are started and that they continue to run.
A container manifest can be provided to a kubelet in any of the following ways (a minimal example manifest follows this list):
A file path on the command line that is checked every 20 seconds.
An HTTP endpoint passed on the command line that is checked every 20 seconds.
The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
The kubelet listening for HTTP and responding to a simple API to submit a new manifest.
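For the file and HTTP options above, the manifest is an ordinary pod definition. A minimal sketch follows; the name, image, and port are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod          # hypothetical name
spec:
  containers:
  - name: web
    image: example/web:latest       # placeholder image
    ports:
    - containerPort: 8080

Saved to the path passed on the kubelet command line, a file like this is re-read on the configured interval, and the kubelet keeps the described container running.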
Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends.
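As an illustration, the proxy forwards traffic for service definitions like the following minimal sketch; the name, selector, and ports are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: example-service             # hypothetical name
spec:
  selector:
    app: example                    # pods that back this service
  ports:
  - protocol: TCP                   # the node proxy forwards TCP and UDP streams
    port: 80                        # port exposed by the service
    targetPort: 8080                # port on the backing pods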
The following is an example node object definition in Kubernetes:
apiVersion: v1 (1)
kind: Node (2)
metadata:
  creationTimestamp: null
  labels: (3)
    kubernetes.io/hostname: node1.example.com
  name: node1.example.com (4)
spec:
  externalID: node1.example.com (5)
status:
  nodeInfo:
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: ""
    kubeletVersion: ""
    machineID: ""
    osImage: ""
    systemUUID: ""
1 | apiVersion defines the API version to use.
2 | kind set to Node identifies this as a definition for a node object.
3 | metadata.labels lists any labels that have been added to the node.
4 | metadata.name is a required value that defines the name of the node object. This value is shown in the NAME column when running the oc get nodes command.
5 | spec.externalID defines the fully-qualified domain name where the node can be reached. Defaults to the metadata.name value when empty.
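To show how the labels from callout 3 are used, a pod can be pinned to this node by matching one of its labels with a node selector. This is a minimal sketch; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-pinned-pod          # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: node1.example.com   # matches the label in the node object above
  containers:
  - name: example
    image: example/app:latest       # placeholder image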
A node’s configuration is bootstrapped from the master, which means nodes pull their pre-defined configuration and client and server certificates from the master. This allows faster node start-up by reducing the differences between nodes, as well as centralizing more configuration and letting the cluster converge on the desired state. Certificate rotation and centralized certificate management are enabled by default.
When node services are started, the node checks if the /etc/origin/node/node.kubeconfig file and other node configuration files exist before joining the cluster. If they do not, the node pulls the configuration from the master, then joins the cluster.
ConfigMaps are used to store the node configuration in the cluster, which populates the configuration file on the node host at /etc/origin/node/node-config.yaml.
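As a rough sketch of that arrangement, the node configuration can be pictured as a ConfigMap whose data key holds the node-config.yaml contents. The name, namespace, and contents below are assumptions for illustration, not the exact objects created in an OpenShift Dedicated cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-config-compute         # assumed name; actual node group ConfigMap names may differ
  namespace: openshift-node         # assumed namespace
data:
  node-config.yaml: |               # rendered onto the host at /etc/origin/node/node-config.yaml
    kind: NodeConfig                # illustrative contents only
    apiVersion: v1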