The master is the host or hosts that contain the control plane components,
including the API server, controller manager server, and etcd. The master
manages nodes in its Kubernetes cluster and schedules
pods to run on those nodes.
Control Plane Static Pods
The core control plane components, the API server and the controller manager,
run as static pods operated by the kubelet.
For masters that have etcd co-located on the same host, etcd is also moved to
static pods. RPM-based etcd is still supported on etcd hosts that are not also
masters.
In addition, the node components openshift-sdn and
openvswitch are now run using a DaemonSet instead of a systemd service.
Figure 1. Control plane host architecture changes
Even with control plane components running as static pods, master hosts still
source their configuration from the /etc/origin/master/master-config.yaml
file, as described in the
Master and Node Configuration topic.
Startup Sequence Overview
Hyperkube is a binary that contains all of Kubernetes (kube-apiserver, controller-manager, scheduler, proxy, and kubelet). On startup, the kubelet creates the kubepods.slice. Next, the kubelet creates the QoS-level slices burstable.slice and best-effort.slice inside the kubepods.slice. When a pod starts, the kubelet creates a pod-level slice with the format pod<UUID-of-pod>.slice
and passes that path to the runtime on the other side of the Container Runtime Interface (CRI). Docker or CRI-O then creates the container-level slices inside the pod-level slice.
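You can inspect the resulting control group hierarchy on a master host with a command such as systemd-cgls. The output below is an abridged, illustrative sketch, not captured from a real cluster: systemd escapes nested slice names, so the QoS-level slices typically appear as kubepods-burstable.slice and kubepods-besteffort.slice, and the pod UUID and container ID vary:
# systemd-cgls kubepods.slice
├─kubepods-burstable.slice
│ └─kubepods-burstable-pod<UUID-of-pod>.slice
│   └─docker-<container-ID>.scope
└─kubepods-besteffort.slice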
Mirror Pods
The kubelet on master nodes automatically creates mirror pods on the API
server for each of the control plane static pods so that they are visible in the
cluster in the kube-system project. Manifests for these static pods are
installed by default by the openshift-ansible installer and are located in the
/etc/origin/node/pods directory on the master host.
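For example, you can list the mirror pods in the kube-system project. The hostname and ages in this output are hypothetical, and the exact pod names depend on your installation:
$ oc get pods -n kube-system
NAME                                     READY     STATUS    RESTARTS   AGE
master-api-master1.example.com           1/1       Running   0          5d
master-controllers-master1.example.com   1/1       Running   0          5d
master-etcd-master1.example.com          1/1       Running   0          5d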
These static pods have the following hostPath volumes defined:

| Path | Description |
| /etc/origin/master | Contains all certificates, configuration files, and the admin.kubeconfig file. |
| /var/lib/origin | Contains volumes and potential core dumps of the binary. |
| /etc/origin/cloudprovider | Contains cloud provider specific configuration (AWS, Azure, etc.). |
| /usr/libexec/kubernetes/kubelet-plugins | Contains additional third party volume plug-ins. |
| /etc/origin/kubelet-plugins | Contains additional third party volume plug-ins for system containers. |
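You can also inspect the static pod manifests directly on a master host. The file names below reflect a default openshift-ansible installation and are an assumption; they can differ in your environment:
# ls /etc/origin/node/pods
apiserver.yaml  controller.yaml  etcd.yaml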
The set of operations you can do on the static pods is limited. For example:
$ oc logs master-api-<hostname> -n kube-system
returns the standard output from the API server. However:
$ oc delete pod master-api-<hostname> -n kube-system
does not actually delete the pod. Because the kubelet runs the static pod from
its on-disk manifest, the object in the API is only a mirror, and the kubelet
recreates it.
As another example, a cluster administrator might want to increase the
loglevel of the API server to provide more verbose data when a problem occurs.
To do so, edit the /etc/origin/master/master.env file and modify the
--loglevel parameter in the OPTIONS variable, because this value is passed to
the process running inside the container. Changes require a restart of the
process running inside the container.
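For example, the following sketch raises the API server log level and then restarts the process. The existing value of 2 is an assumption; check the file before editing it:
# grep OPTIONS /etc/origin/master/master.env
OPTIONS=--loglevel=2
# sed -i 's/--loglevel=2/--loglevel=4/' /etc/origin/master/master.env
# master-restart api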
Restarting Master Services
To restart control plane services running in control plane static pods, use the
master-restart
command on the master host.
To restart the master API:
# master-restart api
To restart the controllers:
# master-restart controllers
Viewing Master Service Logs
To view logs for control plane services running in control plane static pods,
use the master-logs
command for the respective component:
# master-logs controllers controllers
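The same pattern applies to the other control plane components. For example, assuming the default component names:
# master-logs api api
# master-logs etcd etcd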
High Availability Masters
You can optionally configure your masters for high
availability (HA) to ensure that the cluster has no single point of failure.
To mitigate concerns about availability of the master, two activities are
recommended:
- A runbook entry should be created for reconstructing the master. A runbook
entry is a necessary backstop for any highly available service. Additional
solutions merely control the frequency with which the runbook must be
consulted. For example, a cold standby of the master host can adequately
fulfill SLAs that require no more than minutes of downtime for creation of new
applications or recovery of failed application components.
- Use a high availability solution to configure your masters and ensure that
the cluster has no single point of failure. The cluster installation
documentation provides specific examples using the native HA method and
configuring haproxy. You can also take the concepts and apply them toward your
existing HA solutions using the native method instead of haproxy.
In production OKD clusters, you must maintain high availability
of the API Server load balancer. If the API Server load balancer is not
available, nodes cannot report their status, all of their pods are marked dead,
and the pods' endpoints are removed from the service.
In addition to configuring HA for OKD, you must separately configure
HA for the API Server load balancer. To configure HA, it is much preferred to
integrate an enterprise load balancer (LB), such as an F5 Big-IP™ or a Citrix
Netscaler™ appliance. If such solutions are not available, it is possible to
run multiple haproxy load balancers and use Keepalived to provide a floating
virtual IP address for HA. However, this solution is not recommended for
production instances.
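As a minimal sketch of the Keepalived approach, each haproxy host could run a VRRP instance that manages the floating virtual IP. The interface name, router ID, priority, and address below are placeholders, not values from this document:
# cat /etc/keepalived/keepalived.conf
vrrp_instance master_vip {
    state BACKUP          # hypothetical: let priority decide which node holds the VIP
    interface eth0        # placeholder network interface
    virtual_router_id 51  # placeholder; must match on all load balancer hosts
    priority 100          # give each host a different priority
    virtual_ipaddress {
        192.0.2.100       # placeholder floating VIP for the API load balancer
    }
}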
When using the native
HA method with haproxy, master components have the
following availability:
Table 2. Availability Matrix with haproxy

| Role | Style | Notes |
| etcd | Active-active | Fully redundant deployment with load balancing. Can be installed on separate hosts or collocated on master hosts. |
| API Server | Active-active | Managed by haproxy. |
| Controller Manager Server | Active-passive | One instance is elected as a cluster leader at a time. |
| haproxy | Active-passive | Balances load between API master endpoints. |
While clustered etcd requires an odd number of hosts to maintain quorum, the
master services have no quorum requirement and do not need an odd number of
hosts. For example, a three-member etcd cluster keeps quorum as long as a
majority of two members is available, so it tolerates the failure of one host.
However, because you need at least two master services for HA, it is common to
maintain a uniform odd number of hosts when collocating master services and
etcd.