A node provides the runtime environment for containers. Each node in a
Kubernetes cluster runs the services required for the master to manage it, as
well as the services required to run pods: the container runtime, a kubelet,
and a service proxy.
Azure Red Hat OpenShift creates nodes from a cloud provider, physical systems, or virtual
systems. Kubernetes interacts with node objects that represent those nodes.
The master uses the information from
node objects to validate nodes with health checks. A node is ignored until it
passes the health checks, and the master continues checking nodes until they are
valid. The Kubernetes documentation
has more information on node statuses and management.
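You can see how the master currently evaluates each node by querying node
status with the oc client. The following is a minimal sketch; the node name
node1.example.com is a placeholder taken from the example later in this
section:

# List all nodes and the health status the master reports for them
oc get nodes

# Inspect the detailed conditions (health checks) for a single node
oc describe node node1.example.com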
Azure Red Hat OpenShift creates two types of nodes:
- Infrastructure nodes, which run cluster management pods
- Compute nodes, which run customer applications and builds
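The two node types can typically be distinguished by their role labels. The
label names below are an assumption based on OpenShift 3.x conventions; verify
them in your own cluster with oc get nodes --show-labels:

# List compute nodes (label name is an assumed OpenShift 3.x convention)
oc get nodes --selector node-role.kubernetes.io/compute=true

# List infrastructure nodes
oc get nodes --selector node-role.kubernetes.io/infra=true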
Kubelet
Each node has a kubelet that updates the node as specified by a container
manifest, which is a YAML file that describes a pod. The kubelet uses a set of
manifests to ensure that its containers are started and that they continue to
run.
A container manifest can be provided to a kubelet by:
- A file path on the command line that is checked every 20 seconds.
- An HTTP endpoint passed on the command line that is checked every 20 seconds.
- The kubelet watching an etcd server, such as /registry/hosts/$(hostname -f), and acting on any changes.
- The kubelet listening for HTTP and responding to a simple API to submit a new manifest.
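For reference, a container manifest is an ordinary pod definition. The
following is a minimal sketch of one; the pod name, container name, and image
are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod                      # placeholder pod name
spec:
  containers:
  - name: hello                        # placeholder container name
    image: openshift/hello-openshift   # example image that serves on port 8080
    ports:
    - containerPort: 8080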
Service Proxy
Each node also runs a simple network proxy that reflects the services defined in
the API on that node. This allows the node to do simple TCP and UDP stream
forwarding across a set of back ends.
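For example, given a service definition such as the following sketch (the
name, label, and ports are placeholders), the proxy on each node forwards
traffic arriving for the service to the set of pods matching its selector:

apiVersion: v1
kind: Service
metadata:
  name: hello-service      # placeholder service name
spec:
  selector:
    app: hello             # pods with this label form the set of back ends
  ports:
  - protocol: TCP          # the proxy handles simple TCP and UDP forwarding
    port: 80
    targetPort: 8080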
Node Object Definition
The following is an example node object definition in Kubernetes:
apiVersion: v1 (1)
kind: Node (2)
metadata:
  creationTimestamp: null
  labels: (3)
    kubernetes.io/hostname: node1.example.com
  name: node1.example.com (4)
spec:
  externalID: node1.example.com (5)
status:
  nodeInfo:
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: ""
    kubeletVersion: ""
    machineID: ""
    osImage: ""
    systemUUID: ""
(1) apiVersion defines the API version to use.
(2) kind set to Node identifies this as a definition for a node object.
(3) metadata.labels lists any labels that have been added to the node.
(4) metadata.name is a required value that defines the name of the node object. This value is shown in the NAME column when running the oc get nodes command.
(5) spec.externalID defines the fully-qualified domain name where the node can be reached. Defaults to the metadata.name value when empty.
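To view the same definition for a node in a running cluster, you can ask the
API server for the object's YAML representation (the node name below is the
placeholder used in the example above):

oc get node node1.example.com -o yaml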
Node Bootstrapping
A node’s configuration is bootstrapped from
the master, which means nodes pull their pre-defined configuration and client
and server certificates from the master. This allows faster node start-up by
reducing the differences between nodes, as well as centralizing more
configuration and letting the cluster converge on the desired state. Certificate
rotation and centralized certificate management are enabled by default.
Figure 1. Node bootstrapping workflow overview
When node services are started, the node checks if the
/etc/origin/node/node.kubeconfig file and other node configuration files
exist before joining the cluster. If they do not, the node pulls the
configuration from the master, then joins the cluster.
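The start-up decision described above can be summarized with the following
illustrative shell sketch; it is not the actual node startup script:

# Illustrative only: the bootstrap check performed when node services start
if [ -f /etc/origin/node/node.kubeconfig ]; then
  echo "Node configuration found: joining the cluster."
else
  echo "No node configuration: pulling configuration from the master first."
fi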
ConfigMaps store the node configuration in the cluster. This configuration
populates the configuration file on the node host at
/etc/origin/node/node-config.yaml.
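You can inspect these ConfigMaps directly. The openshift-node namespace and
the node-config-compute ConfigMap name below are assumptions based on
OpenShift 3.x defaults; adjust them for your cluster:

# List the ConfigMaps that hold node configuration (namespace is an assumption)
oc get configmaps -n openshift-node

# View the configuration that populates /etc/origin/node/node-config.yaml
# (node-config-compute is a typical name, not guaranteed)
oc describe configmap node-config-compute -n openshift-node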