Red Hat Enterprise Linux CoreOS (RHCOS) represents the next generation of single-purpose container operating system technology. Created by the same development teams that created Red Hat Enterprise Linux Atomic Host and CoreOS Container Linux, RHCOS combines the quality standards of Red Hat Enterprise Linux (RHEL) with the automated, remote upgrade features from Container Linux.
RHCOS is supported only as a component of OpenShift Container Platform 4.4 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.4:
If you install your cluster on infrastructure that the cluster provisions, RHCOS images are downloaded to the target platform during installation, and suitable Ignition config files, which control the RHCOS configuration, are used to deploy the machines.
If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines.
The following list describes key features of the RHCOS operating system:
Based on RHEL: The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system.
Controlled immutability: Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations.
CRI-O container runtime: Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is only available as a container engine within OpenShift Container Platform clusters.
Set of container tools: For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes.
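For such debugging, you typically reach these tools through a debug pod rather than by logging in to the node directly. A minimal sketch, assuming a placeholder node name and that the oc debug workflow is available in your cluster:

$ oc debug node/<node_name>
# chroot /host
# crictl ps
# podman images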
rpm-ostree upgrades: RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted.
Updated through the Machine Config Operator: In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the next reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates.
For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics:
/usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this.
/etc, /boot, /var are writable on the system but only intended to be altered by the Machine Config Operator.
/var/lib/containers is the graph storage location for storing container images.
RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of:
Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself.
Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install.
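A heavily trimmed sketch of what such an install-config.yaml file might look like for a cluster on AWS; the domain, cluster name, and region are placeholders, and a real file contains additional sections such as compute and controlPlane:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
pullSecret: '<your pull secret>'
sshKey: '<your public SSH key>'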
Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines in a cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways:
Kubernetes workload objects (DaemonSet, Deployment, etc.): If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades.
Day-2 customizations: If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations.
Day-1 customizations: For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs on user-provisioned infrastructure.
Here are examples of customizations you could do on day-1:
Kernel arguments: If particular kernel features or tuning is needed on nodes when the cluster first boots.
Disk encryption: If your security needs require that the root file system on the nodes is encrypted, such as with FIPS support.
Kernel modules: If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel.
Chronyd: If you want to provide specific clock settings to your nodes, such as the location of time servers.
To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up.
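For example, to add a kernel argument on day 1, you can generate the installation manifests, drop a MachineConfig manifest into the openshift directory of your installation assets, and then continue the installation. This is a hedged sketch: the file name, role, and the loglevel=7 argument are illustrative, and the directory matches the $HOME/testconfig example used later in this section.

$ openshift-install create manifests --dir $HOME/testconfig
$ cat <<EOF > $HOME/testconfig/openshift/99-worker-kargs-example.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-kargs-example
spec:
  kernelArguments:
    - loglevel=7
EOF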
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user:
Installer provisioned: Some cloud environments offer pre-configured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots.
User provisioned: If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config.
The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later by using machine configs.
Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines.
Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs).
To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file.
The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences:
Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine’s permanent file system. In contrast, cloud-init runs as part of a machine’s init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node’s boot process.
Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration.
Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration.
After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot.
Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially-configured machine. If a machine’s setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially-configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date.
If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up other machines until the problem is resolved.
If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if one file needs a directory several levels deep and another file needs a directory along that path, the directories are created in the correct order, because Ignition sorts and creates all files, directories, and links by depth.
Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch (using features such as PXE boot). In the bare metal case, the Ignition config is injected into the boot partition so Ignition can find it and configure the system correctly.
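In that bare metal case, the location of the Ignition config is typically given as kernel arguments when you boot the RHCOS installer. A hedged sketch of such arguments; the device name and URLs are placeholders:

coreos.inst=yes
coreos.inst.install_dev=sda
coreos.inst.image_url=http://example.com/rhcos-metal.raw.gz
coreos.inst.ignition_url=http://example.com/worker.ign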
The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps:
The machine gets its Ignition config file. Master machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a master.
Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes.
Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory.
Ignition configures all defined file systems and sets them up to mount appropriately at runtime.
Ignition runs systemd temporary files to populate required files in the /var directory.
Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files.
Ignition unmounts all components in the permanent system that were mounted in the initramfs.
Ignition starts up the new machine's init process which, in turn, starts up all other services on the machine that run during system boot.
The machine is then ready to join the cluster and does not require a reboot.
To see the Ignition config file used to deploy the bootstrap machine, run the following command:
$ openshift-install create ignition-configs --dir $HOME/testconfig
After you answer a few questions, the bootstrap.ign, master.ign, and worker.ign files appear in the directory you entered.
To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file:
$ cat $HOME/testconfig/bootstrap.ign | jq
{
  "ignition": {
    "config": {},
    "storage": {
      "files": [
        {
          "filesystem": "root",
          "path": "/etc/motd",
          "user": {
            "name": "root"
          },
          "append": true,
          "contents": {
            "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2UgaXMgImJvb3RrdWJlLnNlcnZpY2UiLiBUbyB3YXRjaCBpdHMgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IGJvb3RrdWJlLnNlcnZpY2UK",
To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above:
$ echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2UgaXMgImJvb3RrdWJlLnNlcnZpY2UiLiBUbyB3YXRjaCBpdHMgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IGJvb3RrdWJlLnNlcnZpY2UK | base64 -d
This is the bootstrap machine; it will be destroyed when the master is fully up.

The primary service is "bootkube.service". To watch its status, run, e.g.:

  journalctl -b -f -u bootkube.service
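You can also combine the extraction and decoding into a single step. This is a sketch that assumes the /etc/motd entry shown in the earlier snippet; it uses jq to pull out the data URL, strips the data URL prefix with sed, and decodes the rest:

$ jq -r '.storage.files[] | select(.path == "/etc/motd") | .contents.source' $HOME/testconfig/bootstrap.ign | sed 's/^data:.*base64,//' | base64 -d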
Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign, identifying how it gets its Ignition config from the bootstrap machine:
"source": "https://api.myign.develcluster.example.com:22623/config/worker",
Here are a few things you can learn from the bootstrap.ign file:
Format: The format of the file is defined in the Ignition config spec. Files of the same format are used later by the MCO to merge changes into a machine’s configuration.
Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign file, along with the bootstrap machine's configuration.
Size: The file is more than 1300 lines long, with paths to various types of resources.
The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.)
Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine’s file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up.
users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials (see the sketch after this list).
storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster.
systemd: The systemd section holds content used to create systemd unit files. Those files are used to start up services at boot time, as well as manage those services on running systems.
Primitives: Ignition also exposes low-level primitives that other tools can build on.
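For example, the core user described above appears in the passwd section of the Ignition config. A trimmed sketch of what that section looks like in the Ignition spec 2.2.0 format, with a placeholder key:

"passwd": {
  "users": [
    {
      "name": "core",
      "sshAuthorizedKeys": [
        "ssh-rsa AAAA... <your public key>"
      ]
    }
  ]
}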
Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. To list all machine config pools that are known:
$ oc get machineconfigpools
NAME     CONFIG                                    UPDATED   UPDATING   DEGRADED
master   master-1638c1aea398413bb918e76632f20799   False     False      False
worker   worker-2feef4f8288936489a5a832ca8efe953   False     False      False
To list all machine configs:
$ oc get machineconfig
NAME                                      GENERATEDBYCONTROLLER   IGNITIONVERSION   CREATED   OSIMAGEURL
00-master                                 4.0.0-0.150.0.0-dirty   2.2.0             16m
00-master-ssh                             4.0.0-0.150.0.0-dirty                     16m
00-worker                                 4.0.0-0.150.0.0-dirty   2.2.0             16m
00-worker-ssh                             4.0.0-0.150.0.0-dirty                     16m
01-master-kubelet                         4.0.0-0.150.0.0-dirty   2.2.0             16m
01-worker-kubelet                         4.0.0-0.150.0.0-dirty   2.2.0             16m
master-1638c1aea398413bb918e76632f20799   4.0.0-0.150.0.0-dirty   2.2.0             16m
worker-2feef4f8288936489a5a832ca8efe953   4.0.0-0.150.0.0-dirty   2.2.0             16m
The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file.
The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool.
To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example:
$ oc describe machineconfigs 01-worker-container-runtime | grep Path:
  Path: /etc/containers/registries.conf
  Path: /etc/containers/storage.conf
  Path: /etc/crio/crio.conf
If you create your own machine config to change one of these settings, be sure to give the machine config file a later name (such as 10-worker-container-runtime) so that it takes precedence. Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster.
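A minimal sketch of such a machine config; the name, target file path, and URL-encoded contents here are illustrative only:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-worker-example-file
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf
        filesystem: root
        mode: 420
        contents:
          source: data:,example%20setting%0A

$ oc create -f 10-worker-example-file.yaml

The Machine Config Operator then unions this with the other machine configs for the worker role into a new rendered machine config and rolls it out to the worker nodes.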