Understanding containers

The basic units of OKD applications are called containers. Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.

Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads.

The Linux kernel has been incorporating capabilities for container technologies for years. OKD and Kubernetes add the ability to orchestrate containers across multi-host installations.

About containers and RHEL kernel memory

Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache and a node cache for any NUMA nodes. These caches all consume kernel memory.

The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OKD containers to exceed the configured memory limits, resulting in the container being killed.

To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache, where nproc is the number of available processing units reported by the nproc command. The lower limit of the container memory request should be this value plus the container's own memory requirement:

$(nproc) X 1/2 MiB
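
For example, on a node where the nproc command reports 16 processing units, these caches can hold roughly 16 X 1/2 MiB = 8 MiB of kernel memory, so a container whose application needs 512 MiB should request at least 520 MiB. The following pod specification is a minimal sketch of that calculation; the pod name, image, and figures are illustrative assumptions, not values from this documentation:

apiVersion: v1
kind: Pod
metadata:
  name: kmem-aware-app                          # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest      # placeholder image
    resources:
      requests:
        # 512 MiB application requirement + (16 CPUs X 1/2 MiB) kmem_cache overhead
        memory: 520Mi
      limits:
        memory: 520Mi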

About the container engine and container runtime

A container engine is a piece of software that processes user requests, including command line options and image pulls. The container engine uses a container runtime, also called a lower-level container runtime, to run and manage the components required to deploy and operate containers. You likely will not need to interact with the container engine or container runtime.

The OKD documentation uses the term container runtime to refer to the lower-level container runtime. Other documentation might refer to the container engine as the container runtime.

OKD uses CRI-O as the container engine and runC or crun as the container runtime. The default container runtime is runC. Both container runtimes adhere to the Open Container Initiative (OCI) runtime specifications.

CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OKD cluster node.

runC, developed by Docker and maintained by the Open Container Initiative, is a lightweight, portable container runtime written in Go. crun, developed by Red Hat, is a fast and low-memory container runtime fully written in C. As of OKD 4.14, you can select between the two container runtimes.

crun has several improvements over runC, including:

  • Smaller binary

  • Quicker processing

  • Lower memory footprint

runC has some benefits over crun, including:

  • Most popular OCI container runtime

  • Longer tenure in production

  • Default container runtime of CRI-O

You can move between the two container runtimes as needed.
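
As a rough sketch of how that switch is expressed, the following ContainerRuntimeConfig custom resource sets crun as the default runtime for nodes in the worker machine config pool; the resource name and pool selector shown here are assumptions for illustration, and the full procedure is described in the section linked below:

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker                # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # targets the worker pool
  containerRuntimeConfig:
    defaultRuntime: crun                  # set to runc to switch back to the default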

For information on setting which container runtime to use, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters.