Managing huge pages is a Technology Preview feature only.
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
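For example, you can confirm a node's base page size with `getconf`; on most x86_64 systems this reports 4096 bytes (4Ki):

```
$ getconf PAGESIZE
4096
```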
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, applications must be written to be aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use (or may recommend using) pre-allocated huge pages instead of THP.
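To see whether THP is active on a node, you can inspect the kernel setting (this path is standard on RHEL-family kernels; the bracketed value is the current mode):

```
$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
```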
In OKD, applications in a pod can allocate and consume pre-allocated huge pages. This topic describes how.
A node must pre-allocate huge pages in order to report its huge page capacity. A node can pre-allocate huge pages for only a single size.
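The pre-allocation itself is a standard Linux mechanism rather than anything OKD-specific. As a sketch, on a node whose default huge page size is 2Mi, pages can typically be reserved at runtime through the `vm.nr_hugepages` sysctl (1Gi pages usually must be reserved at boot time instead); the quantity below is illustrative:

```
# sysctl -w vm.nr_hugepages=512    # reserve 512 x 2Mi pages (1Gi total)
# grep HugePages_Total /proc/meminfo
HugePages_Total:     512
```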
HugePages must be set to true across the system:
```
# cat /etc/origin/node/node-config.yaml
...
kubeletArguments:
  ...
  feature-gates:
  - HugePages=true

# systemctl restart atomic-openshift-node
```
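After the node restarts and pages have been pre-allocated, the node reports its huge page capacity as a schedulable resource. You can check this with `oc describe node` (the node name and quantity below are illustrative, and the output is abbreviated):

```
$ oc describe node <node_name> | grep hugepages-2Mi
 hugepages-2Mi:  1Gi
```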
The master API must be enabled for feature-gates with HugePages=true:
```
# cat /etc/origin/master/master-config.yaml
...
kubernetesMasterConfig:
  apiServerArguments:
    ...
    feature-gates:
    - HugePages=true

# systemctl restart atomic-openshift-master-api
```
Huge pages can be consumed via container-level resource requirements using the resource name hugepages-<size>, where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi. Unlike CPU or memory, huge pages do not support overcommitment.
```
apiVersion: v1
kind: Pod
metadata:
  generateName: hugepages-volume-
spec:
  containers:
  - securityContext:
      privileged: true
    image: rhel7:latest
    command:
    - sleep
    - inf
    name: example
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi (1)
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```
(1) Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OKD handles the math for you. As in the above example, you can specify 100MB directly.
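As a usage sketch (the file name is illustrative), you can create the pod and confirm that a hugetlbfs mount backs the volume; the mount output shown is representative:

```
$ oc create -f hugepages-pod.yaml
$ oc exec <pod_name> -- mount | grep hugepages
nodev on /hugepages type hugetlbfs (rw,relatime,pagesize=2M)
```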
Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size>. The <size> value must be specified in bytes with an optional scale suffix [kKmMgG]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter.
See Configuring Transparent Huge Pages for more information.
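For example, the following kernel command-line fragment (quantities are illustrative) makes 1Gi the default huge page size, reserves sixteen 1Gi pages, and additionally reserves 256 2Mi pages; note that each hugepagesz parameter must precede the hugepages count it applies to:

```
default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=256
```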
Huge page requests must equal the limits. This is the default if limits are specified but requests are not.
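For example, the explicit form below is equivalent to specifying only the limit, since the request then defaults to the same value:

```
resources:
  limits:
    hugepages-2Mi: 100Mi
  requests:
    hugepages-2Mi: 100Mi
```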
Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration.
EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request.
Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches /proc/sys/vm/hugetlb_shm_group.
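A sketch of wiring this up: set the kernel's allowed group on the node and run the pod with a matching supplemental group (the GID 27 below is purely illustrative):

```
# sysctl -w vm.hugetlb_shm_group=27    # on the node
```

```
spec:
  securityContext:
    supplementalGroups:
    - 27
```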