Telco core reference design overview

The telco core reference design specification (RDS) configures an OpenShift Container Platform cluster running on commodity hardware to host telco core workloads.

OpenShift Container Platform 4.14 features for telco core

The telco core reference design specification (RDS) leverages the following features, which were added or updated in OpenShift Container Platform 4.14.

Table 1. New features for telco core in OpenShift Container Platform 4.14

Support for running rootless Data Plane Development Kit (DPDK) workloads with kernel access by using the TAP CNI plugin

DPDK applications that inject traffic into the kernel can run in non-privileged pods with the help of the TAP CNI plugin.
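As an illustration only, the following is a minimal NetworkAttachmentDefinition sketch that attaches a TAP interface to a pod. The network name, namespace, and the multiQueue and selinuxcontext settings shown here are assumptions to adapt to your environment.

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: tap-network          # hypothetical network name
      namespace: example-dpdk    # hypothetical namespace
    spec:
      config: |
        {
          "cniVersion": "0.4.0",
          "name": "tap-network",
          "type": "tap",
          "multiQueue": true,
          "selinuxcontext": "system_u:system_r:container_t:s0"
        }

A pod can then request the interface through the k8s.v1.cni.cncf.io/networks annotation, and the DPDK application in the pod can inject traffic into the kernel over the resulting TAP device without a privileged security context.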

Dynamic use of non-reserved CPUs for OVS

With this release, the Open vSwitch (OVS) networking stack can dynamically use non-reserved CPUs. The dynamic use of non-reserved CPUs occurs by default in performance-tuned clusters with a CPU manager policy set to static. The dynamic use of available, non-reserved CPUs maximizes compute resources for OVS and minimizes network latency for workloads during periods of high demand. OVS cannot use isolated CPUs assigned to containers in Guaranteed QoS pods. This separation avoids disruption to critical application workloads.
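For context, the following is a hedged PerformanceProfile sketch showing the reserved/isolated CPU split that performance-tuned clusters typically define; the profile name and CPU ranges are placeholders.

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: core-performance       # hypothetical profile name
    spec:
      cpu:
        reserved: "0-3"            # example: CPUs kept for housekeeping processes
        isolated: "4-63"           # example: CPUs available to latency-sensitive workloads
      nodeSelector:
        node-role.kubernetes.io/worker: ""

With a profile like this in place, the CPU manager policy is typically set to static, and OVS can dynamically use available CPUs from the non-reserved pool, except isolated CPUs that are assigned to containers in Guaranteed QoS pods.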

Enabling more control over the C-states for each pod

The PerformanceProfile supports perPodPowerManagement, which provides more control over the C-states for pods. Now, instead of disabling C-states completely, you can specify a maximum latency in microseconds for C-states. You configure this option in the cpu-c-states.crio.io annotation, which helps to optimize power savings for high-priority applications by enabling some of the shallower C-states instead of disabling them completely.
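The following is a hedged sketch of how this might look on a pod, assuming a performance profile with perPodPowerManagement enabled. The runtime class name, the latency value, and the exact annotation value syntax are assumptions to verify against the product documentation.

    apiVersion: v1
    kind: Pod
    metadata:
      name: latency-sensitive-app                     # hypothetical pod
      annotations:
        cpu-c-states.crio.io: "max_latency:10us"      # assumed value syntax: allow C-states with up to 10 microseconds exit latency
    spec:
      runtimeClassName: performance-core-performance  # assumed: runtime class created for the performance profile
      containers:
      - name: app
        image: registry.example.com/app:latest        # placeholder image
        resources:
          requests:
            cpu: "2"
            memory: "1Gi"
          limits:
            cpu: "2"
            memory: "1Gi"

The Guaranteed QoS resource settings matter here because annotation-based CPU tuning applies to containers with exclusively pinned CPUs.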

Exclude SR-IOV network topology for NUMA-aware scheduling

You can exclude advertising Non-Uniform Memory Access (NUMA) nodes for the SR-IOV network to the Topology Manager. By not advertising NUMA nodes for the SR-IOV network, you can permit more flexible SR-IOV network deployments during NUMA-aware pod scheduling.

For example, in some scenarios, you want flexibility for how a pod is deployed. By not providing a NUMA node hint to the Topology Manager for the pod’s SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. In previous OpenShift Container Platform releases, the Topology Manager attempted to place all resources on the same NUMA node.
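The following is a hedged SriovNetworkNodePolicy sketch with topology exclusion enabled; the policy name, resource name, NIC selector, and VF count are placeholders for illustration.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: sriov-numa-exclude            # hypothetical policy name
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: sriovExample          # placeholder resource name
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      numVfs: 8                           # placeholder VF count
      nicSelector:
        pfNames: ["ens1f0"]               # placeholder physical function
      deviceType: netdevice
      excludeTopology: true               # do not advertise the NIC's NUMA node to the Topology Manager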

Egress service resource to manage egress traffic for pods behind a load balancer (Technology Preview)

With this update, you can use an EgressService custom resource (CR) to manage egress traffic for pods behind a load balancer service.

You can use the EgressService CR to manage egress traffic in the following ways (see the example sketch after this list):

  • Assign the load balancer service’s IP address as the source IP address of egress traffic for pods behind the load balancer service.

  • Configure the egress traffic for pods behind a load balancer to a different network than the default node network.

For more information, see "Configuring an egress service".
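The following is a hedged EgressService sketch, assuming the CR is served by the k8s.ovn.org API group; the service name, namespace, node selector, and network value are placeholders.

    apiVersion: k8s.ovn.org/v1
    kind: EgressService
    metadata:
      name: example-lb-service            # assumed: matches the name of the LoadBalancer service
      namespace: example-namespace        # placeholder: same namespace as the LoadBalancer service
    spec:
      sourceIPBy: "LoadBalancerIP"        # use the load balancer ingress IP as the egress source IP
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker: ""   # placeholder: restrict which nodes handle the egress traffic
      network: "1"                        # placeholder: send egress traffic over a non-default network (routing table)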