Core reference design configuration CRs

Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Unless otherwise indicated, the CRs form the common baseline that is used in all of the specific use models.

Extracting the telco core reference design configuration CRs

You can extract the complete set of custom resources (CRs) for the telco core profile from the telco-core-rds-rhel9 container image. The container image contains both the required and the optional CRs for the telco core profile.

Prerequisites
  • You have installed podman.

Procedure
  • Extract the content from the telco-core-rds-rhel9 container image by running the following commands:

    $ mkdir -p ./out
    $ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.16 | base64 -d | tar xv -C out
Verification
  • The out directory has the following folder structure. You can view the telco core CRs in the out/telco-core-rds/ directory.

    Example output
    out/
    └── telco-core-rds
        ├── configuration
        │   └── reference-crs
        │       ├── optional
        │       │   ├── logging
        │       │   ├── networking
        │       │   │   └── multus
        │       │   │       └── tap_cni
        │       │   ├── other
        │       │   └── tuning
        │       └── required
        │           ├── networking
        │           │   ├── metallb
        │           │   ├── multinetworkpolicy
        │           │   └── sriov
        │           ├── other
        │           ├── performance
        │           ├── scheduling
        │           └── storage
        │               └── odf-external
        └── install
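
After extraction, you can apply the CRs to a cluster with the OpenShift CLI, either one file at a time or recursively per directory. A minimal sketch, assuming that you have replaced the $-prefixed placeholders in the extracted files with site-specific values, that you are logged in as a cluster administrator, and that metallb.yaml resides under the metallb folder shown above:

    $ oc apply -f out/telco-core-rds/configuration/reference-crs/required/networking/metallb/metallb.yaml
    $ oc apply -R -f out/telco-core-rds/configuration/reference-crs/required/networking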

Resource Tuning reference CRs

Table 1. Resource Tuning CRs

Component                | Reference CR                       | Optional | New in this release
-------------------------|------------------------------------|----------|--------------------
System reserved capacity | control-plane-system-reserved.yaml | Yes      | No

Storage reference CRs

Table 2. Storage CRs

Component                  | Reference CR                                      | Optional | New in this release
---------------------------|---------------------------------------------------|----------|--------------------
External ODF configuration | 01-rook-ceph-external-cluster-details.secret.yaml | No       | No
External ODF configuration | 02-ocs-external-storagecluster.yaml               | No       | No
External ODF configuration | odfNS.yaml                                        | No       | No
External ODF configuration | odfOperGroup.yaml                                 | No       | No
External ODF configuration | odfSubscription.yaml                              | No       | No

Networking reference CRs

Table 3. Networking CRs

Component                               | Reference CR                     | Optional | New in this release
----------------------------------------|----------------------------------|----------|--------------------
Baseline                                | Network.yaml                     | No       | No
Baseline                                | networkAttachmentDefinition.yaml | Yes      | Yes
Load balancer                           | addr-pool.yaml                   | No       | No
Load balancer                           | bfd-profile.yaml                 | No       | No
Load balancer                           | bgp-advr.yaml                    | No       | No
Load balancer                           | bgp-peer.yaml                    | No       | No
Load balancer                           | community.yaml                   | No       | Yes
Load balancer                           | metallb.yaml                     | No       | No
Load balancer                           | metallbNS.yaml                   | Yes      | No
Load balancer                           | metallbOperGroup.yaml            | Yes      | No
Load balancer                           | metallbSubscription.yaml         | No       | No
Multus - Tap CNI for rootless DPDK pod  | mc_rootless_pods_selinux.yaml    | No       | No
NMState Operator                        | NMState.yaml                     | No       | Yes
NMState Operator                        | NMStateNS.yaml                   | No       | Yes
NMState Operator                        | NMStateOperGroup.yaml            | No       | Yes
NMState Operator                        | NMStateSubscription.yaml         | No       | Yes
SR-IOV Network Operator                 | sriovNetwork.yaml                | Yes      | No
SR-IOV Network Operator                 | sriovNetworkNodePolicy.yaml      | No       | No
SR-IOV Network Operator                 | SriovOperatorConfig.yaml         | No       | No
SR-IOV Network Operator                 | SriovSubscription.yaml           | No       | No
SR-IOV Network Operator                 | SriovSubscriptionNS.yaml         | No       | No
SR-IOV Network Operator                 | SriovSubscriptionOperGroup.yaml  | No       | No

Scheduling reference CRs

Table 4. Scheduling CRs

Component            | Reference CR                   | Optional | New in this release
---------------------|--------------------------------|----------|--------------------
NUMA-aware scheduler | nrop.yaml                      | No       | No
NUMA-aware scheduler | NROPSubscription.yaml          | No       | No
NUMA-aware scheduler | NROPSubscriptionNS.yaml        | No       | No
NUMA-aware scheduler | NROPSubscriptionOperGroup.yaml | No       | No
NUMA-aware scheduler | sched.yaml                     | No       | No
NUMA-aware scheduler | Scheduler.yaml                 | No       | Yes

Other reference CRs

Table 5. Other CRs

Component                    | Reference CR                           | Optional | New in this release
-----------------------------|----------------------------------------|----------|--------------------
Additional kernel modules    | control-plane-load-kernel-modules.yaml | Yes      | No
Additional kernel modules    | sctp_module_mc.yaml                    | Yes      | No
Additional kernel modules    | worker-load-kernel-modules.yaml        | Yes      | No
Cluster logging              | ClusterLogForwarder.yaml               | No       | No
Cluster logging              | ClusterLogging.yaml                    | No       | No
Cluster logging              | ClusterLogNS.yaml                      | No       | No
Cluster logging              | ClusterLogOperGroup.yaml               | No       | No
Cluster logging              | ClusterLogSubscription.yaml            | No       | No
Disconnected configuration   | catalog-source.yaml                    | No       | No
Disconnected configuration   | icsp.yaml                              | No       | No
Disconnected configuration   | operator-hub.yaml                      | No       | No
Monitoring and observability | monitoring-config-cm.yaml              | Yes      | No
Power management             | PerformanceProfile.yaml                | No       | No

YAML reference

Resource Tuning reference YAML

control-plane-system-reserved.yaml
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: autosizing-master
spec:
  autoSizingReserved: true
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""

Storage reference YAML

01-rook-ceph-external-cluster-details.secret.yaml
# required
# count: 1
---
apiVersion: v1
kind: Secret
metadata:
  name: rook-ceph-external-cluster-details
  namespace: openshift-storage
type: Opaque
data:
  # encoded content has been made generic
  external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==
02-ocs-external-storagecluster.yaml
# required
# count: 1
---
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster
  namespace: openshift-storage
spec:
  externalStorage:
    enable: true
  labelSelector: {}
status:
  phase: Ready
odfNS.yaml
# required: yes
# count: 1
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
odfOperGroup.yaml
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
odfSubscription.yaml
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.14"
  name: odf-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
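
After you apply a Subscription CR such as the one above, you can confirm that the Operator installed successfully before applying the CRs that depend on it. A minimal check, assuming the namespaces used in these reference CRs; the same pattern applies to the other Subscription CRs in this reference configuration:

    $ oc get csv -n openshift-storage   # the CSV phase should reach Succeeded
    $ oc get subscription odf-operator -n openshift-storage -o jsonpath='{.status.state}'   # expect AtLatestKnown, matching the reference status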

Networking reference YAML

Network.yaml
# required
# count: 1
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true
  # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs
  additionalNetworks: [$additionalNetworks]
  # eg
  #- name: add-net-1
  #  namespace: app-ns-1
  #  rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "add-net-1", "plugins": [{"type": "macvlan", "master": "bond1", "ipam": {}}] }'
  #  type: Raw
  #- name: add-net-2
  #  namespace: app-ns-1
  #  rawCNIConfig: '{ "cniVersion": "0.4.0", "name": "add-net-2", "plugins": [ {"type": "macvlan", "master": "bond1", "mode": "private" },{ "type": "tuning", "name": "tuning-arp" }] }'
  #  type: Raw

  # Enable to use MultiNetworkPolicy CRs
  useMultiNetworkPolicy: true
networkAttachmentDefinition.yaml
# optional
# copies: 0-N
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: $name
  namespace: $ns
spec:
  nodeSelector:
    kubernetes.io/hostname: $nodeName
  config: $config
  #eg
  #config: '{
  #  "cniVersion": "0.3.1",
  #  "name": "external-169",
  #  "type": "vlan",
  #  "master": "ens8f0",
  #  "mode": "bridge",
  #  "vlanid": 169,
  #  "ipam": {
  #    "type": "static",
  #  }
  #}'
addr-pool.yaml
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: $name # eg addresspool3
  namespace: metallb-system
  annotations:
    metallb.universe.tf/address-pool: $name # eg addresspool3
spec:
  ##############
  # Expected variation in this configuration
  addresses: [$pools]
  #- 3.3.3.0/24
  autoAssign: true
  ##############
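
For illustration, a filled-in IPAddressPool that uses the example values from the comments above (the pool name and address range are placeholders, not recommendations):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: addresspool3
      namespace: metallb-system
    spec:
      addresses:
        - 3.3.3.0/24
      autoAssign: true
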
bfd-profile.yaml
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: BFDProfile
metadata:
  name: bfdprofile
  namespace: metallb-system
spec:
  ################
  # These values may vary. Recommended values are included as default
  receiveInterval: 150 # default 300ms
  transmitInterval: 150 # default 300ms
  #echoInterval: 300 # default 50ms
  detectMultiplier: 10 # default 3
  echoMode: true
  passiveMode: true
  minimumTtl: 5 # default 254
  #
  ################
bgp-advr.yaml
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: $name # eg bgpadvertisement-1
  namespace: metallb-system
spec:
  ipAddressPools: [$pool]
  # eg:
  #  - addresspool3
  peers: [$peers]
  # eg:
  #  - peer-one
  communities: [$communities]
  # Note correlation with address pool, or Community
  # eg:
  #  - bgpcommunity
  #  - 65535:65282
  aggregationLength: 32
  aggregationLengthV6: 128
  localPref: 100
bgp-peer.yaml
# required
# count: 1-N
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: $name
  namespace: metallb-system
spec:
  peerAddress: $ip # eg 192.168.1.2
  peerASN: $peerasn # eg 64501
  myASN: $myasn # eg 64500
  routerID: $id # eg 10.10.10.10
  bfdProfile: bfdprofile
  passwordSecret: {}
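
For illustration, a filled-in BGPPeer that uses the example values from the comments above (the addresses, ASNs, and router ID are placeholders for site-specific values):

    apiVersion: metallb.io/v1beta2
    kind: BGPPeer
    metadata:
      name: peer-one
      namespace: metallb-system
    spec:
      peerAddress: 192.168.1.2
      peerASN: 64501
      myASN: 64500
      routerID: 10.10.10.10
      bfdProfile: bfdprofile
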
community.yaml
---
apiVersion: metallb.io/v1beta1
kind: Community
metadata:
  name: bgpcommunity
  namespace: metallb-system
spec:
  communities: [$comm]
metallb.yaml
# required
# count: 1
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
metallbNS.yaml
# required: yes
# count: 1
---
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
metallbOperGroup.yaml
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
metallbSubscription.yaml
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
mc_rootless_pods_selinux.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-setsebool
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux boolean for tap cni plugin
            Before=kubelet.service

            [Service]
            Type=oneshot
            ExecStart=/sbin/setsebool container_use_devices=on
            RemainAfterExit=true

            [Install]
            WantedBy=multi-user.target graphical.target
          enabled: true
          name: setsebool.service
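
This MachineConfig sets the container_use_devices SELinux boolean at boot so that rootless DPDK pods can use the tap CNI plugin. One way to verify the boolean on a node, sketched here with oc debug (the node name is illustrative):

    $ oc debug node/worker-0 -- chroot /host getsebool container_use_devices
    container_use_devices --> on
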
NMState.yaml
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
spec: {}
NMStateNS.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
  annotations:
    workload.openshift.io/allowed: management
NMStateOperGroup.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
    - openshift-nmstate
NMStateSubscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: "stable"
  name: kubernetes-nmstate-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
sriovNetwork.yaml
# optional (though expected for all)
# count: 0-N
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: $name # eg sriov-network-abcd
  namespace: openshift-sriov-network-operator
spec:
  capabilities: "$capabilities" # eg '{"mac": true, "ips": true}'
  ipam: "$ipam" # eg '{ "type": "host-local", "subnet": "10.3.38.0/24" }'
  networkNamespace: $nns # eg cni-test
  resourceName: $resource # eg resourceTest
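
For illustration, a filled-in SriovNetwork that uses the example values from the comments above (the target namespace, subnet, and resource name are placeholders for site-specific values):

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: sriov-network-abcd
      namespace: openshift-sriov-network-operator
    spec:
      capabilities: '{"mac": true, "ips": true}'
      ipam: '{ "type": "host-local", "subnet": "10.3.38.0/24" }'
      networkNamespace: cni-test
      resourceName: resourceTest
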
sriovNetworkNodePolicy.yaml
# optional (though expected in all deployments)
# count: 0-N
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: $name
  namespace: openshift-sriov-network-operator
spec: {} # $spec
# eg
#deviceType: netdevice
#nicSelector:
#  deviceID: "1593"
#  pfNames:
#  - ens8f0np0#0-9
#  rootDevices:
#  - 0000:d8:00.0
#  vendor: "8086"
#nodeSelector:
#  kubernetes.io/hostname: host.sample.lab
#numVfs: 20
#priority: 99
#excludeTopology: true
#resourceName: resourceNameABCD
SriovOperatorConfig.yaml
# required
# count: 1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/worker: ""
  enableInjector: true
  enableOperatorWebhook: true
  disableDrain: false
  logLevel: 2
SriovSubscription.yaml
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "stable"
  name: sriov-network-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
SriovSubscriptionNS.yaml
# required: yes
# count: 1
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
  annotations:
    workload.openshift.io/allowed: management
SriovSubscriptionOperGroup.yaml
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
    - openshift-sriov-network-operator

Scheduling reference YAML

nrop.yaml
# Optional
# count: 1
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator
spec:
  nodeGroups:
    - config:
        # Periodic is the default setting
        infoRefreshMode: Periodic
      machineConfigPoolSelector:
        matchLabels:
          # This label must match the pool(s) on which you want to run NUMA-aligned workloads
          pools.operator.machineconfiguration.openshift.io/worker: ""
NROPSubscription.yaml
# required
# count: 1
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: numaresources-operator
  namespace: openshift-numaresources
spec:
  channel: "4.14"
  name: numaresources-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
NROPSubscriptionNS.yaml
# required: yes
# count: 1
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-numaresources
  annotations:
    workload.openshift.io/allowed: management
NROPSubscriptionOperGroup.yaml
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: numaresources-operator
  namespace: openshift-numaresources
spec:
  targetNamespaces:
    - openshift-numaresources
sched.yaml
# Optional
# count: 1
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesScheduler
metadata:
  name: numaresourcesscheduler
spec:
  #cacheResyncPeriod: "0"
  # Image spec should be the latest for the release
  imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0"
  #logLevel: "Trace"
  schedulerName: topo-aware-scheduler
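
Workloads opt in to the secondary scheduler by setting spec.schedulerName to the name configured above. A minimal sketch of a pod (names and sizes are illustrative) that is placed by topo-aware-scheduler; the container uses guaranteed QoS, with requests equal to limits, so that its resources can be NUMA-aligned:

    apiVersion: v1
    kind: Pod
    metadata:
      name: numa-aligned-app
      namespace: app-ns-1
    spec:
      schedulerName: topo-aware-scheduler
      containers:
        - name: app
          image: registry.example.com/app:latest
          resources:
            requests:
              cpu: "4"
              memory: 4Gi
            limits:
              cpu: "4"
              memory: 4Gi
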
Scheduler.yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # non-schedulable control plane is the default. This ensures
  # compliance.
  mastersSchedulable: false
  policy:
    name: ""

Other reference YAML

control-plane-load-kernel-modules.yaml
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 40-load-kernel-modules-control-plane
spec:
  config:
    # Release info found in https://github.com/coreos/butane/releases
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:,
          mode: 420
          overwrite: true
          path: /etc/modprobe.d/kernel-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=
          mode: 420
          overwrite: true
          path: /etc/modules-load.d/kernel-load.conf
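
The first file in this MachineConfig intentionally writes an empty /etc/modprobe.d/kernel-blacklist.conf (the data:, URL encodes empty content); the second writes the module list to /etc/modules-load.d/kernel-load.conf. The base64 payload is a plain newline-separated list of modules, which you can inspect locally before applying the CR:

    $ echo 'aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=' | base64 -d
    ip_gre
    ip6_tables
    ip6t_REJECT
    ip6table_filter
    ip6table_mangle
    iptable_filter
    iptable_mangle
    iptable_nat
    xt_multiport
    xt_owner
    xt_REDIRECT
    xt_statistic
    xt_TCPMSS
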
sctp_module_mc.yaml
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,c2N0cA==
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf
worker-load-kernel-modules.yaml
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 40-load-kernel-modules-worker
spec:
  config:
    # Release info found in https://github.com/coreos/butane/releases
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:,
          mode: 420
          overwrite: true
          path: /etc/modprobe.d/kernel-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=
          mode: 420
          overwrite: true
          path: /etc/modules-load.d/kernel-load.conf
ClusterLogForwarder.yaml
# required
# count: 1
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - type: "kafka"
      name: kafka-open
      url: tcp://10.11.12.13:9092/test
  pipelines:
    - inputRefs:
        - infrastructure
        #- application
        - audit
      labels:
        label1: test1
        label2: test2
        label3: test3
        label4: test4
        label5: test5
      name: all-to-default
      outputRefs:
        - kafka-open
ClusterLogging.yaml
# required
# count: 1
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector
  managementState: Managed
ClusterLogNS.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    workload.openshift.io/allowed: management
ClusterLogOperGroup.yaml
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
ClusterLogSubscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"
  name: cluster-logging
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
catalog-source.yaml
# required
# count: 1..N
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators-disconnected
  namespace: openshift-marketplace
spec:
  displayName: Red Hat Disconnected Operators Catalog
  image: $imageUrl
  publisher: Red Hat
  sourceType: grpc
#  updateStrategy:
#    registryPoll:
#      interval: 1h
status:
  connectionState:
    lastObservedState: READY
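
After you create the catalog source with a valid mirror registry image in place of $imageUrl, you can confirm that it reaches the READY state shown in the reference status:

    $ oc get catalogsource redhat-operators-disconnected -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}'
    READY
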
icsp.yaml
# required
# count: 1
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: disconnected-internal-icsp
spec:
  repositoryDigestMirrors: []
#    - $mirrors
operator-hub.yaml
# required
# count: 1
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
monitoring-config-cm.yaml
# optional
# count: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 100Gi
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 20Gi
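
Because this config map adds volumeClaimTemplate stanzas backed by the ocs-external-storagecluster-ceph-rbd storage class, the monitoring stack is redeployed with persistent Prometheus and Alertmanager storage. A quick check that the resulting claims are created and bound:

    $ oc get pvc -n openshift-monitoring
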
PerformanceProfile.yaml
# required
# count: 1
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: $name
  annotations:
    # Some pods require the kernel stack to ignore IPv6 router advertisements.
    kubeletconfig.experimental: |
      {"allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"]}
spec:
  cpu:
    # node0 CPUs: 0-17,36-53
    # node1 CPUs: 18-35,54-71
    # siblings: (0,36), (1,37)...
    # we want to reserve the first core of each NUMA socket
    #
    # no CPU left behind! all-cpus == isolated + reserved
    isolated: $isolated # eg 1-17,19-35,37-53,55-71
    reserved: $reserved # eg 0,18,36,54
  # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod.
  # default value of globallyDisableIrqLoadBalancing is false
  globallyDisableIrqLoadBalancing: false
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      # 32GB per numa node
      - count: $count # eg 64
        size: 1G
  machineConfigPoolSelector:
    # For SNO: machineconfiguration.openshift.io/role: 'master'
    pools.operator.machineconfiguration.openshift.io/worker: ''
  nodeSelector:
    # For SNO: node-role.kubernetes.io/master: ""
    node-role.kubernetes.io/worker: ""
  workloadHints:
    realTime: false
    highPowerConsumption: false
    perPodPowerManagement: true
  realTimeKernel:
    enabled: false
  numa:
    # All guaranteed QoS containers get resources from a single NUMA node
    topologyPolicy: "single-numa-node"
  net:
    userLevelNetworking: false
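
Pods consume the isolated CPUs and 1G huge pages that this profile configures through ordinary Kubernetes resource requests. A minimal sketch (names and sizes are illustrative); the container must have guaranteed QoS, with requests equal to limits, for its CPUs to be pinned to the isolated set:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-app
      namespace: app-ns-1
    spec:
      containers:
        - name: app
          image: registry.example.com/dpdk-app:latest
          resources:
            requests:
              cpu: "8"
              memory: 2Gi
              hugepages-1Gi: 8Gi
            limits:
              cpu: "8"
              memory: 2Gi
              hugepages-1Gi: 8Gi
          volumeMounts:
            - mountPath: /dev/hugepages
              name: hugepages
      volumes:
        - name: hugepages
          emptyDir:
            medium: HugePages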