Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Unless otherwise indicated, these CRs form the common baseline used in all telco core use models.
You can extract the complete set of custom resources (CRs) for the telco core profile from the telco-core-rds-rhel9 container image. The image contains both the required and the optional CRs for the telco core profile.
Prerequisites: You have installed podman.
Extract the content from the telco-core-rds-rhel9 container image by running the following commands:
$ mkdir -p ./out
$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.17 | base64 -d | tar xv -C out
The out directory has the following folder structure. You can view the telco core CRs in the out/telco-core-rds/ directory.
out/
└── telco-core-rds
    ├── configuration
    │   └── reference-crs
    │       ├── optional
    │       │   ├── logging
    │       │   ├── networking
    │       │   │   └── multus
    │       │   │       └── tap_cni
    │       │   ├── other
    │       │   └── tuning
    │       └── required
    │           ├── networking
    │           │   ├── metallb
    │           │   ├── multinetworkpolicy
    │           │   └── sriov
    │           ├── other
    │           ├── performance
    │           ├── scheduling
    │           └── storage
    │               └── odf-external
    └── install
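After you replace the $-prefixed placeholders in the reference CRs with site-specific values, the CRs can be applied directly from the extracted tree. For example, assuming cluster-admin access with the oc client, the required networking CRs could be applied with:

$ oc apply -f out/telco-core-rds/configuration/reference-crs/required/networking/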
Component | Reference CR | Optional | New in this release |
---|---|---|---|
Baseline |  | Yes | No |
Baseline |  | Yes | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Load balancer |  | No | No |
Multus - Tap CNI for rootless DPDK pods |  | No | No |
NMState Operator |  | No | No |
NMState Operator |  | No | No |
NMState Operator |  | No | No |
NMState Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
SR-IOV Network Operator |  | No | No |
Component | Reference CR | Optional | New in this release |
---|---|---|---|
Additional kernel modules |  | Yes | No |
Additional kernel modules |  | Yes | No |
Additional kernel modules |  | Yes | No |
Container mount namespace hiding |  | No | Yes |
Container mount namespace hiding |  | No | Yes |
Kdump enable |  | No | Yes |
Kdump enable |  | No | Yes |
Component | Reference CR | Optional | New in this release |
---|---|---|---|
Cluster logging |  | Yes | No |
Cluster logging |  | Yes | No |
Cluster logging |  | Yes | No |
Cluster logging |  | Yes | Yes |
Cluster logging |  | Yes | Yes |
Cluster logging |  | Yes | Yes |
Cluster logging |  | Yes | No |
Disconnected configuration |  | No | No |
Disconnected configuration |  | No | No |
Disconnected configuration |  | No | No |
Monitoring and observability |  | Yes | No |
Power management |  | No | No |
Component | Reference CR | Optional | New in this release |
---|---|---|---|
System reserved capacity |  | Yes | No |
Component | Reference CR | Optional | New in this release |
---|---|---|---|
NUMA-aware scheduler |  | No | No |
NUMA-aware scheduler |  | No | No |
NUMA-aware scheduler |  | No | No |
NUMA-aware scheduler |  | No | No |
NUMA-aware scheduler |  | No | No |
NUMA-aware scheduler |  | No | No |
Component | Reference CR | Optional | New in this release |
---|---|---|---|
External ODF configuration |  | No | No |
External ODF configuration |  | No | No |
External ODF configuration |  | No | No |
External ODF configuration |  | No | No |
External ODF configuration |  | No | No |
# required
# count: 1
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true
  # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs
  additionalNetworks: [$additionalNetworks]
  # eg
  #- name: add-net-1
  #  namespace: app-ns-1
  #  rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "add-net-1", "plugins": [{"type": "macvlan", "master": "bond1", "ipam": {}}] }'
  #  type: Raw
  #- name: add-net-2
  #  namespace: app-ns-1
  #  rawCNIConfig: '{ "cniVersion": "0.4.0", "name": "add-net-2", "plugins": [ {"type": "macvlan", "master": "bond1", "mode": "private" },{ "type": "tuning", "name": "tuning-arp" }] }'
  #  type: Raw
  # Enable to use MultiNetworkPolicy CRs
  useMultiNetworkPolicy: true
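Setting useMultiNetworkPolicy: true enables NetworkPolicy-style rules on secondary networks. A minimal sketch of a MultiNetworkPolicy that denies all ingress on the hypothetical add-net-1 attachment from the comments above:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: deny-all-ingress # hypothetical name
  namespace: app-ns-1 # hypothetical namespace from the example above
  annotations:
    # binds the policy to the secondary network attachment
    k8s.v1.cni.cncf.io/policy-for: app-ns-1/add-net-1
spec:
  podSelector: {} # all pods attached to this network
  policyTypes:
    - Ingress
  ingress: []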
# optional
# copies: 0-N
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: $name
  namespace: $ns
spec:
  nodeSelector:
    kubernetes.io/hostname: $nodeName
  config: $config
#eg
#config: '{
#  "cniVersion": "0.3.1",
#  "name": "external-169",
#  "type": "vlan",
#  "master": "ens8f0",
#  "mode": "bridge",
#  "vlanid": 169,
#  "ipam": {
#    "type": "static"
#  }
#}'
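Pods attach to an additional network by naming the NetworkAttachmentDefinition in the k8s.v1.cni.cncf.io/networks annotation. A minimal sketch, reusing the hypothetical external-169 example above:

apiVersion: v1
kind: Pod
metadata:
  name: sample-app # hypothetical name
  namespace: app-ns-1 # hypothetical; the NAD is resolved in the pod namespace unless prefixed with <namespace>/
  annotations:
    k8s.v1.cni.cncf.io/networks: external-169
spec:
  containers:
    - name: app
      image: registry.example.com/sample:latest # hypothetical image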
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: $name # eg addresspool3
  namespace: metallb-system
spec:
  ##############
  # Expected variation in this configuration
  addresses: [$pools]
  #- 3.3.3.0/24
  autoAssign: true
  ##############
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: BFDProfile
metadata:
  name: $name # e.g. bfdprofile
  namespace: metallb-system
spec:
  ################
  # These values may vary. Recommended values are included as default
  receiveInterval: 150 # default 300ms
  transmitInterval: 150 # default 300ms
  #echoInterval: 300 # default 50ms
  detectMultiplier: 10 # default 3
  echoMode: true
  passiveMode: true
  minimumTtl: 5 # default 254
  #
  ################
# required
# count: 1-N
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: $name # eg bgpadvertisement-1
  namespace: metallb-system
spec:
  ipAddressPools: [$pool]
  # eg:
  # - addresspool3
  peers: [$peers]
  # eg:
  # - peer-one
  #
  communities: [$communities]
  # Note correlation with address pool, or Community
  # eg:
  # - bgpcommunity
  # - 65535:65282
  aggregationLength: 32
  aggregationLengthV6: 128
  localPref: 100
# required
# count: 1-N
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: $name
  namespace: metallb-system
spec:
  peerAddress: $ip # eg 192.168.1.2
  peerASN: $peerasn # eg 64501
  myASN: $myasn # eg 64500
  routerID: $id # eg 10.10.10.10
  bfdProfile: $bfdprofile # e.g. bfdprofile
  passwordSecret: {}
---
apiVersion: metallb.io/v1beta1
kind: Community
metadata:
  name: $name # e.g. bgpcommunity
  namespace: metallb-system
spec:
  communities: [$comm]
# required
# count: 1
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec: {}
  #nodeSelector:
  #  node-role.kubernetes.io/worker: ""
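With the MetalLB instance running, a Service of type LoadBalancer is assigned an external IP from an IPAddressPool; the pool can be requested explicitly with the metallb.universe.tf/address-pool annotation. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Service
metadata:
  name: sample-lb # hypothetical name
  namespace: app-ns-1 # hypothetical namespace
  annotations:
    metallb.universe.tf/address-pool: addresspool3 # pool name from the example above
spec:
  type: LoadBalancer
  selector:
    app: sample-app # hypothetical label
  ports:
    - port: 80
      targetPort: 8080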
# required: yes
# count: 1
---
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-setsebool
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux boolean for tap cni plugin
            Before=kubelet.service

            [Service]
            Type=oneshot
            ExecStart=/sbin/setsebool container_use_devices=on
            RemainAfterExit=true

            [Install]
            WantedBy=multi-user.target graphical.target
          enabled: true
          name: setsebool.service
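After the MachineConfig rolls out, you can spot-check the boolean from a debug shell on a worker node (the node name is hypothetical):

$ oc debug node/worker-0 -- chroot /host getsebool container_use_devices
container_use_devices --> on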
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
spec: {}
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-nmstate
  annotations:
    workload.openshift.io/allowed: management
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-nmstate
  namespace: openshift-nmstate
spec:
  targetNamespaces:
    - openshift-nmstate
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubernetes-nmstate-operator
  namespace: openshift-nmstate
spec:
  channel: "stable"
  name: kubernetes-nmstate-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
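The NMState Operator applies declarative node network configuration through NodeNetworkConfigurationPolicy CRs. A minimal sketch, with hypothetical interface names, that would create the kind of bond referenced by the additional-network examples earlier:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond1-policy # hypothetical name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: bond1 # hypothetical bond interface
        type: bond
        state: up
        link-aggregation:
          mode: active-backup
          port:
            - ens8f0 # hypothetical member NIC
            - ens8f1 # hypothetical member NIC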
# optional (though expected for all)
# count: 0-N
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: $name # eg sriov-network-abcd
  namespace: openshift-sriov-network-operator
spec:
  capabilities: "$capabilities" # eg '{"mac": true, "ips": true}'
  ipam: "$ipam" # eg '{ "type": "host-local", "subnet": "10.3.38.0/24" }'
  networkNamespace: $nns # eg cni-test
  resourceName: $resource # eg resourceTest
# optional (though expected in all deployments)
# count: 0-N
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: $name
  namespace: openshift-sriov-network-operator
spec: {} # $spec
# eg
#deviceType: netdevice
#nicSelector:
#  deviceID: "1593"
#  pfNames:
#    - ens8f0np0#0-9
#  rootDevices:
#    - 0000:d8:00.0
#  vendor: "8086"
#nodeSelector:
#  kubernetes.io/hostname: host.sample.lab
#numVfs: 20
#priority: 99
#excludeTopology: true
#resourceName: resourceNameABCD
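Workload pods consume a SriovNetwork by name through the networks annotation; the SR-IOV network resources injector adds the corresponding device requests automatically. A minimal sketch, assuming the SriovNetwork above was created with networkNamespace: cni-test:

apiVersion: v1
kind: Pod
metadata:
  name: sriov-test-pod # hypothetical name
  namespace: cni-test
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-network-abcd # name from the example above
spec:
  containers:
    - name: app
      image: registry.example.com/dpdk-app:latest # hypothetical image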
# required
# count: 1
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovOperatorConfig
metadata:
  name: default
  namespace: openshift-sriov-network-operator
spec:
  configDaemonNodeSelector:
    node-role.kubernetes.io/worker: ""
  enableInjector: true
  enableOperatorWebhook: true
  disableDrain: false
  logLevel: 2
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sriov-network-operator-subscription
  namespace: openshift-sriov-network-operator
spec:
  channel: "stable"
  name: sriov-network-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
# required: yes
# count: 1
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sriov-network-operator
  annotations:
    workload.openshift.io/allowed: management
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sriov-network-operators
  namespace: openshift-sriov-network-operator
spec:
  targetNamespaces:
    - openshift-sriov-network-operator
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 40-load-kernel-modules-control-plane
spec:
  config:
    # Release info found in https://github.com/coreos/butane/releases
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:,
          mode: 420
          overwrite: true
          path: /etc/modprobe.d/kernel-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=
          mode: 420
          overwrite: true
          path: /etc/modules-load.d/kernel-load.conf
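The base64 payload written to /etc/modules-load.d/kernel-load.conf is just a newline-separated module list; you can inspect it locally:

$ echo "aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=" | base64 -d
ip_gre
ip6_tables
ip6t_REJECT
ip6table_filter
ip6table_mangle
iptable_filter
iptable_mangle
iptable_nat
xt_multiport
xt_owner
xt_REDIRECT
xt_statistic
xt_TCPMSS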
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: load-sctp-module
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
        - contents:
            source: data:,
            verification: {}
          filesystem: root
          mode: 420
          path: /etc/modprobe.d/sctp-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,c2N0cA==
          filesystem: root
          mode: 420
          path: /etc/modules-load.d/sctp-load.conf
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 40-load-kernel-modules-worker
spec:
  config:
    # Release info found in https://github.com/coreos/butane/releases
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:,
          mode: 420
          overwrite: true
          path: /etc/modprobe.d/kernel-blacklist.conf
        - contents:
            source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo=
          mode: 420
          overwrite: true
          path: /etc/modules-load.d/kernel-load.conf
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-kubens-master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kubens.service
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-kubens-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kubens.service
# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 06-kdump-enable-master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kdump.service
  kernelArguments:
    - crashkernel=512M
# Automatically generated by extra-manifests-builder
# Do not make changes directly.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 06-kdump-enable-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: kdump.service
  kernelArguments:
    - crashkernel=512M
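Once the pools finish updating, you can confirm that kdump is active on a node (the node name is hypothetical):

$ oc debug node/worker-0 -- chroot /host systemctl is-active kdump
active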
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  # outputs: $outputs
  # pipelines: $pipelines
  serviceAccount:
    name: collector
#apiVersion: "observability.openshift.io/v1"
#kind: ClusterLogForwarder
#metadata:
#  name: instance
#  namespace: openshift-logging
#spec:
#  outputs:
#    - type: "kafka"
#      name: kafka-open
#      # below url is an example
#      kafka:
#        url: tcp://10.11.12.13:9092/test
#  filters:
#    - name: test-labels
#      type: openshiftLabels
#      openshiftLabels:
#        label1: test1
#        label2: test2
#        label3: test3
#        label4: test4
#  pipelines:
#    - name: all-to-default
#      inputRefs:
#        - audit
#        - infrastructure
#      filterRefs:
#        - test-labels
#      outputRefs:
#        - kafka-open
#  serviceAccount:
#    name: collector
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    workload.openshift.io/allowed: management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: collector
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logcollector-audit-logs-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-audit-logs
subjects:
  - kind: ServiceAccount
    name: collector
    namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logcollector-infrastructure-logs-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-infrastructure-logs
subjects:
  - kind: ServiceAccount
    name: collector
    namespace: openshift-logging
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable-6.0"
  name: cluster-logging
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
# required
# count: 1..N
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators-disconnected
  namespace: openshift-marketplace
spec:
  displayName: Red Hat Disconnected Operators Catalog
  image: $imageUrl
  publisher: Red Hat
  sourceType: grpc
  # updateStrategy:
  #   registryPoll:
  #     interval: 1h
status:
  connectionState:
    lastObservedState: READY
# required
# count: 1
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: disconnected-internal-icsp
spec:
  repositoryDigestMirrors: []
  # - $mirrors
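Each repositoryDigestMirrors entry maps a source repository to one or more mirrors in the disconnected registry. A minimal sketch with a hypothetical mirror registry:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: disconnected-internal-icsp
spec:
  repositoryDigestMirrors:
    - source: registry.redhat.io
      mirrors:
        - mirror.registry.example.com:8443/redhat # hypothetical mirror registry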
# required
# count: 1
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
# optional
# count: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 100Gi
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: ocs-external-storagecluster-ceph-rbd
          resources:
            requests:
              storage: 20Gi
# required
# count: 1
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: $name
  annotations:
    # Some pods want the kernel stack to ignore IPv6 router advertisements.
    kubeletconfig.experimental: |
      {"allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"]}
spec:
  cpu:
    # node0 CPUs: 0-17,36-53
    # node1 CPUs: 18-35,54-71
    # siblings: (0,36), (1,37)...
    # we want to reserve the first Core of each NUMA socket
    #
    # no CPU left behind! all-cpus == isolated + reserved
    isolated: $isolated # eg 1-17,19-35,37-53,55-71
    reserved: $reserved # eg 0,18,36,54
  # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod.
  # default value of globallyDisableIrqLoadBalancing is false
  globallyDisableIrqLoadBalancing: false
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      # 32GB per numa node
      - count: $count # eg 64
        size: 1G
  #machineConfigPoolSelector:
  #  pools.operator.machineconfiguration.openshift.io/worker: ''
  nodeSelector: {}
  #  node-role.kubernetes.io/worker: ""
  workloadHints:
    realTime: false
    highPowerConsumption: false
    perPodPowerManagement: true
  realTimeKernel:
    enabled: false
  numa:
    # All guaranteed QoS containers get resources from a single NUMA node
    topologyPolicy: "single-numa-node"
  net:
    userLevelNetworking: false
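Because globallyDisableIrqLoadBalancing is false, IRQ load balancing is opted out per pod: a Guaranteed QoS pod can exclude its pinned CPUs from IRQ balancing through CRI-O annotations, running under the runtime class that the PerformanceProfile creates. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app # hypothetical name
  annotations:
    irq-load-balancing.crio.io: "disable"
    cpu-quota.crio.io: "disable"
spec:
  runtimeClassName: performance-$name # runtime class derived from the PerformanceProfile name
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
      resources:
        # equal integer CPU requests and limits give the pod Guaranteed QoS and exclusive CPUs
        requests:
          cpu: "4"
          memory: 4Gi
        limits:
          cpu: "4"
          memory: 4Gi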
# optional
# count: 1
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: autosizing-master
spec:
  autoSizingReserved: true
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
# Optional
# count: 1
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesOperator
metadata:
  name: numaresourcesoperator
spec:
  nodeGroups: []
  #- config:
  #    # Periodic is the default setting
  #    infoRefreshMode: Periodic
  #  machineConfigPoolSelector:
  #    matchLabels:
  #      # This label must match the pool(s) you want to run NUMA-aligned workloads
  #      pools.operator.machineconfiguration.openshift.io/worker: ""
# required
# count: 1
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: numaresources-operator
  namespace: openshift-numaresources
spec:
  channel: "4.17"
  name: numaresources-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
status:
  state: AtLatestKnown
# required: yes
# count: 1
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-numaresources
  annotations:
    workload.openshift.io/allowed: management
# required: yes
# count: 1
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: numaresources-operator
  namespace: openshift-numaresources
spec:
  targetNamespaces:
    - openshift-numaresources
# Optional
# count: 1
apiVersion: nodetopology.openshift.io/v1
kind: NUMAResourcesScheduler
metadata:
  name: numaresourcesscheduler
spec:
  #cacheResyncPeriod: "0"
  # Image spec should be the latest for the release
  imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.17.0"
  #logLevel: "Trace"
  schedulerName: topo-aware-scheduler
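Workloads that need NUMA-aligned placement opt into the secondary scheduler by name. A minimal sketch of a Guaranteed QoS pod using it, with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: numa-aligned-app # hypothetical name
spec:
  schedulerName: topo-aware-scheduler # matches the NUMAResourcesScheduler above
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
      resources:
        requests:
          cpu: "8"
          memory: 8Gi
        limits:
          cpu: "8"
          memory: 8Gi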
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  # non-schedulable control plane is the default. This ensures
  # compliance.
  mastersSchedulable: false
  policy:
    name: ""
# required
# count: 1
---
apiVersion: v1
kind: Secret
metadata:
  name: rook-ceph-external-cluster-details
  namespace: openshift-storage
type: Opaque
data:
  # encoded content has been made generic
  external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==
# required
# count: 1
---
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster
  namespace: openshift-storage
spec:
  externalStorage:
    enable: true
  labelSelector: {}
status:
  phase: Ready
# required: yes
# count: 1
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  annotations:
    workload.openshift.io/allowed: management
  labels:
    openshift.io/cluster-monitoring: "true"
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
# required: yes
# count: 1
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.14"
  name: odf-operator
  source: redhat-operators-disconnected
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
status:
  state: AtLatestKnown
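Applications consume the external Ceph storage through the storage classes created for the external StorageCluster, such as ocs-external-storagecluster-ceph-rbd, which the monitoring configuration above also uses. A minimal sketch with hypothetical names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-data # hypothetical name
  namespace: app-ns-1 # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-external-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 10Gi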