Adding Operators to a cluster

Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OpenShift Container Platform cluster.

For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.

About Operator installation with OperatorHub

OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.

During installation, you must determine the following initial settings for the Operator:

Installation Mode

Choose All namespaces on the cluster (default) to have the Operator installed in all namespaces, or choose individual namespaces, if available, to install the Operator only in selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.

Update Channel

If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.

Approval Strategy

You can choose automatic or manual updates.

If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
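
If you want to review pending update requests from the CLI, you can list the install plans in the relevant namespace; with a Manual approval strategy, pending plans show APPROVED as false until you approve them. A minimal sketch, assuming the default openshift-operators namespace:

$ oc get installplans -n openshift-operators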

Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.

Prerequisites
  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure
  1. Navigate in the web console to the Operators → OperatorHub page.

  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.

  5. On the Install Operator page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace, where it watches and is made available to all namespaces in the cluster. This option is not always available.

      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator watches and is made available for use only in that single namespace.

    2. Select an Update Channel (if more than one is available).

    3. Select Automatic or Manual approval strategy, as described earlier.

  6. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.

      After approving on the Install Plan page, the subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.

  7. After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually appears. Its Status ultimately resolves to InstallSucceeded in the relevant namespace.

    For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs of any pods that are reporting issues on the Workloads → Pods page in the openshift-operators project (or the relevant namespace, if the A specific namespace… installation mode was selected) to troubleshoot further.

Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites
  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

  • Install the OpenShift CLI (oc) on your local system.

Procedure
  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace
    Example output
    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the catalog for your desired Operator.
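
    PackageManifest objects are typically labeled with the catalog source that provides them, so you can narrow the listing by catalog; a sketch, assuming the catalog label is present on your cluster:

    $ oc get packagemanifests -n openshift-marketplace -l catalog=community-operators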

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
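
    To extract only the default channel or the full channel list, you can also query the PackageManifest status directly with JSONPath; a sketch:

    $ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
    $ oc get packagemanifests <operator_name> -n openshift-marketplace -o jsonpath='{.status.channels[*].name}'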
  3. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

    • The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    • You can only have one Operator group per namespace. For more information, see "Operator groups".

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>

      Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group:

      • <operatorgroup_name>-admin

      • <operatorgroup_name>-edit

      • <operatorgroup_name>-view

      When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster.

    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
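
      Because only one Operator group is allowed per namespace, you can confirm the result of this step; a sketch:

      $ oc get operatorgroups -n <namespace>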
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription object
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <subscription_name>
      namespace: openshift-operators (1)
    spec:
      channel: <channel_name> (2)
      name: <operator_name> (3)
      source: redhat-operators (4)
      sourceNamespace: openshift-marketplace (5)
      config:
        env: (6)
        - name: ARGS
          value: "-v=10"
        envFrom: (7)
        - secretRef:
            name: license-secret
        volumes: (8)
        - name: <volume_name>
          configMap:
            name: <configmap_name>
        volumeMounts: (9)
        - mountPath: <directory_name>
          name: <volume_name>
        tolerations: (10)
        - operator: "Exists"
        resources: (11)
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        nodeSelector: (12)
          foo: bar
    1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
    2 Name of the channel to subscribe to.
    3 Name of the Operator to subscribe to.
    4 Name of the catalog source that provides the Operator.
    5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
    6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
    7 The envFrom parameter defines a list of sources to populate environment variables in the container.
    8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
    9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
    10 The tolerations parameter defines a list of tolerations for the pod created by OLM.
    11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
    12 The nodeSelector parameter defines a node selector for the pod created by OLM.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
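
    To verify the installation from the CLI, you can watch for the CSV in the target namespace; a minimal sketch, assuming the default AllNamespaces namespace. The PHASE column should eventually report Succeeded:

    $ oc get csv -n openshift-operators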

Installing a specific version of an Operator

You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.

Prerequisites
  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions

  • OpenShift CLI (oc) installed

Procedure
  1. Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.

    For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0:

    Subscription with a specific starting Operator version
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: quay-operator
      namespace: quay
    spec:
      channel: quay-v3.4
      installPlanApproval: Manual (1)
      name: quay-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: quay-operator.v3.4.0 (2)
    1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
    2 Set a specific version of an Operator CSV.
  2. Create the Subscription object:

    $ oc apply -f sub.yaml
  3. Manually approve the pending install plan to complete the Operator installation.
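
    You can find and approve the pending install plan from the CLI by setting spec.approved to true; a hedged sketch, where the install plan name is a placeholder:

    $ oc get installplans -n quay
    $ oc patch installplan <install_plan_name> -n quay --type merge --patch '{"spec":{"approved":true}}'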

Preparing for multiple instances of an Operator for multitenant clusters

As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see "Operators in multitenant clusters".

In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.

Prerequisites
  • All instances of the Operator you want to install must be the same version across a given cluster.

    For more information on this and other limitations, see "Operators in multitenant clusters".

Procedure
  1. Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. For example, if the tenant’s namespace is team1, you might create a team1-operator namespace:

    1. Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: team1-operator
    2. Create the namespace by running the following command:

      $ oc create -f team1-operator.yaml
  2. Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:

    1. Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: team1-operatorgroup
        namespace: team1-operator
      spec:
        targetNamespaces:
        - team1 (1)
      1 Define only the tenant’s namespace in the spec.targetNamespaces list.
    2. Create the Operator group by running the following command:

      $ oc create -f team1-operatorgroup.yaml
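
      Optionally, confirm that the Operator group resolved only the tenant namespace; the status.namespaces field should list just team1. A sketch:

      $ oc get operatorgroup team1-operatorgroup -n team1-operator -o jsonpath='{.status.namespaces}'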
Next steps
  • Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.

    After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.

Installing global Operators in custom namespaces

When installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see "Multitenancy and Operator colocation".

As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.

Procedure
  1. Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:

    1. Define a Namespace resource and save the YAML file, for example, global-operators.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: global-operators
    2. Create the namespace by running the following command:

      $ oc create -f global-operators.yaml
  2. Create a custom global Operator group, which is an Operator group that watches all namespaces:

    1. Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: global-operatorgroup
        namespace: global-operators

      The status.namespaces of a created global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.

    2. Create the Operator group by running the following command:

      $ oc create -f global-operatorgroup.yaml
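
      You can confirm the global selection described above; the status.namespaces field of the Operator group should contain a single empty string, so the output should resemble [""]. A sketch:

      $ oc get operatorgroup global-operatorgroup -n global-operators -o jsonpath='{.status.namespaces}'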
Next steps
  • Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.

    When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.

Pod placement of Operator workloads

By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.

Controlling pod placement of Operator and Operand workloads has the following prerequisites:

  1. Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.

  2. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.

  3. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration (see the sketch after this list).
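
The following is a minimal sketch of these prerequisites, assuming a hypothetical myoperator label and taint; the namespace annotations shown (openshift.io/node-selector for the default node selector and scheduler.alpha.kubernetes.io/defaultTolerations for the default toleration) use illustrative values:

$ oc label nodes <node_name> myoperator=true
$ oc adm taint nodes <node_name> myoperator=true:NoSchedule

apiVersion: v1
kind: Namespace
metadata:
  name: <project_name>
  annotations:
    openshift.io/node-selector: "myoperator=true"
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "key": "myoperator", "effect": "NoSchedule"}]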

At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:

For Operator pods

Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.

For Operand pods

Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.

Controlling where an Operator is installed

By default, when you install an Operator, OpenShift Container Platform randomly places the Operator pod on one of your worker nodes. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.

The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:

  • If an Operator requires a particular platform, such as amd64 or arm64

  • If an Operator requires a particular operating system, such as Linux or Windows

  • If you want Operators that work together scheduled on the same host or on hosts located on the same rack

  • If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues

You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity enables you to prevent a pod from being scheduled on the same node as pods with certain labels.

The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:

Node affinity example that places the Operator pod on a specific node
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: (1)
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - ip-10-0-163-94.us-west-2.compute.internal
#...
1 A node affinity that requires the Operator’s pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.
Node affinity example that places the Operator pod on a node with a specific platform
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      nodeAffinity: (1)
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
              - arm64
            - key: kubernetes.io/os
              operator: In
              values:
              - linux
#...
1 A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.
Pod affinity example that places the Operator pod on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAffinity: (1)
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - test
          topologyKey: kubernetes.io/hostname
#...
1 A pod affinity that places the Operator’s pod on a node that has pods with the app=test label.
Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-custom-metrics-autoscaler-operator
  namespace: openshift-keda
spec:
  name: my-package
  source: my-operators
  sourceNamespace: operator-registries
  config:
    affinity:
      podAntiAffinity: (1)
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: cpu
              operator: In
              values:
              - high
          topologyKey: kubernetes.io/hostname
#...
1 A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label.
Procedure

To control the placement of an Operator pod, complete the following steps:

  1. Install the Operator as usual.

  2. If needed, ensure that your nodes are labeled to properly respond to the affinity (see the example labeling command after step 3).

  3. Edit the Operator Subscription object to add an affinity:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-custom-metrics-autoscaler-operator
      namespace: openshift-keda
    spec:
      name: my-package
      source: my-operators
      sourceNamespace: operator-registries
      config:
        affinity: (1)
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - ip-10-0-185-229.ec2.internal
    #...
    1 Add a nodeAffinity, podAffinity, or podAntiAffinity stanza. The preceding examples show how to construct each type of affinity.
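
    If the nodes do not yet carry the label referenced by your affinity, you can add one directly; a sketch with placeholder values:

    $ oc label nodes <node_name> <label_key>=<label_value>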
Verification
  • To verify that the pod is deployed on the intended node, run the following command:

    $ oc get pods -o wide
    Example output
    NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
    custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>