Generating a cluster service version (CSV)

A cluster service version (CSV), defined by a ClusterServiceVersion object, is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, such as the RBAC rules it requires and which custom resources (CRs) it manages or depends on.

The Operator SDK includes the generate csv subcommand to generate a CSV for the current Operator project customized using information contained in manually-defined YAML manifests and Operator source files.

A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec is likely to change over time as new Kubernetes and OLM features are implemented, the Operator SDK can easily extend its update system to handle new CSV features going forward.

The CSV version is the same as the Operator version, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version flag to have their Operator state encapsulated in a CSV with the supplied semantic version:

$ operator-sdk generate csv --csv-version <version>

This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name.
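For example, for a hypothetical project named app-operator at version 0.1.1, the generated CSV carries the version both in its name and in its spec:

metadata:
  name: app-operator.v0.1.1
spec:
  version: 0.1.1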

How CSV generation works

The deploy/ directory of an Operator project is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/ to write a cluster service version (CSV).

The following command:

$ operator-sdk generate csv --csv-version <version>

writes a CSV YAML file to the deploy/olm-catalog/ directory by default.

Exactly three types of manifests are required to generate a CSV:

  • operator.yaml

  • *_{crd,cr}.yaml

  • RBAC role files, for example role.yaml

Operator authors may have different versioning requirements for these files and can configure, in the deploy/olm-catalog/csv-config.yaml file, which specific files are included.

Workflow

Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the generate csv subcommand either:

  • Creates a new CSV, using the default location and naming convention, from available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.

    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.

    3. After the search completes, every cache field populated is written back to a CSV YAML file.

or:

  • Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are unmarshaled into a CSV cache.

    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.

    3. After the search completes, every cache field populated is written back to a CSV YAML file.

Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.

CSV composition configuration

Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml file:

  • operator-path (string): The Operator resource manifest file path. Default: deploy/operator.yaml.

  • crd-cr-path-list (string(, string)*): A list of CRD and CR manifest file paths. Default: [deploy/crds/*_{crd,cr}.yaml].

  • rbac-path-list (string(, string)*): A list of RBAC role manifest file paths. Default: [deploy/role.yaml].
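For example, a csv-config.yaml that overrides the defaults might look like the following sketch; the file paths are hypothetical:

operator-path: deploy/operator.yaml
crd-cr-path-list:
  - deploy/crds/app_v1alpha1_app_crd.yaml
  - deploy/crds/app_v1alpha1_app_cr.yaml
rbac-path-list:
  - deploy/role.yaml
  - deploy/metrics_role.yaml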

Manually-defined CSV fields

Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written, English metadata about the Operator and various custom resource definitions (CRDs).

Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK warns during CSV generation when it detects missing data in any of the required fields.

Table 1. Required

metadata.name

A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1.

metadata.capabilities

The capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot.

spec.displayName

A public name to identify the Operator.

spec.description

A short description of the functionality of the Operator.

spec.keywords

Keywords describing the Operator.

spec.maintainers

Human or organizational entities maintaining the Operator, with a name and email.

spec.provider

The provider of the Operator (usually an organization), with a name.

spec.labels

Key-value pairs to be used by Operator internals.

spec.version

Semantic version of the Operator, for example 0.1.1.

spec.customresourcedefinitions

Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. However, several fields not in the CRD manifest spec require user input:

  • description: description of the CRD.

  • resources: any Kubernetes resources leveraged by the CRD, for example Pod and StatefulSet objects.

  • specDescriptors: UI hints for inputs and outputs of the Operator.

Table 2. Optional

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated.

Further details on what data each field above should hold are found in the CSV spec.

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.
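Putting the required fields together, a trimmed sketch of the manually populated portion of a CSV for a hypothetical app-operator project might look as follows; all values are illustrative:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
spec:
  displayName: App Operator
  description: Manages the lifecycle of App workloads.
  keywords:
    - app
  maintainers:
    - name: Example Maintainer
      email: maintainers@example.com
  provider:
    name: Example, Inc.
  version: 0.1.1
  maturity: alpha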


Generating a CSV

Prerequisites
  • An Operator project generated using the Operator SDK

Procedure
  1. In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.

  2. Generate the CSV:

    $ operator-sdk generate csv --csv-version <version>
  3. In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually-defined fields are set appropriately.

Enabling your Operator for restricted network environments

As an Operator author, your CSV must meet the following additional requirements for your Operator to run properly in a restricted network environment:

  • List any related images, or other container images that your Operator might require to perform its functions.

  • Reference all specified images by a digest (SHA) and not by a tag.

You must use SHA references to related images in two places in the Operator’s CSV:

  • in spec.relatedImages:

    ...
    spec:
      relatedImages: (1)
        - name: etcd-operator (2)
          image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
        - name: etcd-image
          image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
    ...
    1 Create a relatedImages section and set the list of related images.
    2 Specify a unique identifier for the image.
    3 Specify each image by a digest (SHA), not by an image tag.
  • in the env section of the deployments of the Operator, when declaring environment variables that inject the image that the Operator should use:

    spec:
      install:
        spec:
          deployments:
          - name: etcd-operator-v3.1.1
            spec:
              replicas: 1
              selector:
                matchLabels:
                  name: etcd-operator
              strategy:
                type: Recreate
              template:
                metadata:
                  labels:
                    name: etcd-operator
                spec:
                  containers:
                  - args:
                    - /opt/etcd/bin/etcd_operator_run.sh
                    env:
                    - name: WATCH_NAMESPACE
                      valueFrom:
                        fieldRef:
                          fieldPath: metadata.annotations['olm.targetNamespaces']
                    - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE (1)
                      value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 (2)
                    - name: ETCD_LOG_LEVEL
                      value: INFO
                    image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
                    imagePullPolicy: IfNotPresent
                    livenessProbe:
                      httpGet:
                        path: /healthy
                        port: 8080
                      initialDelaySeconds: 10
                      periodSeconds: 30
                    name: etcd-operator
                    readinessProbe:
                      httpGet:
                        path: /ready
                        port: 8080
                      initialDelaySeconds: 10
                      periodSeconds: 30
                    resources: {}
                  serviceAccountName: etcd-operator
        strategy: deployment
    1 Inject the images referenced by the Operator by using environment variables.
    2 Specify each image by a digest (SHA), not by an image tag.
    3 Also reference the Operator container image by a digest (SHA), not by an image tag.

    When configuring probes, the timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10. See the probe sketch after this list.

  • Add the Disconnected annotation, which indicates that the Operator works in a disconnected environment:

    metadata:
      annotations:
        operators.openshift.io/infrastructure-features: '["Disconnected"]'

    Operators can be filtered in OperatorHub by this infrastructure feature.
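As a sketch of the probe constraint noted above, with illustrative values that keep timeoutSeconds below periodSeconds:

livenessProbe:
  httpGet:
    path: /healthy
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 5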

Enabling your Operator for multiple architectures and operating systems

Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.

If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:

labels:
    operatorframework.io/arch.<arch>: supported (1)
    operatorframework.io/os.<os>: supported (2)
1 Set <arch> to a supported string.
2 Set <os> to a supported string.

Only the labels on the channel head of the default channel are considered for filtering package manifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.
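For example, because these labels appear on package manifests, a standard Kubernetes label selector can filter them; the architecture value here is illustrative:

$ oc get packagemanifests -l operatorframework.io/arch.s390x=supported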

If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:

labels:
    operatorframework.io/os.linux: supported

If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:

labels:
    operatorframework.io/arch.amd64: supported

If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.

Prerequisites
  • An Operator project with a CSV.

  • To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.

  • For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.

Procedure
  • Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

    labels:
      operatorframework.io/arch.s390x: supported
      operatorframework.io/os.zos: supported
      operatorframework.io/os.linux: supported (1)
      operatorframework.io/arch.amd64: supported (1)
    1 After you add a new architecture or operating system, you must also include the default os.linux and arch.amd64 variants explicitly.

Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:

Table 3. Architectures supported on OpenShift Container Platform

  • AMD64: amd64

  • 64-bit PowerPC little-endian: ppc64le

  • IBM Z: s390x

Table 4. Operating systems supported on OpenShift Container Platform

  • Linux: linux

  • z/OS: zos

Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.

Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, in order to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.

As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When the Operator is added to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.

Procedure
  • In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace: <namespace> (1)
    1 Set your suggested namespace.

Understanding your custom resource definitions (CRDs)

There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.

Owned CRDs

The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.

Table 5. Owned CRD fields

Name (Required)

The full name of your CRD.

Version (Required)

The version of that object API.

Kind (Required)

The machine-readable name of your CRD.

DisplayName (Required)

A human-readable version of your CRD name, for example MongoDB Standalone.

Description (Required)

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Group (Optional)

The API group that this CRD belongs to, for example database.example.com.

Resources (Optional)

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to list only the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state and are not meant to be modified by a user.

SpecDescriptors, StatusDescriptors, and ActionDescriptors (Optional)

These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.

  • StatusDescriptors: A reference to fields in the status block of an object.

  • ActionDescriptors: A reference to actions that can be performed on an object.

All descriptors accept the following fields:

  • DisplayName: A human-readable name for the Spec, Status, or Action.

  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.

  • Path: A dot-delimited path of the field on the object that this descriptor describes.

  • X-Descriptors: Used to determine which "capabilities" this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OpenShift Container Platform.

Also see the openshift/console project for more information on descriptors in general.

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods, and config maps:

Example owned CRD
      - displayName: MongoDB Standalone
        group: mongodb.com
        kind: MongoDbStandalone
        name: mongodbstandalones.mongodb.com
        resources:
          - kind: Service
            name: ''
            version: v1
          - kind: StatefulSet
            name: ''
            version: v1beta2
          - kind: Pod
            name: ''
            version: v1
          - kind: ConfigMap
            name: ''
            version: v1
        specDescriptors:
          - description: Credentials for Ops Manager or Cloud Manager.
            displayName: Credentials
            path: credentials
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
          - description: Project this deployment belongs to.
            displayName: Project
            path: project
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
          - description: MongoDB version to be installed.
            displayName: Version
            path: version
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:label'
        statusDescriptors:
          - description: The status of each of the pods for the MongoDB cluster.
            displayName: Pod Status
            path: pods
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        version: v1
        description: >-
          MongoDB Deployment consisting of only one host. No replication of
          data.

Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace, and a service account is created for each Operator so that it can create, watch, and modify the required Kubernetes resources.

Table 6. Required CRD fields

Name (Required)

The full name of the CRD you require.

Version (Required)

The version of that object API.

Kind (Required)

The Kubernetes object kind.

DisplayName (Required)

A human-readable version of the CRD.

Description (Required)

A summary of how the component fits in your larger architecture.

Example required CRD
    required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.

CRD templates

Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.

The annotation consists of a list of example objects, each giving the kind (for example, the CRD kind) along with the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup, and EtcdRestore:

metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

Hiding internal objects

It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.

As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.

Procedure
  1. Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.

  2. Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

    Internal object annotation
    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' (1)
    ...
    1 Set any internal CRDs as an array of strings.

Understanding your API services

As with CRDs, there are two types of API services that your Operator may use: owned and required.

Owned API services

When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.

An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.

Table 7. Owned API service fields

Group (Required)

Group that the API service provides, for example database.example.com.

Version (Required)

Version of the API service, for example v1alpha1.

Kind (Required)

A kind that the API service is expected to provide.

Name (Required)

The plural name for the API service provided.

DeploymentName (Required)

Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the InstallStrategy of your CSV for a Deployment spec with a matching name and, if none is found, does not transition the CSV to the "Install Ready" phase.

DisplayName (Required)

A human-readable version of your API service name, for example MongoDB Standalone.

Description (Required)

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Resources (Optional)

Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to list only the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state and are not meant to be modified by a user.

SpecDescriptors, StatusDescriptors, and ActionDescriptors (Optional)

Essentially the same as for owned CRDs.
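A sketch of how these fields come together under spec.apiservicedefinitions.owned in the CSV; the values are hypothetical:

apiservicedefinitions:
  owned:
    - group: database.example.com
      version: v1alpha1
      kind: MongoDbStandalone
      name: mongodbstandalones
      deploymentName: mongodb-api-server
      displayName: MongoDB Standalone
      description: Standalone MongoDB instances provided by an extension API server.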

API service resource creation

Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:

  • Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.

  • A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.
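For reference, the core Kubernetes APIService resource that OLM creates or replaces has roughly the following shape; this is a generic sketch of the apiregistration.k8s.io type with hypothetical names, not literal OLM output:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.database.example.com
spec:
  group: database.example.com
  version: v1alpha1
  service:
    name: mongodb-api-server
    namespace: operators
  caBundle: <base64-encoded CA bundle>
  groupPriorityMinimum: 2000
  versionPriority: 15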

API service serving certificates

OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the host name of the generated service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a secret of type kubernetes.io/tls in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.

If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
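For example, a container in the matching deployment could declare the expected mount explicitly to control the path; this is a sketch with a hypothetical container name, and the apiservice-cert volume itself is appended by OLM:

containers:
  - name: mongodb-api-server
    volumeMounts:
      - name: apiservice-cert
        mountPath: /apiserver.local.config/certificates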

Required API services

OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.

Table 8. Required API service fields

Group (Required)

Group that the API service provides, for example database.example.com.

Version (Required)

Version of the API service, for example v1alpha1.

Kind (Required)

A kind that the API service is expected to provide.

DisplayName (Required)

A human-readable version of your API service name, for example MongoDB Standalone.

Description (Required)

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.
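A sketch of a corresponding entry under spec.apiservicedefinitions.required; the group, kind, and description are hypothetical:

apiservicedefinitions:
  required:
    - group: storage.example.com
      version: v1alpha1
      kind: BackupTarget
      displayName: Backup Target
      description: A backup location provided by an API service that this Operator depends on.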