Tutorial - Developing Operators | Operators | OpenShift Container Platform 4.7

Operator developers can take advantage of Helm support in the Operator SDK to build an example Helm-based Operator for Nginx and manage its lifecycle. This tutorial walks through the following process:

  • Create an Nginx deployment

  • Ensure that the deployment size is the same as specified by the Nginx custom resource (CR) spec

  • Update the Nginx CR status using the status writer with the names of the nginx pods

This process is accomplished using two centerpieces of the Operator Framework:

Operator SDK

The operator-sdk CLI tool and controller-runtime library API

Operator Lifecycle Manager (OLM)

Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster

This tutorial goes into greater detail than Getting started with Operator SDK for Helm-based Operators.

Prerequisites

  • Operator SDK CLI installed

  • OpenShift CLI (oc) v4.7+ installed

  • Logged into an OpenShift Container Platform 4.7 cluster with oc with an account that has cluster-admin permissions

  • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret.

Creating a project

Use the Operator SDK CLI to create a project called nginx-operator.

Procedure
  1. Create a directory for the project:

    $ mkdir -p $HOME/projects/nginx-operator
  2. Change to the directory:

    $ cd $HOME/projects/nginx-operator
  3. Run the operator-sdk init command with the helm plug-in to initialize the project:

    $ operator-sdk init \
        --plugins=helm \
        --domain=example.com \
        --group=demo \
        --version=v1 \
        --kind=Nginx

    By default, the helm plug-in initializes a project using a boilerplate Helm chart. You can use additional flags, such as the --helm-chart flag, to initialize a project using an existing Helm chart.

    The init command creates the nginx-operator project specifically for watching a resource with API version example.com/v1 and kind Nginx.

  4. For Helm-based projects, the init command generates the RBAC rules in the config/rbac/role.yaml file based on the resources that would be deployed by the default manifest for the chart. Verify that the rules generated in this file meet the permission requirements of the Operator.
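
    For example, for a chart whose default manifest deploys a deployment, a service, and an ingress, the generated file contains rule entries similar to the following sketch. This is an illustrative excerpt only; the actual rules depend on the templates in your chart:

    # Illustrative excerpt of config/rbac/role.yaml; generated rules vary by chart
    - apiGroups:
      - apps
      resources:
      - deployments
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch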

Existing Helm charts

Instead of creating your project with a boilerplate Helm chart, you can alternatively use an existing chart, either from your local file system or a remote chart repository, by using the following flags:

  • --helm-chart

  • --helm-chart-repo

  • --helm-chart-version

If the --helm-chart flag is specified, the --group, --version, and --kind flags become optional. If left unset, the following default values are used:

Flag        Value
--domain    my.domain
--group     charts
--version   v1
--kind      Deduced from the specified chart

If the --helm-chart flag specifies a local chart archive, for example example-chart-1.2.0.tgz, or directory, the chart is validated and unpacked or copied into the project. Otherwise, the Operator SDK attempts to fetch the chart from a remote repository.

If a custom repository URL is not specified by the --helm-chart-repo flag, the following chart reference formats are supported:

Format                     Description
<repo_name>/<chart_name>   Fetch the Helm chart named <chart_name> from the Helm chart repository named <repo_name>, as specified in the $HELM_HOME/repositories/repositories.yaml file. Use the helm repo add command to configure this file.
<url>                      Fetch the Helm chart archive at the specified URL.

If a custom repository URL is specified by --helm-chart-repo, the following chart reference format is supported:

Format         Description
<chart_name>   Fetch the Helm chart named <chart_name> in the Helm chart repository specified by the --helm-chart-repo URL value.

If the --helm-chart-version flag is unset, the Operator SDK fetches the latest available version of the Helm chart. Otherwise, it fetches the specified version. The optional --helm-chart-version flag is not used when the chart specified with the --helm-chart flag refers to a specific version, for example when it is a local path or a URL.
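
For example, the following hypothetical command initializes a project from a chart named nginx fetched from a custom repository. The chart name and repository URL here are placeholders, not values used elsewhere in this tutorial:

$ operator-sdk init \
    --plugins=helm \
    --domain=example.com \
    --helm-chart=nginx \
    --helm-chart-repo=<chart_repo_url>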

For more details and examples, run:

$ operator-sdk init --plugins helm --help

PROJECT file

Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands that are run from the project root, as well as their help output, read this file and are aware that the project type is Helm. For example:

domain: example.com
layout: helm.sdk.operatorframework.io/v1
projectName: nginx-operator
resources:
- group: demo
  kind: Nginx
  version: v1
version: 3-alpha

Understanding the Operator logic

For this example, the nginx-operator project executes the following reconciliation logic for each Nginx custom resource (CR):

  • Create an Nginx deployment if it does not exist.

  • Create an Nginx service if it does not exist.

  • Create an Nginx ingress if it is enabled and does not exist.

  • Ensure that the deployment, service, and optional ingress match the desired configuration as specified by the Nginx CR, for example the replica count, image, and service type.

By default, the nginx-operator project watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

# Use the 'create api' subcommand to add watches to this file.
- group: demo
  version: v1
  kind: Nginx
  chart: helm-charts/nginx
# +kubebuilder:scaffold:watch
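
As the comment in the file indicates, you can scaffold additional watch entries later with the create api subcommand. For example, the following hypothetical command adds a watch and a boilerplate chart for another kind; the kind name is a placeholder:

$ operator-sdk create api \
    --group=demo \
    --version=v1 \
    --kind=<new_kind>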

Sample Helm chart

When a Helm Operator project is created, the Operator SDK creates a sample Helm chart that contains a set of templates for a simple Nginx release.

For this example, templates are available for deployment, service, and ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.
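
The scaffolded chart is located in the helm-charts/nginx directory of the project. Its layout typically resembles the following sketch; the exact set of files can vary between Operator SDK versions:

helm-charts/nginx
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── NOTES.txt
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── ingress.yaml
    └── service.yaml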

If you are not already familiar with Helm charts, review the Helm developer documentation.

Modifying the custom resource spec

Helm uses a concept called values to provide customizations to a Helm chart's defaults, which are defined in the chart's values.yaml file.

You can override these defaults by setting the desired values in the custom resource (CR) spec. This example uses the number of replicas.
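
For reference, the defaults that this example overrides look similar to the following excerpt of helm-charts/nginx/values.yaml; the exact contents depend on the scaffolded chart:

replicaCount: 1

service:
  type: ClusterIP
  port: 80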

Procedure
  1. The helm-charts/nginx/values.yaml file has a value called replicaCount set to 1 by default. To have two Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

    Edit the config/samples/demo_v1_nginx.yaml file to set replicaCount: 2:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 2
  2. Similarly, the default service port is set to 80. To use 8080 instead, edit the config/samples/demo_v1_nginx.yaml file to add the service port override by setting spec.service.port to 8080:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    spec:
      replicaCount: 2
      service:
        port: 8080

The Helm Operator applies the entire spec as if it were the contents of a values file, just as the helm install -f ./overrides.yaml command does.
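
To make the mapping concrete, the spec shown above is equivalent to installing the chart directly with Helm using a values file such as the following; this is shown only for comparison and is not a step in the tutorial:

# Hypothetical overrides.yaml
replicaCount: 2
service:
  port: 8080

$ helm install nginx-sample helm-charts/nginx -f ./overrides.yaml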

Running the Operator

There are three ways you can use the Operator SDK CLI to build and run your Operator:

  • Run locally outside the cluster as a Go program.

  • Run as a deployment on the cluster.

  • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.

Running locally outside the cluster

You can run your Operator project as a Go program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

Procedure
  • Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator locally:

    $ make install run
    Example output
    ...
    {"level":"info","ts":1612652419.9289865,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
    {"level":"info","ts":1612652419.9296563,"logger":"helm.controller","msg":"Watching resource","apiVersion":"demo.example.com/v1","kind":"Nginx","namespace":"","reconcilePeriod":"1m0s"}
    {"level":"info","ts":1612652419.929983,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1612652419.930015,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting EventSource","source":"kind source: demo.example.com/v1, Kind=Nginx"}
    {"level":"info","ts":1612652420.2307851,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting Controller"}
    {"level":"info","ts":1612652420.2309358,"logger":"controller-runtime.manager.controller.nginx-controller","msg":"Starting workers","worker count":8}

Preparing your Operator to use supported images

Before running your Helm-based Operator on OpenShift Container Platform, update your project to use supported images.

Procedure
  1. Update the project root-level Dockerfile to use supported images. Change the default builder image reference from:

    FROM quay.io/operator-framework/helm-operator:v1.3.0

    to:

    FROM registry.redhat.io/openshift4/ose-helm-operator:v4.7

    Use the builder image version that matches your Operator SDK version. Failure to do so can result in problems due to project layout, or scaffolding, differences, particularly when mixing newer upstream versions of the Operator SDK with downstream OpenShift Container Platform builder images.

  2. In the config/default/manager_auth_proxy_patch.yaml file, change the image value from:

    gcr.io/kubebuilder/kube-rbac-proxy:<tag>

    to use the supported image:

    registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7
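
    After the change, the container image reference in the patch file looks similar to the following excerpt; the surrounding fields are abbreviated and can differ in your generated file:

    spec:
      template:
        spec:
          containers:
          - name: kube-rbac-proxy
            image: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7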

Running as a deployment on the cluster

You can run your Operator project as a deployment on your cluster.

Procedure
  1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>

      The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.

  2. Run the following command to deploy the Operator:

    $ make deploy IMG=<registry>/<user>/<image_name>:<tag>

    By default, this command creates a namespace, named after your Operator project in the form <project_name>-system, that is used for the deployment. This command also installs the RBAC manifests from config/rbac.

  3. Verify that the Operator is running:

    $ oc get deployment -n <project_name>-system
    Example output
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    <project_name>-controller-manager       1/1     1            1           8m
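
    If the deployment does not become ready, you can inspect the Operator logs. For example, assuming the manager container name from the default scaffold:

    $ oc logs deployment/<project_name>-controller-manager \
        -c manager \
        -n <project_name>-system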

Bundling an Operator and deploying with Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) helps you to install, update, and generally manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OpenShift Container Platform and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

The Operator Bundle Format is the default packaging method for Operator SDK and OLM. You can get your Operator ready for OLM by using the Operator SDK to build, push, validate, and run a bundle image with OLM.

Prerequisites
  • Operator SDK CLI installed on a development workstation

  • OpenShift CLI (oc) v4.7+ installed

  • Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OpenShift Container Platform 4.7)

  • Logged into the cluster with oc using an account with cluster-admin permissions

  • Operator project initialized by using the Operator SDK

Procedure
  1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

    1. Build the image:

      $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>

      The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, you must use the --build-arg flag for this purpose. For more information, see Multiple Architectures.

    2. Push the image to a repository:

      $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
  2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

    $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>

    Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

    • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object

    • A bundle metadata directory named bundle/metadata

    • All custom resource definitions (CRDs) in a config/crd directory

    • A Dockerfile bundle.Dockerfile

    These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

  3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles using an index image, which references one or more bundle images.

    1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

      $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
    2. Push the bundle image:

      $ docker push <registry>/<user>/<bundle_image_name>:<tag>
  4. Check the status of OLM on your cluster by using the following Operator SDK command:

    $ operator-sdk olm status \
        --olm-namespace=openshift-operator-lifecycle-manager
  5. Run the Operator on your cluster by using the OLM integration in Operator SDK:

    $ operator-sdk run bundle \
        [-n <namespace>] \(1)
        <registry>/<user>/<bundle_image_name>:<tag>
    1 By default, the command installs the Operator in the currently active project in your ~/.kube/config file. You can add the -n flag to set a different namespace scope for the installation.

    This command performs the following actions:

    • Creates an index image with your bundle image injected.

    • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.

    • Deploys your Operator to your cluster by creating an Operator group, subscription, install plan, and all other required objects, including RBAC.
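
    For example, you can verify that these objects were created by listing the standard OLM resource types in the installation namespace:

    $ oc get catalogsource,operatorgroup,subscription,csv -n <namespace>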

Creating a custom resource

After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.

Prerequisites
  • Example Nginx Operator, which provides the Nginx CR, installed on a cluster

Procedure
  1. Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:

    $ oc project nginx-operator-system
  2. Edit the sample Nginx CR manifest at config/samples/demo_v1_nginx.yaml to contain the following specification:

    apiVersion: demo.example.com/v1
    kind: Nginx
    metadata:
      name: nginx-sample
    ...
    spec:
    ...
      replicaCount: 3
  3. The Nginx service account requires privileged access to run in OpenShift Container Platform. Add the following security context constraint (SCC) to the service account for the nginx-sample pod:

    $ oc adm policy add-scc-to-user \
        anyuid system:serviceaccount:nginx-operator-system:nginx-sample
  4. Create the CR:

    $ oc apply -f config/samples/demo_v1_nginx.yaml
  5. Ensure that the Nginx Operator creates the deployment for the sample CR with the correct size:

    $ oc get deployments
    Example output
    NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx-operator-controller-manager       1/1     1            1           8m
    nginx-sample                            3/3     3            3           1m
  6. Check the pods and CR status to confirm the status is updated with the Nginx pod names.

    1. Check the pods:

      $ oc get pods
      Example output
      NAME                                  READY     STATUS    RESTARTS   AGE
      nginx-sample-6fd7c98d8-7dqdr          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-g5k7v          1/1       Running   0          1m
      nginx-sample-6fd7c98d8-m7vn7          1/1       Running   0          1m
    2. Check the CR status:

      $ oc get nginx/nginx-sample -o yaml
      Example output
      apiVersion: demo.example.com/v1
      kind: Nginx
      metadata:
      ...
        name: nginx-sample
      ...
      spec:
        replicaCount: 3
      status:
        nodes:
        - nginx-sample-6fd7c98d8-7dqdr
        - nginx-sample-6fd7c98d8-g5k7v
        - nginx-sample-6fd7c98d8-m7vn7
  7. Update the deployment size.

    1. Patch the Nginx CR to change the spec.replicaCount field from 3 to 5:

      $ oc patch nginx nginx-sample \
          -p '{"spec":{"replicaCount": 5}}' \
          --type=merge
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployments
      Example output
      NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-operator-controller-manager       1/1     1            1           10m
      nginx-sample                            5/5     5            5           3m
  8. Clean up the resources that have been created as part of this tutorial.

    • If you used the make deploy command to test the Operator, run the following command:

      $ make undeploy
    • If you used the operator-sdk run bundle command to test the Operator, run the following command:

      $ operator-sdk cleanup <project_name>

Additional resources