Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.
Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators.
The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. The SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code needed for many common management capabilities, such as metering or monitoring.
This procedure walks through an example of creating a simple Memcached Operator using tools and libraries provided by the SDK.
Operator SDK v0.19.4 CLI installed on the development workstation
Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OKD 4.6
Access to the cluster using an account with cluster-admin permissions
OpenShift CLI (oc) v4.6+ installed
Create an Operator project:
Create a directory for the project:
$ mkdir -p $HOME/projects/memcached-operator
Change to the directory:
$ cd $HOME/projects/memcached-operator
Activate support for Go modules:
$ export GO111MODULE=on
Run the operator-sdk init command to initialize the project:
$ operator-sdk init \
--domain=example.com \
--repo=github.com/example-inc/memcached-operator
Update your Operator to use supported images:
In the project root-level Dockerfile, change the default runner image reference from:
FROM gcr.io/distroless/static:nonroot
to:
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
Depending on the Go project version, your Dockerfile might contain a USER 65532:65532 or USER nonroot:nonroot directive. In either case, remove the line, as it is not required by the supported runner image.
In the config/default/manager_auth_proxy_patch.yaml file, change the image value from:
gcr.io/kubebuilder/kube-rbac-proxy:<tag>
to use the supported image:
registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.6
Update the test target in your Makefile to install dependencies required during later builds by replacing the following lines:
Old test target
test: generate fmt vet manifests
	go test ./... -coverprofile cover.out
With the following lines:
New test target
ENVTEST_ASSETS_DIR=$(shell pwd)/testbin
test: manifests generate fmt vet ## Run tests.
	mkdir -p ${ENVTEST_ASSETS_DIR}
	test -f ${ENVTEST_ASSETS_DIR}/setup-envtest.sh || curl -sSLo ${ENVTEST_ASSETS_DIR}/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.7.2/hack/setup-envtest.sh
	source ${ENVTEST_ASSETS_DIR}/setup-envtest.sh; fetch_envtest_tools $(ENVTEST_ASSETS_DIR); setup_envtest_env $(ENVTEST_ASSETS_DIR); go test ./... -coverprofile cover.out
Create a custom resource definition (CRD) API and controller:
Run the following command to create an API with group cache, version v1, and kind Memcached:
$ operator-sdk create api \
--group=cache \
--version=v1 \
--kind=Memcached
When prompted, enter y to create both the resource and the controller:
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit...
api/v1/memcached_types.go
controllers/memcached_controller.go
...
This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go.
Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status:
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
// +kubebuilder:validation:Minimum=0
// Size is the size of the memcached deployment
Size int32 `json:"size"`
}
// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
// Nodes are the names of the memcached pods
Nodes []string `json:"nodes"`
}
Add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest:
// Memcached is the Schema for the memcacheds API
// +kubebuilder:subresource:status (1)
type Memcached struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec MemcachedSpec `json:"spec,omitempty"`
Status MemcachedStatus `json:"status,omitempty"`
}
(1) Add this line.
This enables the controller to update the CR status without changing the rest of the CR object.
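As a sketch of the distinction, assuming the memcached variable holds a fetched CR instance as in the controller later in this procedure, spec and status are written through separate client calls once the subresource is enabled:
// Sketch: with the status subresource enabled, spec/metadata and status
// are updated through separate client calls.
err := r.Update(ctx, memcached)         // writes spec and metadata; status changes are ignored
err = r.Status().Update(ctx, memcached) // writes only the status subresource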
Update the generated code for the resource type:
$ make generate
After you modify a *_types.go file, always run the make generate command to update the generated code for that resource type.
This Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures that your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
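For orientation, the generated code looks roughly like the following condensed sketch; the actual file is produced by controller-gen and must not be edited by hand:
// Condensed sketch of api/v1/zz_generated.deepcopy.go (generated code)
func (in *Memcached) DeepCopyInto(out *Memcached) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	out.Spec = in.Spec
	in.Status.DeepCopyInto(&out.Status)
}

// DeepCopyObject implements the runtime.Object interface.
func (in *Memcached) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}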
Generate and update CRD manifests:
$ make manifests
This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
Optional: Add custom validation to your CRD.
OpenAPI v3.0 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated.
As an Operator author, you can use annotation-like, single-line comments called Kubebuilder markers to configure custom validations for your API. These markers must always have a +kubebuilder:validation prefix. For example, you can add an enum-type specification with the following marker:
// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string
Usage of markers in API code is discussed in the Kubebuilder Generating CRDs and Markers for Config/Code Generation documentation. A full list of OpenAPI v3 validation markers is also available in the Kubebuilder CRD Validation documentation.
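For example, the Size field in this project's MemcachedSpec could be constrained further with additional markers; a sketch, where the Maximum value is purely illustrative and not part of this example project:
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=10
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}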
If you add any custom validations, run the following command to update the OpenAPI validation section for the CRD:
$ make manifests
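For instance, the Minimum=0 marker on the Size field surfaces in the generated manifest roughly as follows; this is a trimmed sketch, and the exact layout depends on your controller-gen version:
# Trimmed sketch of config/crd/bases/cache.example.com_memcacheds.yaml
schema:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          size:
            format: int32
            minimum: 0
            type: integer
        required:
        - size
        type: object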
After creating a new API and controller, you can implement the controller logic. For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation:
memcached_controller.go
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"context"
"reflect"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
cachev1 "github.com/example-inc/memcached-operator/api/v1"
)
// MemcachedReconciler reconciles a Memcached object
type MemcachedReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
}
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("memcached", req.NamespacedName)
// Fetch the Memcached instance
memcached := &cachev1.Memcached{}
err := r.Get(ctx, req.NamespacedName, memcached)
if err != nil {
if errors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
// Return and don't requeue
log.Info("Memcached resource not found. Ignoring since object must be deleted")
return ctrl.Result{}, nil
}
// Error reading the object - requeue the request.
log.Error(err, "Failed to get Memcached")
return ctrl.Result{}, err
}
// Check if the deployment already exists, if not create a new one
found := &appsv1.Deployment{}
err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
if err != nil && errors.IsNotFound(err) {
// Define a new deployment
dep := r.deploymentForMemcached(memcached)
log.Info("Creating a new deployment", "deployment.Namespace", dep.Namespace, "deployment.Name", dep.Name)
err = r.Create(ctx, dep)
if err != nil {
log.Error(err, "Failed to create new deployment", "deployment.Namespace", dep.Namespace, "deployment.Name", dep.Name)
return ctrl.Result{}, err
}
// deployment created successfully - return and requeue
return ctrl.Result{Requeue: true}, nil
} else if err != nil {
log.Error(err, "Failed to get deployment")
return ctrl.Result{}, err
}
// Ensure the deployment size is the same as the spec
size := memcached.Spec.Size
if *found.Spec.Replicas != size {
found.Spec.Replicas = &size
err = r.Update(ctx, found)
if err != nil {
log.Error(err, "Failed to update deployment", "deployment.Namespace", found.Namespace, "deployment.Name", found.Name)
return ctrl.Result{}, err
}
// Spec updated - return and requeue
return ctrl.Result{Requeue: true}, nil
}
// Update the Memcached status with the pod names
// List the pods for this memcached's deployment
podList := &corev1.PodList{}
listOpts := []client.ListOption{
client.InNamespace(memcached.Namespace),
client.MatchingLabels(labelsForMemcached(memcached.Name)),
}
if err = r.List(ctx, podList, listOpts...); err != nil {
log.Error(err, "Failed to list pods", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
return ctrl.Result{}, err
}
podNames := getPodNames(podList.Items)
// Update status.Nodes if needed
if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
memcached.Status.Nodes = podNames
err := r.Status().Update(ctx, memcached)
if err != nil {
log.Error(err, "Failed to update Memcached status")
return ctrl.Result{}, err
}
}
return ctrl.Result{}, nil
}
// deploymentForMemcached returns a memcached deployment object
func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment {
ls := labelsForMemcached(m.Name)
replicas := m.Spec.Size
dep := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: ls,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: ls,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Image: "memcached:1.4.36-alpine",
Name: "memcached",
Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
Ports: []corev1.ContainerPort{{
ContainerPort: 11211,
Name: "memcached",
}},
}},
},
},
},
}
// Set Memcached instance as the owner and controller
ctrl.SetControllerReference(m, dep, r.Scheme)
return dep
}
// labelsForMemcached returns the labels for selecting the resources
// belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
return map[string]string{"app": "memcached", "memcached_cr": name}
}
// getPodNames returns the pod names of the array of pods passed in
func getPodNames(pods []corev1.Pod) []string {
var podNames []string
for _, pod := range pods {
podNames = append(podNames, pod.Name)
}
return podNames
}
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&cachev1.Memcached{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
The example controller runs the following reconciliation logic for each Memcached CR:
Create a Memcached deployment if it does not exist.
Ensure that the deployment size is the same as specified by the Memcached CR spec.
Update the Memcached CR status with the names of the memcached pods.
The next two sub-steps inspect how the controller watches resources and how the reconcile loop is triggered. You can skip these steps to go directly to building and running the Operator.
Inspect the controller implementation in the controllers/memcached_controller.go file to see how the controller watches resources.
The SetupWithManager() function specifies how the controller is built to watch a CR and other resources that are owned and managed by that controller:
SetupWithManager() function
import (
...
appsv1 "k8s.io/api/apps/v1"
...
)
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&cachev1.Memcached{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
NewControllerManagedBy() provides a controller builder that allows various controller configurations.
For(&cachev1.Memcached{}) specifies the Memcached type as the primary resource to watch. For each Add, Update, or Delete event for a Memcached type, the reconcile loop is sent a reconcile Request argument, which consists of a namespace and name key, for that Memcached object.
Owns(&appsv1.Deployment{}) specifies the Deployment type as the secondary resource to watch. For each Deployment type Add, Update, or Delete event, the event handler maps the event to a reconcile request for the owner of the deployment. In this case, the owner is the Memcached object for which the deployment was created.
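The builder accepts additional configuration as well. As a sketch, not part of this example project, you could raise the number of concurrent reconciles with WithOptions:
import (
	"sigs.k8s.io/controller-runtime/pkg/controller"
)

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1.Memcached{}).
		Owns(&appsv1.Deployment{}).
		// Illustrative option: reconcile up to two Memcached objects concurrently.
		WithOptions(controller.Options{MaxConcurrentReconciles: 2}).
		Complete(r)
}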
Every controller has a reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a namespace and name key used to find the primary resource object, Memcached, from the cache:
import (
ctrl "sigs.k8s.io/controller-runtime"
cachev1 "github.com/example-inc/memcached-operator/api/v1"
...
)
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
// Lookup the Memcached instance for this reconcile request
memcached := &cachev1.Memcached{}
err := r.Get(ctx, req.NamespacedName, memcached)
...
}
Based on the return value of the Reconcile() function, the reconcile Request might be requeued, and the loop might be triggered again:
// Reconcile successful - don't requeue
return reconcile.Result{}, nil
// Reconcile failed due to error - requeue
return reconcile.Result{}, err
// Requeue for any reason other than error
return reconcile.Result{Requeue: true}, nil
You can set the Result.RequeueAfter field to requeue the request after a grace period:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return ctrl.Result{RequeueAfter: time.Second*5}, nil
You can return Result with RequeueAfter set to periodically reconcile a CR.
For more on reconcilers, clients, and interacting with resource events, see the Controller Runtime Client API documentation.
For more information about OpenAPI v3.0 validation schemas in CRDs, refer to the Kubernetes documentation.
There are two ways you can use the Operator SDK CLI to build and run your Operator:
Run locally outside the cluster as a Go program.
Run as a deployment on the cluster.
You have a Go-based Operator project as described in Creating a Go-based Operator using the Operator SDK.
You can run your Operator project as a Go program outside of the cluster. This method is useful for development purposes to speed up deployment and testing.
Run the following command to install the custom resource definitions (CRDs) in the cluster configured in your ~/.kube/config file and run the Operator as a Go program locally:
$ make install run
...
2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
2021-01-10T21:09:29.017-0700 INFO setup starting manager
2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "source": "kind source: /, Kind="}
2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {"reconciler group": "cache.example.com", "reconciler kind": "Memcached"}
2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {"reconciler group": "cache.example.com", "reconciler kind": "Memcached", "worker count": 1}
After creating your Go-based Operator project, you can build and run your Operator as a deployment inside a cluster.
Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.
Build the image:
$ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures.
Push the image to a repository:
$ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
The name and tag of the image, for example IMG=<registry>/<user>/<image_name>:<tag>, in both commands can also be set in your Makefile. Modify the IMG ?= controller:latest value to set your default image name.
Run the following command to deploy the Operator:
$ make deploy IMG=<registry>/<user>/<image_name>:<tag>
By default, this command creates a namespace with the name of your Operator project in the form <project_name>-system, which is used for the deployment. This command also installs the RBAC manifests from config/rbac.
Verify that the Operator is running:
$ oc get deployment -n <project_name>-system
NAME READY UP-TO-DATE AVAILABLE AGE
<project_name>-controller-manager 1/1 1 1 8m
After your Operator is installed, you can test it by creating a custom resource (CR) that is now provided on the cluster by the Operator.
Example Memcached Operator, which provides the Memcached CR, installed on a cluster
Change to the namespace where your Operator is installed. For example, if you deployed the Operator using the make deploy command:
$ oc project memcached-operator-system
Edit the sample Memcached CR manifest at config/samples/cache_v1_memcached.yaml to contain the following specification:
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
name: memcached-sample
...
spec:
...
size: 3
Create the CR:
$ oc apply -f config/samples/cache_v1_memcached.yaml
Ensure that the Memcached Operator creates the deployment for the sample CR with the correct size:
$ oc get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
memcached-operator-controller-manager 1/1 1 1 8m
memcached-sample 3/3 3 3 1m
Check the pods and CR status to confirm the status is updated with the Memcached pod names.
Check the pods:
$ oc get pods
NAME READY STATUS RESTARTS AGE
memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m
memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m
memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m
Check the CR status:
$ oc get memcached/memcached-sample -o yaml
apiVersion: cache.example.com/v1
kind: Memcached
metadata:
...
name: memcached-sample
...
spec:
size: 3
status:
nodes:
- memcached-sample-6fd7c98d8-7dqdr
- memcached-sample-6fd7c98d8-g5k7v
- memcached-sample-6fd7c98d8-m7vn7
Update the deployment size.
Patch the spec.size field in the Memcached CR to change the size from 3 to 5:
$ oc patch memcached memcached-sample \
-p '{"spec":{"size": 5}}' \
--type=merge
Confirm that the Operator changes the deployment size:
$ oc get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
memcached-operator-controller-manager 1/1 1 1 10m
memcached-sample 5/5 5 5 3m
See Appendices to learn about the project directory structures created by the Operator SDK.
This guide provides an effective demonstration of the value of the Operator Framework for building and managing Operators, but much more has been left out in the interest of brevity. The Operator Framework and its components are open source, so visit each project individually and learn what else you can do:
If you want to discuss your experience, have questions, or want to get involved, join the Operator Framework mailing list.