OpenShift Container Platform 4.8 supports Operator SDK v1.8.0. If you already have the v1.3.0 CLI installed on your workstation, you can upgrade the CLI to v1.8.0 by installing the latest version.
However, to ensure your existing Operator projects maintain compatibility with Operator SDK v1.8.0, upgrade steps are required for the associated breaking changes introduced since v1.3.0. You must perform the upgrade steps manually in any of your Operator projects that were previously created or maintained with v1.3.0.
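For example, after installing the new CLI, you can confirm the upgrade by checking the reported version, which should show v1.8.0:
$ operator-sdk version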
The following upgrade steps must be performed to make an existing Operator project compatible with v1.8.0.
Prerequisites:
Operator SDK v1.8.0 installed
Operator project that was previously created or maintained with Operator SDK v1.3.0
Make the following changes to your PROJECT file:
Update the plugins object in the PROJECT file to use manifests and scorecard objects.
The manifests and scorecard plug-ins, which create Operator Lifecycle Manager (OLM) and scorecard manifests, now have plug-in objects for running create subcommands to create related files.
For Go-based Operator projects, a Go-based plug-in configuration object is already present. While the old configuration is still supported, these new objects will be useful in the future as configuration options are added to their respective plug-ins:
Old plugins object
version: 3-alpha
...
plugins:
go.sdk.operatorframework.io/v2-alpha: {}
New plugins object
version: 3-alpha
...
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
Optional: For Ansible- and Helm-based Operator projects, the plug-in configuration object previously did not exist. While you are not required to add the plug-in configuration objects, these new objects will be useful in the future as configuration options are added to their respective plug-ins:
New plugins object
version: 3-alpha
...
plugins:
manifests.sdk.operatorframework.io/v2: {}
scorecard.sdk.operatorframework.io/v2: {}
The PROJECT config version 3-alpha must be upgraded to 3. The version key in your PROJECT file represents the PROJECT config version:
PROJECT file
version: 3-alpha
resources:
- crdVersion: v1
...
Version 3-alpha has been stabilized as version 3, which contains a set of config fields sufficient to fully describe a project. While this change is not technically breaking, because the spec at that version was alpha, it was used by default in operator-sdk commands, so it is treated as a breaking change with a convenient upgrade path.
Run the alpha config-3alpha-to-3 command to convert most of your PROJECT file from version 3-alpha to 3:
$ operator-sdk alpha config-3alpha-to-3
Your PROJeCT config file has been converted from version 3-alpha to 3. Please make sure all config data is correct.
The command will also output comments with directions where automatic conversion is not possible.
Verify the change:
PROJECT file
version: "3"
resources:
- api:
crdVersion: v1
...
Make the following changes to your config/manager/manager.yaml file:
For Ansible- and Helm-based Operator projects, add liveness and readiness probes.
New projects built with the Operator SDK have the probes configured by default. The /healthz and /readyz endpoints are now available in the provided base image. You can update your existing projects to use the probes by updating the Dockerfile to use the latest base image, then adding the following to the manager container in the config/manager/manager.yaml file. For Ansible-based Operator projects, add:
livenessProbe:
httpGet:
path: /healthz
port: 6789
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 6789
initialDelaySeconds: 5
periodSeconds: 10
For Helm-based Operator projects, add:
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
For Ansible- and Helm-based Operator projects, add security contexts to your manager’s deployment.
In the config/manager/manager.yaml file, add the following security contexts:
config/manager/manager.yaml file
spec:
...
template:
...
spec:
securityContext:
runAsNonRoot: true
containers:
- name: manager
securityContext:
allowPrivilegeEscalation: false
Make the following changes to your Makefile:
For Ansible- and Helm-based Operator projects, update the ansible-operator and helm-operator URLs in the Makefile:
For Ansible-based Operator projects, change:
https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator_$(OS)_$(ARCH)
to:
https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER)
For Helm-based Operator projects, change:
https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator_$(OS)_$(ARCH)
to:
https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER)
For Ansible- and Helm-based Operator projects, update the ansible-operator, helm-operator, and kustomize rules in the Makefile. These rules download a local binary but do not use it if a global binary is present:
Makefile diff for Ansible-based Operator projects
PATH := $(PATH):$(PWD)/bin
SHELL := env PATH=$(PATH) /bin/sh
-OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
-ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
+OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
+ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
+OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
+ARCHOPER = $(shell uname -m )
-# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
-.PHONY: kustomize
-KUSTOMIZE = $(shell pwd)/bin/kustomize
kustomize:
-ifeq (,$(wildcard $(KUSTOMIZE)))
-ifeq (,$(shell which kustomize 2>/dev/null))
+ifeq (, $(shell which kustomize 2>/dev/null))
@{ \
set -e ;\
- mkdir -p $(dir $(KUSTOMIZE)) ;\
- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
- tar xzf - -C bin/ ;\
+ mkdir -p bin ;\
+ curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
}
+KUSTOMIZE=$(realpath ./bin/kustomize)
else
-KUSTOMIZE = $(shell which kustomize)
-endif
+KUSTOMIZE=$(shell which kustomize)
endif
-# Download ansible-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
-.PHONY: ansible-operator
-ANSIBLE_OPERATOR = $(shell pwd)/bin/ansible-operator
ansible-operator:
-ifeq (,$(wildcard $(ANSIBLE_OPERATOR)))
-ifeq (,$(shell which ansible-operator 2>/dev/null))
+ifeq (, $(shell which ansible-operator 2>/dev/null))
@{ \
set -e ;\
- mkdir -p $(dir $(ANSIBLE_OPERATOR)) ;\
- curl -sSLo $(ANSIBLE_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/ansible-operator_$(OS)_$(ARCH) ;\
- chmod +x $(ANSIBLE_OPERATOR) ;\
+ mkdir -p bin ;\
+ curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
+ mv ansible-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/ansible-operator ;\
+ chmod +x ./bin/ansible-operator ;\
}
+ANSIBLE_OPERATOR=$(realpath ./bin/ansible-operator)
else
-ANSIBLE_OPERATOR = $(shell which ansible-operator)
-endif
+ANSIBLE_OPERATOR=$(shell which ansible-operator)
endif
Makefile diff for Helm-based Operator projects
PATH := $(PATH):$(PWD)/bin
SHELL := env PATH=$(PATH) /bin/sh
-OS := $(shell uname -s | tr '[:upper:]' '[:lower:]')
-ARCH := $(shell uname -m | sed 's/x86_64/amd64/')
+OS = $(shell uname -s | tr '[:upper:]' '[:lower:]')
+ARCH = $(shell uname -m | sed 's/x86_64/amd64/')
+OSOPER = $(shell uname -s | tr '[:upper:]' '[:lower:]' | sed 's/darwin/apple-darwin/' | sed 's/linux/linux-gnu/')
+ARCHOPER = $(shell uname -m )
-# Download kustomize locally if necessary, preferring the $(pwd)/bin path over global if both exist.
-.PHONY: kustomize
-KUSTOMIZE = $(shell pwd)/bin/kustomize
kustomize:
-ifeq (,$(wildcard $(KUSTOMIZE)))
-ifeq (,$(shell which kustomize 2>/dev/null))
+ifeq (, $(shell which kustomize 2>/dev/null))
@{ \
set -e ;\
- mkdir -p $(dir $(KUSTOMIZE)) ;\
- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | \
- tar xzf - -C bin/ ;\
+ mkdir -p bin ;\
+ curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.4_$(OS)_$(ARCH).tar.gz | tar xzf - -C bin/ ;\
}
+KUSTOMIZE=$(realpath ./bin/kustomize)
else
-KUSTOMIZE = $(shell which kustomize)
-endif
+KUSTOMIZE=$(shell which kustomize)
endif
-# Download helm-operator locally if necessary, preferring the $(pwd)/bin path over global if both exist.
-.PHONY: helm-operator
-HELM_OPERATOR = $(shell pwd)/bin/helm-operator
helm-operator:
-ifeq (,$(wildcard $(HELM_OPERATOR)))
-ifeq (,$(shell which helm-operator 2>/dev/null))
+ifeq (, $(shell which helm-operator 2>/dev/null))
@{ \
set -e ;\
- mkdir -p $(dir $(HELM_OPERATOR)) ;\
- curl -sSLo $(HELM_OPERATOR) https://github.com/operator-framework/operator-sdk/releases/download/v1.3.0/helm-operator_$(OS)_$(ARCH) ;\
- chmod +x $(HELM_OPERATOR) ;\
+ mkdir -p bin ;\
+ curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.8.0/helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ;\
+ mv helm-operator-v1.8.0-$(ARCHOPER)-$(OSOPER) ./bin/helm-operator ;\
+ chmod +x ./bin/helm-operator ;\
}
+HELM_OPERATOR=$(realpath ./bin/helm-operator)
else
-HELM_OPERATOR = $(shell which helm-operator)
-endif
+HELM_OPERATOR=$(shell which helm-operator)
endif
Move the positional directory argument . in the docker-build make target.
The directory argument . in the docker-build target was moved to the last positional argument to align with podman CLI expectations, which makes substitution cleaner. Change:
docker-build:
docker build . -t ${IMG}
to:
docker-build:
docker build -t ${IMG} .
You can make this change by running the following command:
$ sed -i 's/docker build . -t ${IMG}/docker build -t ${IMG} ./' $(git grep -l 'docker.*build \. ')
For Ansible- and Helm-based Operator projects, add a help target to the Makefile.
Ansible- and Helm-based projects now provide a help target in the Makefile by default, similar to a --help flag. You can manually add this target to your Makefile by using the following lines; a usage example follows the target:
help target
##@ General
# The help target prints out all targets with their descriptions organized
# beneath their categories. The categories are represented by '##@' and the
# target descriptions by '##'. The awk command is responsible for reading the
# entire set of makefiles included in this invocation, looking for lines of the
# file as xyz: ## something, and then pretty-format the target and help. Then,
# if there's a line with ##@ something, that gets pretty-printed as a category.
# More info on the usage of ANSI control characters for terminal formatting:
# https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters
# More info on the awk command:
# http://linuxcommand.org/lc3_adv_awk.php
help: ## Display this help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
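After adding the target, you can list all documented targets and their categories:
$ make help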
Add opm and catalog-build targets. You can use these targets to create your own catalogs for your Operator or add your Operator bundles to an existing catalog; a usage sketch follows the targets.
Add the targets to your Makefile by adding the following lines:
opm and catalog-build targets
.PHONY: opm
OPM = ./bin/opm
opm:
ifeq (,$(wildcard $(OPM)))
ifeq (,$(shell which opm 2>/dev/null))
@{ \
set -e ;\
mkdir -p $(dir $(OPM)) ;\
curl -sSLo $(OPM) https://github.com/operator-framework/operator-registry/releases/download/v1.15.1/$(OS)-$(ARCH)-opm ;\
chmod +x $(OPM) ;\
}
else
OPM = $(shell which opm)
endif
endif
BUNDLE_IMGS ?= $(BUNDLE_IMG)
CATALOG_IMG ?= $(IMAGE_TAG_BASE)-catalog:v$(VERSION)
ifneq ($(origin CATALOG_BASE_IMG), undefined)
FROM_INDEX_OPT := --from-index $(CATALOG_BASE_IMG)
endif
.PHONY: catalog-build
catalog-build: opm
$(OPM) index add --container-tool docker --mode semver --tag $(CATALOG_IMG) --bundles $(BUNDLE_IMGS) $(FROM_INDEX_OPT)
.PHONY: catalog-push
catalog-push: ## Push the catalog image.
$(MAKE) docker-push IMG=$(CATALOG_IMG)
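The following is a usage sketch, assuming the BUNDLE_IMG, IMAGE_TAG_BASE, and VERSION variables are already defined in your Makefile; the bundle image pull spec is a placeholder:
$ make catalog-build BUNDLE_IMGS=<bundle_image_pull_spec>
$ make catalog-push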
If you are updating a Go-based Operator project, also add the following Makefile variables:
Makefile variables
OS = $(shell go env GOOS)
ARCH = $(shell go env GOARCH)
For Go-based Operator projects, set the SHELL variable in your Makefile to the system bash binary.
Importing the setup-envtest.sh script requires bash, so the SHELL variable must be set to bash with error options:
Makefile diff
else GOBIN=$(shell go env GOBIN)
endif
+# Setting SHELL to bash allows bash commands to be executed by recipes.
+# This is a requirement for 'setup-envtest.sh' in the test target.
+# Options are set to exit when a recipe line exits non-zero or a piped command fails.
+SHELL = /usr/bin/env bash -o pipefail
+.SHELLFLAGS = -ec
+
all: build
For Go-based Operator projects, upgrade controller-runtime to v0.8.3 and Kubernetes dependencies to v0.20.2 by changing the following entries in your go.mod file, then rebuild your project; an example rebuild sequence follows the entries:
go.mod file
...
k8s.io/api v0.20.2
k8s.io/apimachinery v0.20.2
k8s.io/client-go v0.20.2
sigs.k8s.io/controller-runtime v0.8.3
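A minimal rebuild sequence, assuming a standard Go toolchain is installed, might look like the following:
$ go mod tidy
$ go build ./...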
Add a system:controller-manager service account to your project. A non-default service account, controller-manager, is now generated by the operator-sdk init command to improve security for Operators installed in shared namespaces. To add this service account to your existing project, follow these steps:
Create the ServiceAccount definition in a file:
config/rbac/service_account.yaml file
apiVersion: v1
kind: ServiceAccount
metadata:
name: controller-manager
namespace: system
Add the service account to the list of RBAC resources:
$ echo "- service_account.yaml" >> config/rbac/kustomization.yaml
Update all RoleBinding and ClusterRoleBinding objects that reference the Operator’s service account:
$ find config/rbac -name "*_binding.yaml" -exec sed -i -e 's/ name: default/ name: controller-manager/g' {} \;
Add the service account name to the manager deployment’s spec.template.spec.serviceAccountName field:
$ sed -i -E 's/([ ]+)(terminationGracePeriodSeconds:)/\1serviceAccountName: controller-manager\n\1\2/g' config/manager/manager.yaml
Verify the changes look like the following diffs:
config/manager/manager.yaml file diff
...
requests:
cpu: 100m
memory: 20Mi
+ serviceAccountName: controller-manager
terminationGracePeriodSeconds: 10
config/rbac/auth_proxy_role_binding.yaml file diff
...
name: proxy-role
subjects:
- kind: ServiceAccount
- name: default
+ name: controller-manager
namespace: system
config/rbac/kustomization.yaml file diff
resources:
+- service_account.yaml
- role.yaml
- role_binding.yaml
- leader_election_role.yaml
config/rbac/leader_election_role_binding.yaml file diff
...
name: leader-election-role
subjects:
- kind: ServiceAccount
- name: default
+ name: controller-manager
namespace: system
config/rbac/role_binding.yaml file diff
...
name: manager-role
subjects:
- kind: ServiceAccount
- name: default
+ name: controller-manager
namespace: system
config/rbac/service_account.yaml file diff
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: controller-manager
+ namespace: system
Make the following changes to your config/manifests/kustomization.yaml file:
Add a Kustomize patch to remove the cert-manager volume and volumeMount objects from your cluster service version (CSV).
Because Operator Lifecycle Manager (OLM) does not yet support cert-manager, a JSON patch was added to remove this volume and mount so OLM can create and manage certificates for your Operator.
In the config/manifests/kustomization.yaml file, add the following lines:
config/manifests/kustomization.yaml file
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: controller-manager
namespace: system
patch: |-
# Remove the manager container's "cert" volumeMount, since OLM will create and mount a set of certs.
# Update the indices in this path if adding or removing containers/volumeMounts in the manager's Deployment.
- op: remove
path: /spec/template/spec/containers/1/volumeMounts/0
# Remove the "cert" volume, since OLM will create and mount a set of certs.
# Update the indices in this path if adding or removing volumes in the manager's Deployment.
- op: remove
path: /spec/template/spec/volumes/0
Optional: For Ansible- and Helm-based Operator projects, configure ansible-operator and helm-operator with a component config. To add this option, follow these steps:
Create the following file:
config/default/manager_config_patch.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
args:
- "--config=controller_manager_config.yaml"
volumeMounts:
- name: manager-config
mountPath: /controller_manager_config.yaml
subPath: controller_manager_config.yaml
volumes:
- name: manager-config
configMap:
name: manager-config
Create the following file:
config/manager/controller_manager_config.yaml file
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
healthProbeBindAddress: :6789
metrics:
bindAddress: 127.0.0.1:8080
leaderElection:
leaderElect: true
resourceName: <resource_name>
Update the config/default/kustomization.yaml file by applying the following changes to resources:
config/default/kustomization.yaml file
resources:
...
- manager_config_patch.yaml
Update the config/manager/kustomization.yaml file by applying the following changes:
config/manager/kustomization.yaml file
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- files:
- controller_manager_config.yaml
name: manager-config
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: quay.io/example/memcached-operator
newTag: v0.0.1
Optional: Add a manager config patch to the config/default/kustomization.yaml file.
The generated --config flag was not added to either the ansible-operator or helm-operator binary when config file support was originally added, so it does not currently work. The --config flag supports configuration of both binaries by file; this method of configuration applies only to the underlying controller manager and not to the Operator as a whole.
To optionally configure the Operator’s deployment with a config file, make changes to the config/default/kustomization.yaml file as shown in the following diff:
config/default/kustomization.yaml file diff
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml
+# Mount the controller config file for loading manager configurations
+# through a ComponentConfig type
+- manager_config_patch.yaml
Flags can be used as is or to override config file values.
For Ansible- and Helm-based Operator projects, add role rules for leader election by making the following changes to the config/rbac/leader_election_role.yaml file:
config/rbac/leader_election_role.yaml file
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
For Ansible-based Operator projects, update Ansible collections.
In your requirements.yml file, change the version field for community.kubernetes to 1.2.1 and the version field for operator_sdk.util to 0.2.0.
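A minimal requirements.yml sketch with the updated versions; any other collections your project requires stay as they are:
requirements.yml file
collections:
  - name: community.kubernetes
    version: "1.2.1"
  - name: operator_sdk.util
    version: "0.2.0"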
Make the following changes to your config/default/manager_auth_proxy_patch.yaml file:
For Ansible-based Operator projects, add the --health-probe-bind-address=:6789 argument to the config/default/manager_auth_proxy_patch.yaml file:
config/default/manager_auth_proxy_patch.yaml file
spec:
template:
spec:
containers:
- name: manager
args:
- "--health-probe-bind-address=:6789"
...
For Helm-based Operator projects:
Add the --health-probe-bind-address=:8081 argument to the config/default/manager_auth_proxy_patch.yaml file:
config/default/manager_auth_proxy_patch.yaml file
spec:
template:
spec:
containers:
- name: manager
args:
- "--health-probe-bind-address=:8081"
...
Replace the deprecated flag --enable-leader-election with --leader-elect, and the deprecated flag --metrics-addr with --metrics-bind-address.
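For example, assuming your patch previously used the deprecated flags, the manager args might change roughly as follows; this is a sketch, and the exact addresses depend on your existing configuration:
args:
- "--health-probe-bind-address=:8081"
- "--metrics-bind-address=127.0.0.1:8080"  # previously: --metrics-addr=127.0.0.1:8080
- "--leader-elect"  # previously: --enable-leader-election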
Make the following changes to your config/prometheus/monitor.yaml file:
Add scheme, token, and TLS config to the Prometheus ServiceMonitor metrics endpoint.
The /metrics endpoint, while specifying the https port on the manager pod, was not actually configured to serve over HTTPS because no tlsConfig was set. Because kube-rbac-proxy secures this endpoint as a manager sidecar, using the service account token mounted into the pod by default corrects this problem.
Apply the changes to the config/prometheus/monitor.yaml file as shown in the following diff:
config/prometheus/monitor.yaml file diff
spec:
endpoints:
- path: /metrics
port: https
+ scheme: https
+ bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+ tlsConfig:
+ insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
For Ansible-based Operator projects, verify that existing dependent resources have owner annotations.
Previously, owner reference annotations on cluster-scoped dependent resources and on dependent resources in other namespaces were not applied correctly. The workaround was to add these annotations manually, which is no longer required because this bug has been fixed.
Deprecate support for package manifests.
The Operator Framework is removing support for the Operator package manifest format in a future release. As part of the ongoing deprecation process, the operator-sdk generate packagemanifests and operator-sdk run packagemanifests commands are now deprecated. To migrate package manifests to bundles, you can use the operator-sdk pkgman-to-bundle command.
Run the operator-sdk pkgman-to-bundle --help command and see "Migrating package manifest projects to bundle format" for more details.
Update the finalizer names for your Operator.
The finalizer name format suggested by Kubernetes documentation is:
<qualified_group>/<finalizer_name>
while the format previously documented for Operator SDK was:
<finalizer_name>.<qualified_group>
If your Operator uses any finalizers with names that match the incorrect format, change them to match the official format. For example, finalizer.cache.example.com must be changed to cache.example.com/finalizer.
Your Operator project is now compatible with Operator SDK v1.8.0.