Upgrading projects for newer Operator SDK versions

OKD 4.10 supports Operator SDK v1.16.0. If you already have the v1.10.1 CLI installed on your workstation, you can update the CLI to v1.16.0 by installing the latest version.

However, to ensure your existing Operator projects maintain compatibility with Operator SDK v1.16.0, update steps are required for the associated breaking changes introduced since v1.10.1. You must perform the update steps manually in any of your Operator projects that were previously created or maintained with v1.10.1.
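
To check which version of the CLI is currently installed on your workstation, you can run the operator-sdk version command:

  $ operator-sdk version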

Updating projects for Operator SDK v1.16.0

The following procedure updates an existing Operator project for compatibility with v1.16.0.

  • Operator SDK v1.16.0 supports Kubernetes 1.22.

  • Many deprecated v1beta1 APIs were removed in Kubernetes 1.22. To support this change, Operator SDK v1.16.0 moves to dependencies such as sigs.k8s.io/controller-runtime v0.10.0 and controller-gen v0.7.

  • Updating projects to Kubernetes 1.22 is a breaking change if you need to scaffold v1beta1 APIs for custom resource definitions (CRDs) or webhooks in order to publish your project to older cluster versions.

See Validating bundle manifests for APIs removed from Kubernetes 1.22 and Beta APIs removed from Kubernetes 1.22 for more information about changes introduced in Kubernetes 1.22.
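
To find out whether an existing bundle references APIs that were removed in Kubernetes 1.22, you can run the optional deprecated-APIs validator against your bundle directory. The bundle path and flag values below are illustrative; see the linked documentation for the exact invocation:

  $ operator-sdk bundle validate ./bundle \
      --select-optional name=alpha-deprecated-apis \
      --optional-values=k8s-version=1.22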

Prerequisites
  • Operator SDK v1.16.0 installed.

  • An Operator project created or maintained with Operator SDK v1.10.1.

Procedure
  1. Add the protocol field in the config/default/manager_auth_proxy_patch.yaml and config/rbac/auth_proxy_service.yaml files:

    ...
     ports:
     - containerPort: 8443
    +  protocol: TCP
       name: https
  2. Make the following changes to the config/manager/manager.yaml file:

    1. Increase the CPU and memory resource limits:

      resources:
        limits:
      -     cpu: 100m
      -     memory: 30Mi
      +     cpu: 200m
      +     memory: 100Mi
    2. Add an annotation to specify manager as the default container:

      ...
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/default-container: manager
      ...
  3. Add .PHONY declarations for all of the targets in your Makefile file.
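
     For example, each target can be declared as .PHONY directly above its rule. The following sketch shows the pattern; the target names come from your project's Makefile:

     .PHONY: test
     test: manifests generate fmt vet envtest ## Run tests.
     ...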

  4. For Go-based Operator projects, make the following changes:

    1. Install the setup-envtest binary.
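
      For example, if Go is available on your workstation, one way to fetch the binary is with go install; the @latest selector is one possible choice, and you can pin a version instead if your project requires it:

      $ go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest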

    2. Change your go.mod file to update the dependencies:

      k8s.io/api v0.22.1
      k8s.io/apimachinery v0.22.1
      k8s.io/client-go v0.22.1
      sigs.k8s.io/controller-runtime v0.10.0
    3. Run the go mod tidy command to download the dependencies:

      $ go mod tidy
    4. Make the following changes to your Makefile file:

      ...
      
      + ENVTEST_K8S_VERSION = 1.22
      
        test: manifests generate fmt vet envtest ## Run tests.
      -   go test ./... -coverprofile cover.out
      +   KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -p path)" go test ./... -coverprofile cover.out
      ...
      
      - $(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
      + $(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
      ...
      
      # Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
      - CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false"
      ...
      - admissionReviewVersions={v1,v1beta1}
      + admissionReviewVersions=v1
      ...
      
      + ifndef ignore-not-found
      +   ignore-not-found = false
      + endif
      
      ##@ Deployment
      ...
      - kubectl delete -f -
      + kubectl delete --ignore-not-found=$(ignore-not-found) -f -
    5. Run the make manifests command to generate your manifests with the updated version of Kubernetes:

      $ make manifests
  5. For Ansible-based Operator projects, make the following changes:

    1. Make the following changes to your requirements.yml file:

      1. Replace the community.kubernetes collection with the kubernetes.core collection:

        ...
        - name: kubernetes.core
          version: "2.2.0"
        ...
      2. Update the operator_sdk.util utility from version 0.2.0 to 0.3.1:

        ...
        - name: operator_sdk.util
          version: "0.3.1"
    2. Verify the default resource limits in the config/manager/manager.yaml file:

      ...
       # TODO(user): Configure the resources accordingly based on the project requirements.
       # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      
      resources:
        limits:
          cpu: 500m
          memory: 768Mi
        requests:
          cpu: 10m
          memory: 256Mi

      Operator SDK scaffolds these values as a reasonable default setting. Operator authors should set and optimize resource limits based on the requirements of their project.

    3. Optional: Make the following changes if you want to run your Ansible-based Operator locally by using the make run command:

      1. Change the run target in the Makefile file:

        ANSIBLE_ROLES_PATH="$(ANSIBLE_ROLES_PATH):$(shell pwd)/roles" $(ANSIBLE_OPERATOR) run
      2. Update the local version of ansible-runner to 2.0.2 or later.

        As of version 2.0, the ansible-runner tool changed its command signature, so earlier versions of the tool are not compatible.
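
        For example, if ansible-runner is installed with pip on your workstation, it can be upgraded in place; pip3 here is an assumption about how the tool was installed:

        $ pip3 install --upgrade "ansible-runner>=2.0.2"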