Creating Ansible-based Operators

Ansible support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK’s options for generating an Operator project is to leverage existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.

Custom Resource files

Operators use the Kubernetes extension mechanism, Custom Resource Definitions (CRDs), so your Custom Resource (CR) looks and acts just like a built-in, native Kubernetes object.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 1. Custom Resource fields
Field Description

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which appends condition information to the CR’s status.

annotations

Kubernetes-specific annotations to be appended to the CR.
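
For reference, a minimal CR combining these fields might look like the following sketch; the Foo kind and its message variable are hypothetical:

apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
spec:
  message: "Hello"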

The following CR annotations modify the behavior of the Operator:

Table 2. Ansible-based Operator annotations
Annotation Description

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Go time package; specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation
apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/reconcile-period: "30s"

Watches file

The Watches file contains a list of mappings from Custom Resources (CRs), each identified by its group, version, and kind, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location, /opt/ansible/watches.yaml.

Table 3. Watches file mappings
Field Description

group

Group of CR to watch.

version

Version of CR to watch.

kind

Kind of CR to watch.

role (default)

Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.

playbook

Path to the Ansible playbook added to the container. This playbook is expected to be simply a way to call roles. This field is mutually exclusive with the role field.

reconcilePeriod (optional)

The reconciliation interval, how often the role or playbook is run, for a given CR.

manageStatus (optional)

When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example Watches file
- version: v1alpha1 (1)
  group: foo.example.com
  kind: Foo
  role: /opt/ansible/roles/Foo

- version: v1alpha1 (2)
  group: bar.example.com
  kind: Bar
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 (3)
  group: baz.example.com
  kind: Baz
  playbook: /opt/ansible/baz.yml
  reconcilePeriod: 0
  manageStatus: false
1 Simple example mapping Foo to the Foo role.
2 Simple example mapping Bar to a playbook.
3 More complex example for the Baz kind. Disables re-queuing and managing the CR status in the playbook.

Advanced options

Advanced features can be enabled by adding them to your Watches file per GVK (group, version, and kind), below the group, version, kind, and playbook or role fields.

Some features can be overridden per resource using an annotation on that Custom Resource (CR). The options that can be overridden have their annotation listed in the following table.

Table 4. Advanced Watches file options
Feature (YAML key) Description

Reconcile period (reconcilePeriod)

Time between reconcile runs for a particular CR. Annotation for override: ansible.operator-sdk/reconcile-period. Default value: 1m.

Manage status (manageStatus)

Allows the Operator to manage the conditions section of each CR’s status section. No override annotation. Default value: true.

Watch dependent resources (watchDependentResources)

Allows the Operator to dynamically watch resources that are created by Ansible. No override annotation. Default value: true.

Watch cluster-scoped resources (watchClusterScopedResources)

Allows the Operator to watch cluster-scoped resources that are created by Ansible. No override annotation. Default value: false.

Max runner artifacts (maxRunnerArtifacts)

Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. Annotation for override: ansible.operator-sdk/max-runner-artifacts. Default value: 20.

Example Watches file with advanced options
- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False
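
Options with an annotation listed in the table can also be overridden on an individual CR. For example, the following CR sketch (the annotation value is illustrative) lowers the artifact limit for just this resource:

apiVersion: "app.example.com/v1alpha1"
kind: "AppService"
metadata:
  name: "example"
  annotations:
    ansible.operator-sdk/max-runner-artifacts: "10"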

Extra variables sent to Ansible

Extra variables, which are managed by the Operator, can be sent to Ansible. The spec section of the Custom Resource (CR) passes along its key-value pairs as extra variables. This is equivalent to extra variables passed to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message:"Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set as top-level extra variables (newParameter is converted to snake case as new_parameter), and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

- debug:
    msg: "name: {{ meta.name }}, {{ meta.namespace }}"

Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.
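
For example, for the example-memcached CR created later in this guide, in the default namespace (assumed here for illustration), the run artifacts would be stored under:

/tmp/ansible-operator/runner/cache.example.com/v1alpha1/Memcached/default/example-memcached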


Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites
  • Go v1.13+

  • docker v17.03+, podman v1.2.0+, or buildah v1.7+

  • OpenShift CLI (oc) 4.4+ installed

  • Access to a cluster based on Kubernetes v1.12.0+

  • Access to a container registry

Procedure
  1. Set the release version variable:

    RELEASE_VERSION=v0.15.0
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the maintainer’s public key on your workstation, you will get the following error:

      $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> (1)
      gpg: Can't check signature: No public key
      1 RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" (1)
      1 If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites
  • Homebrew

  • docker v17.03+, podman v1.2.0+, or buildah v1.7+

  • OpenShift CLI (oc) 4.4+ installed

  • Access to a cluster based on Kubernetes v1.12.0+

  • Access to a container registry

Procedure
  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites
  • Git

  • Go v1.13+

  • docker v17.03+, podman v1.2.0+, or buildah v1.7+

  • OpenShift CLI (oc) 4.4+ installed

  • Access to a cluster based on Kubernetes v1.12.0+

  • Access to a container registry

Procedure
  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired release branch:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

Building an Ansible-based Operator using the Operator SDK

This procedure walks through an example of building a simple Memcached Operator powered by Ansible playbooks and modules using tools and libraries provided by the Operator SDK.

Prerequisites
  • Operator SDK CLI installed on the development workstation

  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.4) using an account with cluster-admin permissions

  • OpenShift CLI (oc) v4.1+ installed

  • ansible v2.6.0+

  • ansible-runner v1.1.0+

  • ansible-runner-http v1.0.0+

Procedure
  1. Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

    To create a new Ansible-based, namespace-scoped memcached-operator project and change to its directory, use the following commands:

    $ operator-sdk new memcached-operator \
        --api-version=cache.example.com/v1alpha1 \
        --kind=Memcached \
        --type=ansible
    $ cd memcached-operator

    This creates the memcached-operator project specifically for watching the Memcached resource with APIVersion cache.example.com/v1alpha1 and Kind Memcached.

  2. Customize the Operator logic.

    For this example, the memcached-operator executes the following reconciliation logic for each Memcached Custom Resource (CR):

    • Create a memcached deployment if it does not exist.

    • Ensure that the deployment size is the same as specified by the Memcached CR.

    By default, the memcached-operator watches Memcached resource events as shown in the watches.yaml file and executes the memcached Ansible role:

    - version: v1alpha1
      group: cache.example.com
      kind: Memcached

    You can optionally customize the following logic in the watches.yaml file:

    1. Specifying a role option configures the Operator to use this specified path when launching ansible-runner with an Ansible role. By default, the new command fills in an absolute path to where your role should go:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        role: /opt/ansible/roles/memcached
    2. Specifying a playbook option in the watches.yaml file configures the Operator to use this specified path when launching ansible-runner with an Ansible playbook:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        playbook: /opt/ansible/playbook.yaml
  3. Build the Memcached Ansible role.

    Modify the generated Ansible role under the roles/memcached/ directory. This Ansible role controls the logic that is executed when a resource is modified.

    1. Define the Memcached spec.

      Defining the spec for an Ansible-based Operator can be done entirely in Ansible. The Ansible Operator passes all key-value pairs listed in the CR spec field along to Ansible as variables. The names of all variables in the spec field are converted to snake case (lowercase with underscores) by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

      You should perform some type validation on these variables in Ansible to ensure that your application is receiving the expected input; a minimal validation sketch is shown at the end of this step.

      In case the user does not set the spec field, set a default by modifying the roles/memcached/defaults/main.yml file:

      size: 1
    2. Define the Memcached deployment.

      With the Memcached spec now defined, you can define the Ansible logic that is executed on resource changes. Because this is an Ansible role, the default behavior executes the tasks in the roles/memcached/tasks/main.yml file.

      The goal is for Ansible to create a deployment, running the memcached:1.4.36-alpine image, if one does not exist. This example leverages the k8s Ansible module, available in Ansible 2.7+, to control the deployment definition.

      Modify the roles/memcached/tasks/main.yml to match the following:

      - name: start memcached
        k8s:
          definition:
            kind: Deployment
            apiVersion: apps/v1
            metadata:
              name: '{{ meta.name }}-memcached'
              namespace: '{{ meta.namespace }}'
            spec:
              replicas: "{{size}}"
              selector:
                matchLabels:
                  app: memcached
              template:
                metadata:
                  labels:
                    app: memcached
                spec:
                  containers:
                  - name: memcached
                    command:
                    - memcached
                    - -m=64
                    - -o
                    - modern
                    - -v
                    image: "docker.io/memcached:1.4.36-alpine"
                    ports:
                      - containerPort: 11211

      This example uses the size variable to control the number of replicas of the Memcached deployment. The default is set to 1, but any user can create a CR that overwrites the default.
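
    As noted earlier in this step, you should validate the incoming spec variables. A minimal sketch using the standard Ansible assert module, which could be placed at the top of the roles/memcached/tasks/main.yml file (the failure message is illustrative):

    - name: validate size
      assert:
        that:
          - size is defined
          - size | int > 0
        fail_msg: "spec.size must be a positive integer"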

  4. Deploy the CRD.

    Before running the Operator, Kubernetes needs to know about the new Custom Resource Definition (CRD) the Operator will be watching. Deploy the Memcached CRD:

    $ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
  5. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a Pod inside a Kubernetes cluster.

    • As a process outside the cluster using the operator-sdk run --local command.

    Choose one of the following methods:

    1. Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the memcached-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        $ podman push quay.io/example/memcached-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file must be changed from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
      3. Deploy the memcached-operator:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      4. Verify that the memcached-operator is up and running:

        $ oc get deployment
        NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        memcached-operator       1         1         1            1           1m
    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      Ensure that Ansible Runner and the Ansible Runner HTTP plug-in are installed; otherwise, you will see unexpected errors from Ansible Runner when a CR is created.

      It is also important that the role path referenced in the watches.yaml file exists on your machine. Normally, the role is placed on disk inside the Operator container, so for local runs you must manually copy the role to the configured Ansible roles path (for example, /etc/ansible/roles).

      1. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk run --local --kubeconfig=config
  6. Create a Memcached CR.

    1. Modify the deploy/crds/cache_v1alpha1_memcached_cr.yaml file as shown and create a Memcached CR:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3
      
      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Ensure that the memcached-operator creates the deployment for the CR:

      $ oc get deployment
      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m
    3. Check the pods to confirm three replicas were created:

      $ oc get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
  7. Update the size.

    1. Change the spec.size field in the Memcached CR from 3 to 4 and apply the change:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4
      
      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Confirm that the Operator changes the deployment size:

      $ oc get deployment
      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m
  8. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml

Managing application lifecycle using the k8s Ansible module

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the k8s Ansible module. This Ansible module allows a developer to either leverage their existing Kubernetes resource files (written in YAML) or express the lifecycle management in native Ansible.

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
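
As a minimal sketch of this pattern (the template file name is illustrative), an existing resource file converted to a Jinja template can be rendered and applied with the k8s module using a template lookup:

- name: apply a templated deployment
  k8s:
    definition: "{{ lookup('template', 'deployment.yml.j2') | from_yaml }}"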

This section goes into detail on usage of the k8s Ansible module. To get started, install the module on your local workstation and test it using a playbook before moving on to using it within an Operator.

Installing the k8s Ansible module

To install the k8s Ansible module on your local workstation:

Procedure
  1. Install Ansible 2.6+:

    $ sudo yum install ansible
  2. Install the OpenShift Python client package using pip:

    $ pip install openshift

Testing the k8s Ansible module locally

Sometimes, it is beneficial for a developer to run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.

Procedure
  1. Initialize a new Ansible-based Operator project:

    $ operator-sdk new --type ansible --kind Foo --api-version foo.example.com/v1alpha1 foo-operator
    Create foo-operator/tmp/init/galaxy-init.sh
    Create foo-operator/tmp/build/Dockerfile
    Create foo-operator/tmp/build/test-framework/Dockerfile
    Create foo-operator/tmp/build/go-test.sh
    Rendering Ansible Galaxy role [foo-operator/roles/Foo]...
    Cleaning up foo-operator/tmp/init
    Create foo-operator/watches.yaml
    Create foo-operator/deploy/rbac.yaml
    Create foo-operator/deploy/crd.yaml
    Create foo-operator/deploy/cr.yaml
    Create foo-operator/deploy/operator.yaml
    Run git init ...
    Initialized empty Git repository in /home/dymurray/go/src/github.com/dymurray/opsdk/foo-operator/.git/
    Run git init done
    $ cd foo-operator
  2. Modify the roles/Foo/tasks/main.yml file with the desired Ansible logic. This example creates and deletes a namespace with the switch of a variable.

    - name: set test namespace to {{ state }}
      k8s:
        api_version: v1
        kind: Namespace
        state: "{{ state }}"
        name: test
      ignore_errors: true (1)
    1 Setting ignore_errors: true ensures that deleting a nonexistent namespace does not fail.
  3. Modify the roles/Foo/defaults/main.yml file to set state to present by default.

    state: present
  4. Create an Ansible playbook playbook.yml in the top-level directory, which includes the Foo role:

    - hosts: localhost
      roles:
        - Foo
  5. Run the playbook:

    $ ansible-playbook playbook.yml
     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [Foo : set test namespace to present] *************************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0
  6. Check that the namespace was created:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s
  7. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent
     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [Foo : set test namespace to absent] **************************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0
  8. Check that the namespace was deleted:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

Testing the k8s Ansible module inside an Operator

After you are familiar with using the k8s Ansible module locally, you can trigger the same Ansible logic inside of an Operator when a Custom Resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the Watches file, as shown in the sketch below.
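
For example, an entry in the Watches file mapping the Foo kind to the Foo role, mirroring the earlier foo-operator examples, would look like:

- version: v1alpha1
  group: foo.example.com
  kind: Foo
  role: /opt/ansible/roles/Foo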

Testing an Ansible-based Operator locally

After getting comfortable testing Ansible workflows locally, you can test the logic inside of an Ansible-based Operator running locally.

To do so, use the operator-sdk run --local command from the top-level directory of your Operator project. This command reads from the ./watches.yaml file and uses the ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s Ansible module does.

Procedure
  1. Because the run --local command reads from the ./watches.yaml file, there are a few options available to the Operator author. If the role field is left at its default (/opt/ansible/roles/<name>), you must copy the role to the /opt/ansible/roles/ directory on your machine.

    This is cumbersome because changes are not reflected from the current directory. Instead, change the role field to point to the current directory and comment out the existing line:

    - version: v1alpha1
      group: foo.example.com
      kind: Foo
      #  role: /opt/ansible/roles/Foo
      role: /home/user/foo-operator/Foo
  2. Create a Custom Resource Definition (CRD) and proper role-based access control (RBAC) definitions for the Custom Resource (CR) Foo. The operator-sdk command autogenerates these files inside of the deploy/ directory:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
  3. Run the run --local command:

    $ operator-sdk run --local
    [...]
    INFO[0000] Starting to serve on 127.0.0.1:8888
    INFO[0000] Watching foo.example.com/v1alpha1, Foo, default
  4. Now that the Operator is watching the resource Foo for events, the creation of a CR triggers your Ansible role to execute. View the deploy/cr.yaml file:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"

    Because the spec field is not set, Ansible is invoked with no extra variables. How extra variables are passed from a CR to Ansible is covered in "Extra variables sent to Ansible" earlier in this guide. This is why it is important to set sane defaults for the Operator.

  5. Create a CR instance of Foo with the default variable state set to present:

    $ oc create -f deploy/cr.yaml
  6. Check that the namespace test was created:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s
  7. Modify the deploy/cr.yaml file to set the state field to absent:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"
    spec:
      state: "absent"
  8. Apply the changes and confirm that the namespace is deleted:

    $ oc apply -f deploy/cr.yaml
    
    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

Testing an Ansible-based Operator on a cluster

After getting familiar running Ansible logic inside of an Ansible-based Operator locally, you can test the Operator inside of a Pod on a Kubernetes cluster, such as OpenShift Container Platform. Running as a Pod on a cluster is preferred for production use.

Procedure
  1. Build the foo-operator image and push it to a registry:

    $ operator-sdk build quay.io/example/foo-operator:v0.0.1
    $ podman push quay.io/example/foo-operator:v0.0.1
  2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file must be changed from the placeholder REPLACE_IMAGE to the previously built image. To do so, run the following command:

    $ sed -i 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml

    If you are performing these steps on macOS, use the following command instead:

    $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
  3. Deploy the foo-operator:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml # if CRD doesn't exist already
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
    $ oc create -f deploy/operator.yaml
  4. Verify that the foo-operator is up and running:

    $ oc get deployment
    NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    foo-operator             1         1         1            1           1m

Managing Custom Resource status using the operator_sdk.util Ansible collection

Ansible-based Operators automatically update Custom Resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
      113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.

Procedure
  1. To track CR status manually from your application, update the Watches file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: Foo
      role: Foo
      manageStatus: false
  2. Then, use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key foo and value bar, operator_sdk.util can be used as shown:

    - operator_sdk.util.k8s_status:
        api_version: api.example.com/v1
        kind: Foo
        name: "{{ meta.name }}"
        namespace: "{{ meta.namespace }}"
        status:
          foo: bar

Collections can also be declared in the role’s meta/main.yml file, which is included for new scaffolded Ansible-based Operators.

collections:
  - operator_sdk.util

Declaring collections in the role meta allows you to invoke the k8s_status module directly:


k8s_status:
  <snip>
  status:
    foo: bar