To manage the lifecycle of your application on Kubernetes using Ansible, you can use the Kubernetes Collection for Ansible. This collection of Ansible modules allows a developer to either leverage their existing Kubernetes resource files written in YAML or express the lifecycle management in native Ansible.
The Red Hat-supported version of the Operator SDK CLI tool, including the related scaffolding and testing tools for Operator projects, is deprecated and is planned to be removed in a future release of Red Hat OpenShift Service on AWS. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed from future Red Hat OpenShift Service on AWS releases. The Red Hat-supported version of the Operator SDK is not recommended for creating new Operator projects. Operator authors with existing Operator projects can use the version of the Operator SDK CLI tool released with Red Hat OpenShift Service on AWS to maintain their projects and create Operator releases targeting newer versions of Red Hat OpenShift Service on AWS. The following related base images for Operator projects are not deprecated. The runtime functionality and configuration APIs for these base images are still supported for bug fixes and for addressing CVEs.
For information about the unsupported, community-maintained version of the Operator SDK, see Operator SDK (Operator Framework).
One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
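For example, a single task from the collection can apply an existing manifest rendered through Jinja templating. The file name and variables below are illustrative placeholders, not files generated by the Operator SDK:
---
- name: Apply a templated deployment manifest
  community.kubernetes.k8s:
    state: present
    namespace: "{{ app_namespace }}"  # illustrative variable
    definition: "{{ lookup('template', 'deployment.yml.j2') }}"  # renders the Jinja template before applying it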
This section goes into detail on usage of the Kubernetes Collection. To get started, install the collection on your local workstation and test it using a playbook before moving on to using it within an Operator.
You can install the Kubernetes Collection for Ansible on your local workstation.
Install Ansible 2.15+:
$ sudo dnf install ansible
Install the Python Kubernetes client package:
$ pip install kubernetes
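Optionally, you can confirm that both prerequisites are available. These are generic sanity checks, not steps required by the Operator SDK:
$ ansible --version
$ python3 -c "import kubernetes; print(kubernetes.__version__)"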
Install the Kubernetes Collection using one of the following methods:
You can install the collection directly from Ansible Galaxy:
$ ansible-galaxy collection install community.kubernetes
If you have already initialized your Operator, you might have a requirements.yml file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the community.kubernetes collection as well as the operator_sdk.util collection, which provides modules and plugins for Operator-specific functions.
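A scaffolded requirements.yml typically looks similar to the following. The exact entries and pinned versions depend on the Operator SDK release that generated your project, so treat this as a sketch rather than the canonical file:
---
collections:
  - name: community.kubernetes
  - name: operator_sdk.util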
To install the dependent modules from the requirements.yml file:
$ ansible-galaxy collection install -r requirements.yml
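You can optionally confirm that the collections were installed by listing the collections known to Ansible; both community.kubernetes and operator_sdk.util should appear in the output:
$ ansible-galaxy collection list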
Operator developers can run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.
Initialize an Ansible-based Operator project and create an API that has a generated Ansible role by using the Operator SDK
Install the Kubernetes Collection for Ansible
In your Ansible-based Operator project directory, modify the roles/<kind>/tasks/main.yml file with the Ansible logic that you want. The roles/<kind>/ directory is created when you use the --generate-role flag while creating an API. The <kind> replaceable matches the kind that you specified for the API.
The following example creates and deletes a config map based on the value of a variable named state:
---
- name: set configmap example-config to {{ state }}
  community.kubernetes.k8s:
    api_version: v1
    kind: ConfigMap
    name: example-config
    namespace: <operator_namespace> (1)
    state: "{{ state }}"
  ignore_errors: true (2)
(1) Specify the namespace where you want the config map created.
(2) Setting ignore_errors: true ensures that deleting a nonexistent config map does not fail.
Modify the roles/<kind>/defaults/main.yml file to set state to present by default:
---
state: present
Create an Ansible playbook by creating a playbook.yml file at the top level of your project directory, and include your <kind> role:
---
- hosts: localhost
  roles:
    - <kind>
Run the playbook:
$ ansible-playbook playbook.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************
TASK [Gathering Facts] ********************************************************************************
ok: [localhost]
TASK [memcached : set configmap example-config to present] ********************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Verify that the config map was created:
$ oc get configmaps
NAME DATA AGE
example-config 0 2m1s
Rerun the playbook setting state to absent:
$ ansible-playbook playbook.yml --extra-vars state=absent
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************
TASK [Gathering Facts] ********************************************************************************
ok: [localhost]
TASK [memcached : set configmap example-config to absent] ********************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Verify that the config map was deleted:
$ oc get configmaps
See Using Ansible inside an Operator for details on triggering your custom Ansible logic inside of an Operator when a custom resource (CR) changes.
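Inside an Operator, that triggering is wired up through the watches.yaml file generated by the Operator SDK, which maps a custom resource to the Ansible role or playbook that reconciles it. The following is a minimal sketch; the group, version, kind, and role name are placeholder values following common scaffolding conventions, not values taken from this section:
---
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: memcached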