This example provides information about the integration, deployment, and management of GlusterFS containerized storage nodes by using Heketi running on OpenShift Container Platform.
This example:
Shows how to install and configure a Heketi server on OpenShift to perform dynamic provisioning.
Assumes you have familiarity with Kubernetes and the Kubernetes Persistent Storage model.
Assumes you have access to an existing, dedicated GlusterFS cluster that has raw devices available for consumption and management by a Heketi server. If you do not have this, you can create a three-node cluster using your virtual machine solution of choice. Ensure that you create a few raw devices with plenty of space (at least 100 GB recommended). See the Red Hat Gluster Storage Installation Guide.
This example uses the following environment and prerequisites:
GlusterFS cluster running Red Hat Gluster Storage (RHGS) 3.1, with three nodes, each with at least two 100 GB raw devices:
gluster23.rhs (192.168.1.200)
gluster24.rhs (192.168.1.201)
gluster25.rhs (192.168.1.202)
This example uses an all-in-one OpenShift Container Platform cluster (master and node on a single host), though it can work using a standard, multi-node cluster as well.
k8dev2.rhs (192.168.1.208)
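Optionally, on each Gluster node you can confirm that the raw devices exist and carry no existing file system or LVM signatures before handing them to Heketi. The device names below match this example's environment; substitute your own. blkid prints nothing for an unused raw device:
$ lsblk /dev/sde /dev/sdf
$ blkid /dev/sde /dev/sdf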
Heketi is used to manage the Gluster cluster storage (adding volumes, removing volumes, and so on). Download the deploy-heketi-template to install Heketi on OpenShift.
This template file places the database in an EmptyDir volume. Adjust the database volume accordingly if you require reliable, persistent storage.
Create a new project:
$ oc new-project <project-name>
Enable privileged containers in the new project:
$ oc adm policy add-scc-to-user privileged -z default
Register the deploy-heketi template:
$ oc create -f <template-path>/deploy-heketi-template
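Optionally, confirm that the template is now available in the project:
$ oc get templates -n <project-name>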
Deploy the bootstrap Heketi container:
$ oc process deploy-heketi -v \
HEKETI_KUBE_NAMESPACE=<project-name> \
HEKETI_KUBE_APIHOST=<master-url-and-port> \
HEKETI_KUBE_INSECURE=y \
HEKETI_KUBE_USER=<cluster-admin-username> \
HEKETI_KUBE_PASSWORD=<cluster-admin-password> | oc create -f -
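You can optionally watch the pod come up while the deployment proceeds (press Ctrl+C to stop watching):
$ oc get pods -w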
Wait until the deploy-heketi pod starts and all services are running. Then get the Heketi service details:
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deploy-heketi 172.30.96.173 <none> 8080/TCP 2m
Check that the Heketi service is running properly; it must return Hello from Heketi.
$ curl http://<cluster-ip>:8080/hello
Hello from Heketi
Set an environment variable for the Heketi server:
$ export HEKETI_CLI_SERVER=http://<cluster-ip>:8080
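Optionally, confirm that heketi-cli can reach the server. At this point the cluster list is expected to be empty, because no topology has been loaded yet:
$ heketi-cli cluster list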
A topology is used to tell Heketi about the environment and about the nodes and devices it will manage.
Heketi is currently limited to managing raw devices only. If a device is already part of a Gluster volume, it is skipped and ignored.
Create and load the topology file. A sample file is located in /usr/share/heketi/topology-sample.json by default, or in /etc/heketi, depending on how Heketi was installed.
Depending on your method of installation, this file might not exist. If it is missing, manually create the topology-sample.json file, as shown in the following example.
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "gluster23.rhs"
                            ],
                            "storage": [
                                "192.168.1.200"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sde",
                        "/dev/sdf"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "gluster24.rhs"
                            ],
                            "storage": [
                                "192.168.1.201"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sde",
                        "/dev/sdf"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "gluster25.rhs"
                            ],
                            "storage": [
                                "192.168.1.202"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sde",
                        "/dev/sdf"
                    ]
                }
            ]
        }
    ]
}
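Optionally, before loading the file, you can confirm that the JSON is well formed, for example with Python's built-in JSON tool:
$ python -m json.tool topology-sample.json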
Run the following command to load the topology of your environment.
$ heketi-cli topology load --json=topology-sample.json
Found node gluster23.rhs on cluster bdf9d8ca3fa269ff89854faf58f34b9a
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node gluster24.rhs ... ID: 8e677d8bebe13a3f6846e78a67f07f30
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
...
Create a Gluster volume to verify Heketi:
$ heketi-cli volume create --size=50
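Optionally, list the volumes that Heketi manages to confirm that the volume was created:
$ heketi-cli volume list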
View the volume information from one of the Gluster nodes:
$ gluster volume info
Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
Type: Distributed-Replicate
Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
Status: Started
...
(1) Volume created by heketi-cli.
If you installed OpenShift Container Platform by using the BYO (Bring Your Own) OpenShift Ansible inventory configuration files for either a native or an external GlusterFS instance, the GlusterFS StorageClass is created automatically during installation. In that case, you can skip the following storage class creation steps and proceed directly to the persistent volume claim instructions.
Create a StorageClass object definition. The following definition is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-dyn
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://glusterclient2.rhs:8080" (1)
  restauthenabled: "false" (2)
(1) The Heketi server, as set in the HEKETI_CLI_SERVER environment variable.
(2) Because authentication is not turned on in this example, this is set to false.
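If authentication is enabled on the Heketi server, the storage class must also supply credentials. The following is a sketch of the commonly used parameters for that case; the user name, secret name, and namespace shown here are placeholders and are not part of this example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-dyn-auth
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://glusterclient2.rhs:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"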
From the OpenShift Container Platform master host, create the storage class:
$ oc create -f glusterfs-storageclass1.yaml
storageclass "gluster-dyn" created
Create a persistent volume claim (PVC), requesting the newly-created storage class. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-dyn-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: gluster-dyn
From the OpenShift Container Platform master host, create the PVC:
$ oc create -f glusterfs-pvc-storageclass.yaml
persistentvolumeclaim "gluster-dyn-pvc" created
View the PVC to see that the volume was dynamically created and bound to the PVC:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
gluster-dyn-pvc Bound pvc-78852230-d8e2-11e6-a3fa-0800279cf26f 30Gi RWX gluster-dyn 42s
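Optionally, view the dynamically provisioned persistent volume that was bound to the claim; its name matches the VOLUME column above:
$ oc get pv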
Verify and view the new volume on one of the Gluster nodes:
$ gluster volume info
Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
Type: Distributed-Replicate
Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
Status: Started
...
Volume Name: vol_f1404b619e6be6ef673e2b29d58633be (2)
Type: Distributed-Replicate
Volume ID: 7dc234d0-462f-4c6c-add3-fb9bc7e8da5e
Status: Started
Number of Bricks: 2 x 2 = 4
...
(1) Volume created by heketi-cli.
(2) New volume created dynamically, triggered by Kubernetes and the storage class.
At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now use this PVC in a pod. In this example, create a simple NGINX pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1
spec:
  containers:
    - name: gluster-pod1
      image: gcr.io/google_containers/nginx-slim:0.8
      ports:
        - name: web
          containerPort: 80
      securityContext:
        privileged: true
      volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
  volumes:
    - name: gluster-vol1
      persistentVolumeClaim:
        claimName: gluster-dyn-pvc (1)
(1) The name of the PVC created in the previous step.
From the OpenShift Container Platform master host, create the pod:
$ oc create -f nginx-pod.yaml
pod "gluster-pod1" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
$ oc get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
gluster-pod1 1/1 Running 0 9m 10.38.0.0 node1
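Optionally, confirm that the GlusterFS-backed claim is attached to the pod by inspecting its Volumes section:
$ oc describe pod gluster-pod1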
Now remote into the container with oc exec and create an index.html file:
$ oc exec -ti gluster-pod1 /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
Now curl the URL of the pod:
$ curl http://10.38.0.0
Hello World from GlusterFS!!!
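Because the index.html file lives on the GlusterFS volume rather than inside the container, it survives pod restarts. As an optional final check, delete and re-create the pod, then request the page again (the pod IP can change, so look it up again first):
$ oc delete pod gluster-pod1
$ oc create -f nginx-pod.yaml
$ oc get pods -o wide
$ curl http://<new-pod-ip>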