Dynamic Provisioning Example Using Dedicated GlusterFS

Overview

This example assumes a functioning OpenShift Container Platform cluster along with Heketi and GlusterFS. All oc commands are executed on the OpenShift Container Platform master host.

Container Native Storage (CNS) using GlusterFS and Heketi is a great way to perform dynamic provisioning for shared filesystems in a Kubernetes-based cluster like OpenShift Container Platform. However, if an existing, dedicated Gluster cluster is available external to the OpenShift Container Platform cluster, you can also provision storage from it rather than a containerized GlusterFS implementation.

This example:

  • Shows how simple it is to install and configure a Heketi server to work with OpenShift Container Platform to perform dynamic provisioning.

  • Assumes some familiarity with Kubernetes and the Kubernetes Persistent Storage model.

  • Assumes you have access to an existing, dedicated GlusterFS cluster that has raw devices available for consumption and management by a Heketi server. If you do not have one, you can create a three-node cluster using your virtual machine solution of choice. Ensure that you create a few raw devices with plenty of space (at least 100GB recommended). See the Red Hat Gluster Storage Installation Guide.

Environment and Prerequisites

This example uses the following environment and prerequisites:

  • GlusterFS cluster running Red Hat Gluster Storage (RHGS) 3.1. Three nodes, each with at least two 100GB raw devices:

    • gluster23.rhs (192.168.1.200)

    • gluster24.rhs (192.168.1.201)

    • gluster25.rhs (192.168.1.202)

  • Heketi service/client node running Red Hat Enterprise Linux (RHEL) 7.x or RHGS 3.1. Heketi can be installed on one of the Gluster nodes:

    • glusterclient2.rhs (192.168.1.203)

  • OpenShift Container Platform node. This example uses an all-in-one OpenShift Container Platform cluster (master and node on a single host), though it can work using a standard, multi-node cluster as well.

    • k8dev2.rhs (192.168.1.208)

Installing and Configuring Heketi

Heketi is used to manage the Gluster cluster storage (adding volumes, removing volumes, and so on). As stated, the Heketi service can run on RHEL or RHGS, and it can be installed on one of the existing Gluster storage nodes. This example uses a stand-alone RHGS 3.1 node running Heketi.

The Red Hat Gluster Storage Administration Guide can be used as a reference during this process.
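
Before installing Heketi, it can be worth confirming that the existing Gluster cluster is healthy. A minimal sanity check, run from any one of the Gluster nodes (these are standard GlusterFS commands, not part of the original procedure):

    # gluster peer status
    # gluster pool list

Each of the other nodes should be reported with a state of Peer in Cluster (Connected) before proceeding.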

  1. Install Heketi and the Heketi client. From the host designated to run Heketi and the Heketi client, run:

    # yum install heketi heketi-client -y

    The Heketi server can be any of the existing hosts, though typically this will be the OpenShift Container Platform master host. This example, however, uses a separate host not part of the GlusterFS or OpenShift Container Platform cluster.

  2. Create and install Heketi private keys on each GlusterFS cluster node. From the host that is running Heketi:

    # ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster23.rhs
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster24.rhs
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster25.rhs
    # chown heketi:heketi /etc/heketi/heketi_key*
  3. Edit the /etc/heketi/heketi.json file to set up the SSH executor. Below is an excerpt from the /etc/heketi/heketi.json file; the parts to configure are the executor and SSH sections:

    	"executor": "ssh", (1)
    
    	"_sshexec_comment": "SSH username and private key file information",
    	"sshexec": {
      	  "keyfile": "/etc/heketi/heketi_key", (2)
      	  "user": "root", (3)
      	  "port": "22", (4)
      	  "fstab": "/etc/fstab" (5)
    	},
    1 Change executor from mock to ssh.
    2 Set keyfile to the path of the private key created in the previous step.
    3 Update user to a user that has sudo or root access.
    4 Set port to 22 and remove all other text.
    5 Set fstab to the default, /etc/fstab and remove all other text.
  4. Restart and enable service:

    # systemctl restart heketi
    # systemctl enable heketi
  5. Test the connection to Heketi:

    # curl http://glusterclient2.rhs:8080/hello
    Hello from Heketi
  6. Set an environment variable for the Heketi server:

    # export HEKETI_CLI_SERVER=http://glusterclient2.rhs:8080
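
As a quick sanity check that heketi-cli can reach the server through this variable, you can list the clusters Heketi knows about (a standard heketi-cli command; the list is empty until the topology is loaded in the next section):

    # heketi-cli cluster list

If the variable is set correctly, the command returns without a connection error.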

Loading Topology

Topology is used to tell Heketi about the environment and what nodes and devices it will manage.

Heketi is currently limited to managing raw devices only. If a device is already a Gluster volume, it will be skipped and ignored.
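
One way to confirm that a device is raw, with no partitions or existing filesystem signatures, is to inspect it with standard block-device tools on each Gluster node. For example, for the devices used in this topology:

    # lsblk /dev/sde /dev/sdf
    # blkid /dev/sde /dev/sdf

lsblk should show no child partitions, and blkid should print no filesystem signature for a truly raw device.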

  1. Create and load the topology file. A sample file is located in /usr/share/heketi/topology-sample.json by default, or in /etc/heketi, depending on how Heketi was installed.

    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster23.rhs"
                  ],
                  "storage": [
                    "192.168.1.200"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster24.rhs"
                  ],
                  "storage": [
                    "192.168.1.201"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster25.rhs"
                  ],
                  "storage": [
                    "192.168.1.202"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            }
          ]
        }
      ]
    }
  2. Using heketi-cli, run the following command to load the topology of your environment.

    # heketi-cli topology load --json=topology.json
    
        Found node gluster23.rhs on cluster bdf9d8ca3fa269ff89854faf58f34b9a
            Adding device /dev/sde ... OK
            Adding device /dev/sdf ... OK
        Creating node gluster24.rhs ... ID: 8e677d8bebe13a3f6846e78a67f07f30
            Adding device /dev/sde ... OK
            Adding device /dev/sdf ... OK
    ...
    ...
  3. Create a Gluster volume to verify Heketi:

    # heketi-cli volume create --size=50
  4. View the volume information from one of the Gluster nodes:

    # gluster volume info
    
        Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
        Type: Distributed-Replicate
        Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
        Status: Started
    ...
    ...
    1 Volume created by heketi-cli.
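
The same information is available from the Heketi side. To cross-check what Heketi believes it is managing, the standard heketi-cli inspection commands can be used (not part of the original steps):

    # heketi-cli topology info
    # heketi-cli volume list

The topology output lists the nodes, devices, and bricks under Heketi's control, and the volume list should include the 50GB test volume created above.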

Dynamically Provision a Volume

  1. Create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gluster-dyn
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://glusterclient2.rhs:8080" (1)
      restauthenabled: "false" (2)
    1 The Heketi server URL, matching the HEKETI_CLI_SERVER environment variable set earlier.
    2 Since authentication is not turned on in this example, set to false.
  2. From the OpenShift Container Platform master host, create the storage class:

    # oc create -f glusterfs-storageclass1.yaml
    storageclass "gluster-dyn" created
  3. Create a persistent volume claim (PVC), requesting the newly-created storage class. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-dyn-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 30Gi
      storageClassName: gluster-dyn
  4. From the OpenShift Container Platform master host, create the PVC:

    # oc create -f glusterfs-pvc-storageclass.yaml
    persistentvolumeclaim "gluster-dyn-pvc" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC:

    # oc get pvc
    NAME              STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    gluster-dyn-pvc   Bound     pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   30Gi       RWX           gluster-dyn    42s
  6. Verify and view the new volume on one of the Gluster nodes:

    # gluster volume info
    
        Volume Name: vol_335d247ac57ecdf40ac616514cc6257f (1)
        Type: Distributed-Replicate
        Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
        Status: Started
        ...
        Volume Name: vol_f1404b619e6be6ef673e2b29d58633be (2)
        Type: Distributed-Replicate
        Volume ID: 7dc234d0-462f-4c6c-add3-fb9bc7e8da5e
        Status: Started
        Number of Bricks: 2 x 2 = 4
        ...
    1 Volume created by heketi-cli.
    2 New dynamically created volume triggered by Kubernetes and the storage class.
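
The dynamically provisioned volume is also represented as a PersistentVolume (PV) object in OpenShift Container Platform. From the master host, the standard oc commands can be used to inspect it (the PV name below is the one shown in the PVC output above):

    # oc get pv
    # oc describe pv pvc-78852230-d8e2-11e6-a3fa-0800279cf26f

The describe output should show a Glusterfs source whose path matches the vol_… name seen in gluster volume info, tying the PV back to the new Gluster volume.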

Creating an NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod. In this example, create a simple NGINX pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-pod1
      labels:
        name: gluster-pod1
    spec:
      containers:
      - name: gluster-pod1
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - name: web
          containerPort: 80
        securityContext:
          privileged: true
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-dyn-pvc (1)
    1 The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    # oc create -f nginx-pod.yaml
    pod "gluster-pod1" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it is not already present on the node:

    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    gluster-pod1                       1/1       Running   0          9m        10.38.0.0        node1
  4. Now remote into the container with oc exec and create an index.html file:

    # oc exec -ti gluster-pod1 /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello World from GlusterFS!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello World from GlusterFS!!!
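
To confirm that the file is actually being served from the Gluster-backed volume rather than the container's local filesystem, you can check the mount inside the pod (assuming the image provides df; this check is not part of the original procedure):

    # oc exec gluster-pod1 -- df -h /usr/share/nginx/html

The filesystem source should be the Gluster volume (the vol_… name from the previous section) rather than local node storage.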