Complete Example Using GlusterFS for Dynamic Provisioning

Overview

This topic provides an end-to-end example of how to use an existing converged mode, independent mode, or standalone Red Hat Gluster Storage cluster as dynamic persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing converged mode or independent mode, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.

All oc commands are executed on the OpenShift Container Platform master host.

Prerequisites

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

# yum install glusterfs-fuse

This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:

# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

# yum update glusterfs-fuse
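
To confirm that the client bits are in place before you continue, you can optionally check each schedulable node for the package and the mount helper. Both commands should succeed; the package version and helper path vary by system:

# rpm -q glusterfs-fuse
# command -v mount.glusterfs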

By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:

$ sudo setsebool -P virt_sandbox_use_fusefs on (1)
$ sudo setsebool -P virt_use_fusefs on
1 The -P option makes the boolean persistent between reboots.

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host, so you must set these boolean values again after each upgrade.
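
To verify the current values, query the booleans on each node running GlusterFS; both should report on:

# getsebool virt_sandbox_use_fusefs virt_use_fusefs
virt_sandbox_use_fusefs --> on
virt_use_fusefs --> on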

Dynamic Provisioning

  1. To enable dynamic provisioning, first create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions. A sketch of a StorageClass variant that uses heketi authentication appears after this procedure.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.42.0.0:8080" (1)
      restauthenabled: "false" (2)
    1 The heketi server URL.
    2 Since authentication is not turned on in this example, set to false.
  2. From the OpenShift Container Platform master host, create the StorageClass:

    # oc create -f gluster-storage-class.yaml
    storageclass "glusterfs" created
  3. Create a PVC using the newly-created StorageClass. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster1
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 30Gi
      storageClassName: glusterfs
  4. From the OpenShift Container Platform master host, create the PVC:

    # oc create -f glusterfs-dyn-pvc.yaml
    persistentvolumeclaim "gluster1" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC (an optional check of the underlying PersistentVolume appears after this procedure):

    # oc get pvc
    NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    gluster1   Bound    pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   30Gi       RWX           glusterfs      42s
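
If your heketi server has authentication enabled, the StorageClass must also name the heketi user and reference a secret that holds the heketi admin key. The sketch below is illustrative only: the secret name heketi-secret, the default namespace, the admin user, and the key value are assumptions for this example, not objects created earlier in this topic.

# oc create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" --from-literal=key='<heketi-admin-key>' \
    --namespace=default

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-secure
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080"
  restauthenabled: "true" (1)
  restuser: "admin" (2)
  secretNamespace: "default" (3)
  secretName: "heketi-secret" (4)
1 Authentication is turned on for the heketi server in this variant.
2 The heketi user allowed to create volumes (an assumed value).
3 The namespace of the secret created above (an assumed value).
4 The name of the secret created above (an assumed value).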
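
It can also be useful to inspect the PersistentVolume that the provisioner created to back the claim. This check is optional; the PV name is the value shown in the VOLUME column of the oc get pvc output above and differs on every cluster:

# oc get pv
# oc describe pvc gluster1

The dynamically created PV carries the glusterfs StorageClass and, by default, the Delete reclaim policy, so the PV and its backing GlusterFS volume are removed when the claim is deleted.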

Using the Storage

At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-openshift-pod
      labels:
        name: hello-openshift-pod
    spec:
      containers:
      - name: hello-openshift-pod
        image: openshift/hello-openshift
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster1 (1)
    1 The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    # oc get pods -o wide
    NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
    hello-openshift-pod   1/1       Running   0          9m        10.38.0.0   node1
  4. Use oc exec to open a shell in the container and create an index.html file in the pod's mountPath:

    $ oc exec -ti hello-openshift-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello OpenShift!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello OpenShift!!!
  6. Delete the pod, recreate it, and wait for it to come up:

    # oc delete pod hello-openshift-pod
    pod "hello-openshift-pod" deleted
    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
    # oc get pods -o wide
    NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
    hello-openshift-pod   1/1       Running   0          9m        10.37.0.0   node1
  7. Now curl the pod again and it should still have the same data as before. Note that its IP address may have changed:

    # curl http://10.37.0.0
    Hello OpenShift!!!
  8. Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes that host the GlusterFS bricks (an optional heketi-cli check follows these steps):

    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    
    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello OpenShift!!!
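
If the heketi-cli client is installed, the volume can also be confirmed from heketi's side. This is an optional check that assumes heketi-cli can reach the heketi server referenced in the StorageClass; the --user and --secret options would also be needed if restauthenabled were true:

$ export HEKETI_CLI_SERVER=http://10.42.0.0:8080
$ heketi-cli volume list
$ heketi-cli volume info <volume-id> (1)
1 Substitute a volume ID returned by heketi-cli volume list.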
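
When you are finished with the example, deleting the claim also removes the dynamically provisioned volume, because dynamically provisioned PVs use the Delete reclaim policy by default. From the OpenShift Container Platform master host:

# oc delete pod hello-openshift-pod
pod "hello-openshift-pod" deleted
# oc delete pvc gluster1
persistentvolumeclaim "gluster1" deleted
# oc get pv (1)
1 After a short delay, the PV that was bound to gluster1 no longer appears, and the corresponding volume is removed from the Red Hat Gluster Storage cluster.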