# Complete Example Using GlusterFS
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Container Platform persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example of Dynamic Provisioning Using GlusterFS. The persistent volume (PV) and endpoints are both created dynamically by GlusterFS.
All oc commands are executed on the OpenShift Container Platform master host. |
The glusterfs-fuse library must be installed on all schedulable OpenShift Container Platform nodes:
# yum install -y glusterfs-fuse
The OpenShift Container Platform all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node. |
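If you are unsure whether a node already has the client installed, a quick sanity check such as the following (a sketch, assuming an RPM-based host as in this example) confirms both the package and the mount helper that the kubelet invokes for GlusterFS volumes:

# rpm -q glusterfs-fuse
# ls /sbin/mount.glusterfs

If either command reports nothing, install the package as shown above.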
The named endpoints define each node in the Gluster-trusted storage pool:
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster (1)
subsets:
- addresses: (2)
  - ip: 192.168.122.21
  ports: (3)
  - port: 1
    protocol: TCP
- addresses:
  - ip: 192.168.122.22
  ports:
  - port: 1
    protocol: TCP
1 | The endpoints name. If using a service, then the endpoints name must match the service name. |
2 | An array of IP addresses for each node in the Gluster pool. Currently, host names are not supported. |
3 | The port numbers are ignored, but must be legal port numbers. The value 1 is commonly used. |
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-cluster" created
Verify that the endpoints were created:
# oc get endpoints gluster-cluster
NAME              ENDPOINTS                            AGE
gluster-cluster   192.168.122.21:1,192.168.122.22:1    1m
To persist the Gluster endpoints, you also need to create a service. |
Endpoints are namespaced. Each project that accesses the Gluster volume needs its own endpoints. |
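For example, to make the same Gluster pool available to a second project, the same definition file can simply be created in that project as well. The project name app-project below is hypothetical:

# oc create -f gluster-endpoints.yaml -n app-project
endpoints "gluster-cluster" created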
Below is an example service definition:

apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster (1)
spec:
  ports:
  - port: 1 (2)
1 | The name of the service. The endpoints name must match the service name. |
2 | The port should match the same port used in the endpoints. |
Save the service definition to a file, for example gluster-service.yaml, then create the service object:
# oc create -f gluster-service.yaml
service "gluster-cluster" created
Verify that the service was created:
# oc get service gluster-cluster
NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
gluster-cluster   10.0.0.130   <none>        1/TCP     9s
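Because the service and endpoints share a name, the service now preserves the endpoints. One way to confirm the association (output abbreviated) is:

# oc describe service gluster-cluster | grep Endpoints
Endpoints:    192.168.122.21:1,192.168.122.22:1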
Next, define the persistent volume (PV) to be created in OpenShift Container Platform:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv (1)
spec:
  capacity:
    storage: 1Gi (2)
  accessModes:
  - ReadWriteMany (3)
  glusterfs: (4)
    endpoints: gluster-cluster (5)
    path: /HadoopVol (6)
    readOnly: false
  persistentVolumeReclaimPolicy: Retain (7)
1 | The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands. |
2 | The amount of storage allocated to this volume. |
3 | accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. |
4 | This defines the volume type being used. In this case, the glusterfs plug-in is defined. |
5 | This references the endpoints named above. |
6 | This is the Gluster volume name, preceded by / . |
7 | The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. For GlusterFS, the accepted values include Retain and Delete. |
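Before creating the PV, it is worth confirming that the Gluster volume named in path actually exists and is started. A minimal check, assuming shell access to any node in the trusted storage pool:

# gluster volume info HadoopVol

The output should list the volume's bricks and show Status: Started.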
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound by another PVC. There is a one-to-one mapping of PVs to PVCs. However, multiple pods in the same project can use the same PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim (1)
spec:
  accessModes:
  - ReadWriteMany (2)
  resources:
    requests:
      storage: 1Gi (3)
1 | The claim name is referenced by the pod under its volumes section. |
2 | As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC. |
3 | This claim will look for PVs offering 1Gi or greater capacity. |
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s (1)
1 | The claim was bound to the gluster-pv PV. |
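The binding is also visible from the PV side. Re-running oc get pv should now report the PV as Bound to the claim; the default project is assumed in this sketch:

# oc get pv gluster-pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM                   REASON    AGE
gluster-pv   <none>    1Gi        RWX           Bound     default/gluster-claim             2m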
You need access to a node in the Gluster-trusted storage pool. On this node, examine the glusterfs-fuse mount:
# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop) (1) (2)
1 | The owner has ID 592. |
2 | The group has ID 590. |
In order to access the HadoopVol volume, the container must match the SELinux label and either run with a UID of 592 or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s group, which is defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The -P option makes the boolean persistent between reboots. |
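To verify that the boolean is set (for example, after a reboot), run:

# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on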
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to add your user to the privileged Security Context Constraint (SCC):

$ oc adm policy add-scc-to-user privileged myuser

Then, add privileged: true to the container's securityContext section of the pod definition, as shown below. Managing Security Context Constraints provides additional information regarding SCCs. |
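To double-check the SCC change before creating the pod, oc describe scc lists the users granted access (output abbreviated; the exact list depends on your cluster, and myuser is the example user from the note above):

$ oc describe scc privileged | grep Users
Users:    system:admin,myuser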
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1 (1)
spec:
  containers:
  - name: gluster-pod1
    image: nginx (2)
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1 (3)
      mountPath: /usr/share/nginx/html (4)
      readOnly: false
  securityContext:
    supplementalGroups: [590] (5)
  volumes:
  - name: gluster-vol1 (3)
    persistentVolumeClaim:
      claimName: gluster-claim (6)
1 | The name of this pod as displayed by oc get pod . |
2 | The image run by this pod. In this case, we are using a standard NGINX image. |
3 | The name of the volume. This name must be the same in both the containers and volumes sections. |
4 | The mount path as seen in the container. |
5 | The supplemental group ID (Linux group) to be assigned at the pod level. As discussed above, this should match the POSIX permissions on the Gluster volume. |
6 | The PVC that is bound to the Gluster cluster. |
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME           READY     STATUS    RESTARTS   AGE
gluster-pod1   1/1       Running   0          31s (1)
1 | After a minute or so, the pod will be in the Running state. |
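As a quick smoke test that the volume is truly shared, you can write a file through the Gluster mount and fetch it from NGINX. This sketch assumes shell access to a node in the trusted storage pool and the pod IP from this example:

# echo 'Hello from GlusterFS' > /mnt/glusterfs/HadoopVol/index.html
# curl http://172.17.0.2
Hello from GlusterFS

The first command runs on the Gluster node; the second runs on the OpenShift Container Platform node hosting the pod.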
More details are shown in the oc describe pod command:
# oc describe pod gluster-pod1
Name:               gluster-pod1
Namespace:          default (1)
Security Policy:    privileged
Node:               ose1.rhs/192.168.122.251
Start Time:         Wed, 24 Aug 2016 12:37:45 -0400
Labels:             name=gluster-pod1
Status:             Running
IP:                 172.17.0.2 (2)
Controllers:        <none>
Containers:
  gluster-pod1:
    Container ID:   docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    Image:          nginx
    Image ID:       docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    Port:           80/TCP
    State:          Running
      Started:      Wed, 24 Aug 2016 12:37:52 -0400
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /usr/share/nginx/html/test from glustervol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1n70u (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  glustervol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gluster-claim (3)
    ReadOnly:   false
  default-token-1n70u:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-1n70u
QoS Tier:       BestEffort
Events: (4)
  FirstSeen  LastSeen  Count  From                SubobjectPath                  Type    Reason     Message
  ---------  --------  -----  ----                -------------                  ------  ------     -------
  10s        10s       1      {default-scheduler}                                Normal  Scheduled  Successfully assigned gluster-pod1 to ose1.rhs
  9s         9s        1      {kubelet ose1.rhs}  spec.containers{gluster-pod1}  Normal  Pulling    pulling image "nginx"
  4s         4s        1      {kubelet ose1.rhs}  spec.containers{gluster-pod1}  Normal  Pulled     Successfully pulled image "nginx"
  3s         3s        1      {kubelet ose1.rhs}  spec.containers{gluster-pod1}  Normal  Created    Created container with docker id e67ed01729e1
  3s         3s        1      {kubelet ose1.rhs}  spec.containers{gluster-pod1}  Normal  Started    Started container with docker id e67ed01729e1
1 | The project (namespace) name. |
2 | The IP address of the OpenShift Container Platform node running the pod. |
3 | The PVC name used by the pod. |
4 | The list of events resulting in the pod being launched and the Gluster volume being mounted. |
There is more internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:
# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged (1)
  creationTimestamp: 2016-08-24T16:37:45Z
  labels:
    name: gluster-pod1
  name: gluster-pod1
  namespace: default (2)
  resourceVersion: "482"
  selfLink: /api/v1/namespaces/default/pods/gluster-pod1
  uid: 15afda77-6a19-11e6-aadb-525400f7256d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: gluster-pod1
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: true (3)
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: glustervol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-1n70u
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose1.rhs
  imagePullSecrets:
  - name: default-dockercfg-20xg9
  nodeName: ose1.rhs
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 590 (4)
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: gluster-claim (5)
  - name: default-token-1n70u
    secret:
      secretName: default-token-1n70u
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:53Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    image: nginx
    imageID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    lastState: {}
    name: gluster-pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-24T16:37:52Z
  hostIP: 192.168.122.251
  phase: Running
  podIP: 172.17.0.2
  startTime: 2016-08-24T16:37:45Z
1 | The SCC used by the pod. |
2 | The project (namespace) name. |
3 | The security context level requested, in this case privileged. |
4 | The supplemental group ID for the pod (all containers). |
5 | The PVC name used by the pod. |
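When finished with the example, cleanup is the creation steps in reverse. Because the PV uses the Retain reclaim policy, deleting these objects leaves the data on the Gluster volume intact; a sketch:

# oc delete pod gluster-pod1
# oc delete pvc gluster-claim
# oc delete pv gluster-pv
# oc delete service gluster-cluster
# oc delete endpoints gluster-cluster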