Using Ceph RBD for Persistent Storage

Overview

This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.

All oc commands are executed on the OpenShift Container Platform master host.

Installing the ceph-common Package

The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:

The OpenShift Container Platform all-in-one host is not often used to run pod workloads and is therefore not included as a schedulable node.

# yum install -y ceph-common
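
To confirm that the package is present on a node, query the RPM database (the version in the output depends on your repositories; a placeholder is shown here):

# rpm -q ceph-common
ceph-common-<version>.x86_64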

Creating the Ceph Secret

The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
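
For example (the key value shown here and used throughout this topic is illustrative):

# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==

The encoded value is then placed in the secret definition: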

Example 1. Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)
type: kubernetes.io/rbd (2)
1 This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command; the output is then copied and pasted as the secret key’s value.
2 This value is required for Ceph RBD to work with dynamic provisioning.

Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:

# oc create -f ceph-secret.yaml
secret "ceph-secret" created

Verify that the secret was created:

# oc get secret ceph-secret
NAME          TYPE               DATA      AGE
ceph-secret   kubernetes.io/rbd  1         23d
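
As an additional check, you can decode the stored key and compare it to the value reported on the MON node (the decoded key below corresponds to the illustrative encoded value above):

# oc get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d
AQA8QvJVayBPERAAh/Kg0OYZAHOBz7jFpzLqtg==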

Creating the Persistent Volume

Next, define the PV in an object definition file before creating the object in OpenShift Container Platform:

Example 2. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv     (1)
spec:
  capacity:
    storage: 2Gi    (2)
  accessModes:
    - ReadWriteOnce (3)
  rbd:              (4)
    monitors:       (5)
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret (6)
    fsType: ext4        (7)
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
1 The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2 The amount of storage allocated to this volume.
3 accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4 This defines the volume type being used. In this case, the rbd plug-in is defined.
5 This is an array of Ceph monitor IP addresses and ports.
6 This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
7 This is the file system type mounted on the Ceph RBD block device.
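
The rbd plug-in does not create the RBD image for statically provisioned PVs; the image named above (ceph-image in the rbd pool) must already exist. A minimal sketch of creating a 2 GiB image on a Ceph node, restricted to the layering feature for compatibility with older kernel clients:

# rbd create rbd/ceph-image --size 2048 --image-feature layering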

Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:

# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv                  <none>    2147483648   RWO           Available                       2s

Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 3. PVC Object Definition
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:     (1)
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi (2)
1 As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
2 This claim will look for PVs offering 2Gi or greater capacity.

Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:

# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s
                                 (1)
1 The claim was bound to the ceph-pv PV.
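
If the claim instead remains in the Pending state, inspect its events for the reason binding has not occurred:

# oc describe pvc ceph-claim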

Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:

Example 4. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1           (1)
spec:
  containers:
  - name: ceph-busybox
    image: busybox          (2)
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1       (3)
      mountPath: /usr/share/busybox (4)
      readOnly: false
  volumes:
  - name: ceph-vol1         (3)
    persistentVolumeClaim:
      claimName: ceph-claim (5)
1 The name of this pod as displayed by oc get pod.
2 The image run by this pod. In this case, we are telling busybox to sleep.
3 The name of the volume. This name must be the same in both the containers and volumes sections.
4 The mount path as seen in the container.
5 The PVC that is bound to the Ceph RBD cluster.

Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:

# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m
                      (1)
1 After a minute or so, the pod will be in the Running state.
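
Once the pod is running, you can confirm that the RBD volume is mounted at the expected path inside the container (the device name in the output varies by host):

# oc exec ceph-pod1 -- df -h /usr/share/busybox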

Defining Group and Owner IDs (Optional)

When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:

Example 5. Group ID Pod Definition
...
spec:
  containers:
    - name:
    ...
  securityContext: (1)
    fsGroup: 7777  (2)
...
1 securityContext must be defined at the pod level, not under a specific container.
2 All containers in the pod will have the same fsGroup ID.
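
Assuming the example pod above was created with this securityContext, you can confirm the resulting group ownership of the mount point from outside the container:

# oc exec ceph-pod1 -- ls -ld /usr/share/busybox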

Setting ceph-user-secret as Default for Projects

To make the persistent storage available to every project, you must modify the default project template. Read more on modifying the default project template. Adding the secret to your default project template gives every user who can create a project access to the Ceph cluster.

Default Project Example
...
apiVersion: v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: yoursupersecretbase64keygoeshere (1)
  type: kubernetes.io/rbd
...
1 Place your Ceph user key here in base64 format.
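
After saving the modified template to a file, for example template.yaml, create it in the default namespace and register it by setting projectRequestTemplate in the projectConfig stanza of /etc/origin/master/master-config.yaml (the template name and namespace below match this example; the master services must be restarted for the change to take effect):

# oc create -f template.yaml -n default

projectConfig:
  projectRequestTemplate: "default/project-request"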