OKD can be configured to access local volumes for application data.
Local volumes are persistent volumes (PVs) representing locally-mounted file systems. In the future, they may be extended to raw block devices.
Local volumes are different from hostPath volumes. They have a special annotation that makes any pod that uses the PV be scheduled on the same node where the local volume is mounted.
In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. This provisioner is currently limited: it only scans pre-configured directories and cannot dynamically provision volumes, which may be implemented in a future release.
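For illustration, a PV created by the provisioner might look like the following sketch (the PV name, node name, and capacity are illustrative; the annotation is the alpha node-affinity mechanism that pins consuming pods to the volume's node):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
  annotations:
    # alpha node-affinity annotation; "node-1" is an illustrative node name
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["node-1"] }
          ] }
        ]
      }
    }'
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1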
The local volume provisioner allows using local storage within OKD and supports:
- Volumes
- PVs
Note: Local volumes are an alpha feature and may change in a future release of OKD.
Enable the PersistentLocalVolumes feature gate on all masters and nodes:
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add PersistentLocalVolumes=true under the apiServerArguments and controllerArguments sections:
apiServerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
...
controllerArguments:
  feature-gates:
  - PersistentLocalVolumes=true
...
On all nodes, edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) and add the PersistentLocalVolumes=true feature gate under kubeletArguments:
kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true
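The feature gates take effect only after the relevant services restart. As a hedged reminder (systemd unit names assume a default RPM-based OKD 3.x installation and may differ on your cluster):
# On all masters:
$ systemctl restart origin-master-api origin-master-controllers
# On all nodes:
$ systemctl restart origin-node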
All local volumes must be manually mounted before they can be consumed by OKD as PVs.
Mount all volumes into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators are required to create the local devices as needed (by using any method such as a disk partition or an LVM), create suitable file systems on these devices, and mount them using a script or /etc/fstab entries.
Example /etc/fstab entries:
# device name   # mount point                  # FS    # options   # extra
/dev/sdb1       /mnt/local-storage/ssd/disk1   ext4    defaults    1 2
/dev/sdb2       /mnt/local-storage/ssd/disk2   ext4    defaults    1 2
/dev/sdb3       /mnt/local-storage/ssd/disk3   ext4    defaults    1 2
/dev/sdc1       /mnt/local-storage/hdd/disk1   ext4    defaults    1 2
/dev/sdc2       /mnt/local-storage/hdd/disk2   ext4    defaults    1 2
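As a sketch of the manual preparation for a single volume (the device name, file system type, and directory are illustrative):
# Create the mount point under the storage-class directory:
$ mkdir -p /mnt/local-storage/ssd/disk1
# Create a file system on the device and mount it:
$ mkfs.ext4 /dev/sdb1
$ mount /dev/sdb1 /mnt/local-storage/ssd/disk1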
Change the labels of mounted file systems so that all volumes are accessible to processes that run within Docker containers:
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
OKD depends on an external provisioner to create PVs for local devices and to clean them up when they are not needed (to enable reuse).
This external provisioner should be configured using a configmap to relate directories with StorageClasses. This configuration must be created before the provisioner is deployed.
(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example:
$ oc new-project local-storage
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  "local-ssd": | (1)
    {
      "hostDir": "/mnt/local-storage/ssd", (2)
      "mountDir": "/mnt/local-storage/ssd" (3)
    }
  "local-hdd": |
    {
      "hostDir": "/mnt/local-storage/hdd",
      "mountDir": "/mnt/local-storage/hdd"
    }
(1) Name of the StorageClass.
(2) Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
(3) Path to the directory in the provisioner pod. We recommend using the same directory structure as used on the host.
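Assuming the YAML above is saved as local-volume-config.yaml (an illustrative file name), create the configmap in the namespace where the provisioner will run:
$ oc create -f local-volume-config.yaml -n local-storage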
With this configuration, the provisioner creates:
- One PV with StorageClass local-ssd for every subdirectory in /mnt/local-storage/ssd.
- One PV with StorageClass local-hdd for every subdirectory in /mnt/local-storage/hdd.
Before starting the provisioner, mount all local devices and create the configmap with storage classes and their directories.
Install the local provisioner from the local-storage-provisioner-template.yaml file.
Create a service account that allows running pods as a root user and using hostPath volumes:
$ oc create serviceaccount local-storage-admin
$ oc adm policy add-scc-to-user privileged -z local-storage-admin
Root privileges are required for the provisioner pod to delete content on local volumes. hostPath is required to access the /mnt/local-storage path on the host.
Install the template:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
Instantiate the template by specifying values for the CONFIGMAP, SERVICE_ACCOUNT, and NAMESPACE parameters:
$ oc new-app -p CONFIGMAP=local-volume-config \
  -p SERVICE_ACCOUNT=local-storage-admin \
  -p NAMESPACE=local-storage local-storage-provisioner
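As a quick sanity check, you can verify that the provisioner pods started in the chosen namespace:
$ oc get pods -n local-storage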
Create the SSD and HDD StorageClass definition files, for example storage-class-ssd.yaml and storage-class-hdd.yaml:
storage-class-ssd.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

storage-class-hdd.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Add the necessary storage classes:
$ oc create -f ./storage-class-ssd.yaml
$ oc create -f ./storage-class-hdd.yaml
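Applications then request local storage through an ordinary PVC. A minimal sketch, assuming the local-ssd class and an illustrative claim name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim  # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi  # illustrative; must fit an existing local PV
  storageClassName: local-ssd
Because the storage classes use volumeBindingMode: WaitForFirstConsumer, the claim stays Pending until a pod that uses it is scheduled.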
See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches directories specified in the configmap and creates PVs for them automatically.
The provisioner runs as root to be able to clean up the directories when a PV is released and all data needs to be removed.