Configuring local storage for virtual machines

About the hostpath provisioner

When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP is a local storage provisioner designed for OpenShift Virtualization, which the HPP Operator creates and manages. To use the HPP, you must create an HPP custom resource (CR).

In OpenShift Virtualization 4.10, the HPP Operator deploys and configures the Kubernetes Container Storage Interface (CSI) driver. The Operator also recognizes the existing (legacy) format of the HPP CR.

The legacy HPP and the CSI driver are supported in parallel for a number of releases. However, support for the legacy HPP will be removed in a future release. If you use the legacy HPP, plan to create a storage class for the CSI driver as part of your migration strategy.

If you upgrade to OpenShift Virtualization version 4.10 on an existing cluster, the HPP Operator is upgraded and the system performs the following actions:

  • The CSI driver is installed.

  • The CSI driver is configured with the contents of your legacy HPP CR.

If you install OpenShift Virtualization version 4.10 on a new cluster, you must perform the following actions:

  • Create an HPP CR with a basic storage pool.

  • Create a storage class for the CSI driver.

Optional: You can create a storage pool with a PVC template for multiple HPP volumes.

Creating a hostpath provisioner with a basic storage pool

You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver.

Prerequisites
  • The directories specified in spec.storagePools.path must have read/write access.

  • The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
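On Red Hat Enterprise Linux CoreOS (RHCOS) nodes, the storage pool directory might also require the container_file_t SELinux context so that the provisioner pods can write to it. A MachineConfig similar to the following sketch can apply the context at boot; the path, unit name, and role label are examples and must match your environment:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: hostpath-provisioner.service
        enabled: true
        contents: |
          [Unit]
          Description=Set SELinux context for the hostpath provisioner directory
          Before=kubelet.service

          [Service]
          Type=oneshot
          ExecStart=/usr/bin/chcon -Rt container_file_t /var/myvolumes

          [Install]
          WantedBy=multi-user.target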

Procedure
  1. Create an hpp_cr.yaml file with a storagePools stanza as in the following example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools: (1)
      - name: any_name
        path: "/var/myvolumes" (2)
      workload:
        nodeSelector:
          kubernetes.io/os: linux
    1 The storagePools stanza is an array to which you can add multiple entries.
    2 Specify the storage pool directories under this node path.
  2. Save the file and exit.

  3. Create the HPP by running the following command:

    $ oc create -f hpp_cr.yaml
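
Optional: To confirm that the CSI driver is registered, you can list the CSIDriver object. The driver name used here assumes the default HPP CSI driver name, which matches the provisioner value used in the storage class later in this section:

    $ oc get csidriver kubevirt.io.hostpath-provisioner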

About creating storage classes

When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.

To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the storagePools stanza.

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is being prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV are delayed until a pod is created that uses the PVC.
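
For example, a persistent volume claim that requests storage from such a storage class remains in the Pending state until a pod that uses it is scheduled. The storage class name in the following sketch refers to the hostpath-csi class created in the next section; the claim name and size are examples:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: hostpath-csi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi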

Creating a storage class for the CSI driver with the storagePools stanza

You create a storage class custom resource (CR) for the hostpath provisioner (HPP) CSI driver.

Prerequisites
  • You must have OpenShift Virtualization 4.10 or later.

Procedure
  1. Create a storageclass_csi.yaml file to define the storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi (1)
    provisioner: kubevirt.io.hostpath-provisioner
    reclaimPolicy: Delete (2)
    volumeBindingMode: WaitForFirstConsumer (3)
    parameters:
      storagePool: my-storage-pool (4)
    1 Assign any meaningful name to the storage class. In this example, csi is used to specify that the class is using the CSI provisioner instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy.
    2 The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
    3 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
    4 Specify the name of the storage pool defined in the HPP CR.
  2. Save the file and exit.

  3. Create the StorageClass object by running the following command:

    $ oc create -f storageclass_csi.yaml
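
After the storage class exists, you can provision virtual machine disks from it by referencing it in a data volume. The following sketch assumes the hostpath-csi storage class created above; the data volume name and size are examples, and the blank source requests an empty disk image:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    blank: {}
  pvc:
    storageClassName: hostpath-csi
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi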

Creating a storage class for the legacy hostpath provisioner

You create a storage class for the legacy hostpath provisioner (HPP) by creating a StorageClass object without the storagePool parameter.

Procedure
  1. Create a storageclass.yaml file to define the storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete (1)
    volumeBindingMode: WaitForFirstConsumer (2)
    1 The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
    2 The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify the WaitForFirstConsumer value to delay the binding and provisioning of a persistent volume until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
  2. Save the file and exit.

  3. Create the StorageClass object by running the following command:

    $ oc create -f storageclass.yaml
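
Optional: You can verify that the storage classes were created by listing them:

    $ oc get storageclass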

About storage pools created with PVC templates

If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).

A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.

The PVC template is based on the spec stanza of the PersistentVolumeClaim object:

Example PersistentVolumeClaim object
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iso-pvc
spec:
  volumeMode: Block (1)
  storageClassName: my-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
1 This value is only required for block volume mode PVs.

You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.

You can combine basic storage pools with storage pools created from PVC templates.
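
For example, the following storagePools stanza sketch combines a basic storage pool with a storage pool created from a PVC template; the pool names, paths, and sizes are illustrative:

storagePools:
- name: basic-pool
  path: "/var/myvolumes"
- name: template-pool
  path: "/var/mytemplatevolumes"
  pvcTemplate:
    volumeMode: Block
    storageClassName: my-storage-class
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi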

Creating a storage pool with a PVC template

You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).

Prerequisites
  • The directories specified in spec.storagePools.path must have read/write access.

  • The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.

Procedure
  1. Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume claim (PVC) template in the storagePools stanza, as in the following example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools: (1)
      - name: my-storage-pool
        path: "/var/myvolumes" (2)
        pvcTemplate:
          volumeMode: Block (3)
          storageClassName: my-storage-class (4)
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi (5)
      workload:
        nodeSelector:
          kubernetes.io/os: linux
    1 The storagePools stanza is an array that can contain both basic and PVC template storage pools.
    2 Specify the storage pool directories under this node path.
    3 Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem. If the volumeMode is Block, the mounting pod creates an XFS file system on the block volume before mounting it.
    4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName, ensure that the HPP storage class is not the default storage class.
    5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide; otherwise, the PVC cannot be bound to the large PV. If the storage class uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
  2. Save the file and exit.

  3. Create the HPP with a storage pool by running the following command:

    $ oc create -f hpp_pvc_template_pool.yaml
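
Optional: The Operator creates one PVC from the pvcTemplate specification for each node that runs the HPP CSI driver. You can check that these PVCs were created and bound; the namespace shown here assumes the default OpenShift Virtualization namespace and might differ in your deployment:

    $ oc get pvc -n openshift-cnv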