Importing virtual machine images into block storage with data volumes

You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC).

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system on the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.
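For example, on a Linux guest with an ext4 root file system, the partition and file system can usually be grown from inside the virtual machine. This is a sketch; the device and partition names are assumptions for illustration, and XFS file systems use xfs_growfs instead of resize2fs:

    # growpart /dev/vda 1     # grow partition 1 to fill the disk (growpart is part of cloud-utils); /dev/vda is assumed
    # resize2fs /dev/vda1     # grow the ext4 file system to fill the resized partition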

About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
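For example, you can watch a data volume while CDI prepares the underlying PVC. The data volume name here is illustrative; the import is complete when the phase reaches Succeeded:

    $ oc get datavolume <datavolume_name> -w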

About block persistent volumes

A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
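For example, a persistent volume claim that requests a raw block volume sets volumeMode alongside the usual access modes and size. The name and size in this sketch are illustrative:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <block-pvc>
    spec:
      volumeMode: Block
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi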

Creating a local block persistent volume

Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.

Procedure
  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.

  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file named loop10 with a size of 2 GB (20 blocks of 100 MB each):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> (1) (2)
    1 File path where the loop device is mounted.
    2 The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume manifest that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
    spec:
      local:
        path: </dev/loop10> (1)
      capacity:
        storage: <2Gi>
      volumeMode: Block (2)
      storageClassName: local (3)
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> (4)
    1 The path of the loop device on the node.
    2 Specifies it is a block PV.
    3 Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
    4 The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> (1)
    1 The file name of the persistent volume created in the previous step.
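After you create the PV, a quick sanity check is to confirm the loop device attachment and the PV status. This is a sketch that assumes the names used in this procedure:

    $ losetup -l                       # run on node01: /dev/loop10 should be listed, backed by the loop10 file
    $ oc get pv <local-block-pv10>     # the PV reports a STATUS of Available until a claim binds to it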

Importing a virtual machine image into block storage by using a data volume

You can import a virtual machine image into block storage by using a data volume. You reference the data volume in a VirtualMachine manifest before you create a virtual machine.

Prerequisites
  • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.

  • An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.

Procedure
  1. If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: endpoint-secret (1)
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" (2)
      secretKey:   "" (3)
    1 Specify the name of the secret.
    2 Specify the Base64-encoded key ID or user name.
    3 Specify the Base64-encoded secret key or password.
  2. Apply the secret manifest:

    $ oc apply -f endpoint-secret.yaml
  3. Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: import-pv-datavolume (1)
    spec:
      storageClassName: local (2)
      source:
        http:
          url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" (3)
          secretRef: endpoint-secret (4)
      storage:
        volumeMode: Block (5)
        resources:
          requests:
            storage: 10Gi
    1 Specify the name of the data volume.
    2 Optional: Set the storage class or omit it to accept the cluster default.
    3 Specify the HTTP or HTTPS URL of the image to import.
    4 Specify the secret name if you created a secret for the data source.
    5 The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block.
  4. Create the data volume to import the virtual machine image:

    $ oc create -f import-pv-datavolume.yaml
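The import runs asynchronously. As a sketch, you can watch the data volume phase and, if the import stalls, inspect the CDI importer pod; the pod name here is a placeholder:

    $ oc get datavolume import-pv-datavolume -w     # the phase reports Succeeded when the import is complete
    $ oc logs -f <importer_pod_name>                # optional: follow the importer pod logs during the import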

You can reference this data volume in a VirtualMachine manifest before you create a virtual machine.
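As a minimal sketch, a VirtualMachine manifest attaches the imported image by naming the data volume in a dataVolume volume source. Everything here except import-pv-datavolume is an illustrative assumption:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <vm-fedora>
    spec:
      running: false
      template:
        spec:
          domain:
            devices:
              disks:
              - name: rootdisk
                disk:
                  bus: virtio
            resources:
              requests:
                memory: 2Gi
          volumes:
          - name: rootdisk
            dataVolume:
              name: import-pv-datavolume  # the data volume created in this procedure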

CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP        HTTPS       HTTP basic auth    Registry    Upload

KubeVirt (QCOW2)   ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2            ✓ QCOW2*    ✓ QCOW2*
                   ✓ GZ*       ✓ GZ*       ✓ GZ*              □ GZ        ✓ GZ*
                   ✓ XZ*       ✓ XZ*       ✓ XZ*              □ XZ        ✓ XZ*

KubeVirt (RAW)     ✓ RAW       ✓ RAW       ✓ RAW              ✓ RAW*      ✓ RAW*
                   ✓ GZ        ✓ GZ        ✓ GZ               □ GZ        ✓ GZ*
                   ✓ XZ        ✓ XZ        ✓ XZ               □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.
