You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system of the virtual machine might need to be expanded. The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.
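For example, on a Linux guest where the root file system is ext4 on /dev/vda1 and the cloud-utils-growpart package is installed (both assumptions used for illustration only), the partition and file system could be grown with commands similar to the following:
$ sudo growpart /dev/vda 1   # grow partition 1 to fill the expanded disk
$ sudo resize2fs /dev/vda1   # grow the ext4 file system to fill the partition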
If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
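One way to define that StorageClass for scratch space (a minimal sketch, assuming your CDI version exposes a CDIConfig object named config with a scratchSpaceStorageClass field) is to edit the CDI configuration and set the StorageClass that CDI uses for scratch PVCs. The local StorageClass name here is only an example:
$ oc edit cdiconfig config
spec:
  scratchSpaceStorageClass: local # example StorageClass name used for scratch PVCs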
DataVolume
objects are custom resources that are provided by the Containerized
Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload
operations that are associated with an underlying PersistentVolumeClaim (PVC).
DataVolumes are integrated with KubeVirt, and they prevent a virtual machine
from being started before the PVC has been prepared.
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block
in the
PV and PersistentVolumeClaim (PVC) specification.
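In this procedure the DataVolume creates the PVC for you, but for reference, a standalone PVC that requests a raw block volume would look roughly like the following sketch; the block-pvc name and 2Gi size are placeholder assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  volumeMode: Block # request a raw block device instead of a filesystem volume
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi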
Create a local block PersistentVolume (PV) on a node by populating a file and
mounting it as a loop device. You can then reference this loop device in a
PV configuration as a Block
volume and use it as a block device for a
virtual machine image.
Log in as root
to the node on which to create the local PV. This procedure
uses node01
for its examples.
Create a file and populate it with null characters so that it can be used as a block device.
The following example creates a file loop10
with a size of 2 GB (20 blocks of 100 MB each):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10
file as a loop device.
$ losetup </dev/loop10> <loop10> (1) (2)
1 | The path at which the loop device is set up. |
2 | The file created in the previous step to be mounted as the loop device. |
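To confirm that the loop device is attached (an optional check that is not part of the documented steps), you can list the configured loop devices:
$ losetup -l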
Create a PersistentVolume
configuration that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> (1)
  capacity:
    storage: <2Gi>
  volumeMode: Block (2)
  storageClassName: local (3)
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node01> (4)
1 | The path of the loop device on the node. |
2 | Specifies that this PV provides a raw block volume. |
3 | Optional: Set a StorageClass for the PV. If you omit it, the cluster default is used. |
4 | The node on which the block device was mounted. |
Create the block PV.
# oc create -f <local-block-pv10.yaml> (1)
1 | The filename of the PersistentVolume created in the previous step. |
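Optionally, verify that the PV was created and is Available before proceeding; this check is an addition to the documented steps:
$ oc get pv <local-block-pv10>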
You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC). You can then reference the DataVolume in a virtual machine configuration.
A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
An HTTP or s3 endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
At least one available block PV.
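If you want to confirm that a block PV is available before you start, a query similar to the following can help; the custom-columns output format is an illustrative choice, not part of this procedure:
$ oc get pv -o custom-columns=NAME:.metadata.name,MODE:.spec.volumeMode,STATUS:.status.phase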
If your data source requires authentication credentials, edit the
endpoint-secret.yaml
file, and apply the updated configuration to the cluster.
Edit the endpoint-secret.yaml
file with your preferred text editor:
apiVersion: v1
kind: Secret
metadata:
  name: <endpoint-secret>
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" (1)
  secretKey: "" (2)
1 | Optional: your key or user name, base64 encoded |
2 | Optional: your secret or password, base64 encoded |
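For example, you can generate the base64-encoded values with a command like the following; myuser is a placeholder value:
$ echo -n 'myuser' | base64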
Update the secret by running the following command:
$ oc apply -f endpoint-secret.yaml
Create a DataVolume
configuration that specifies the data source for the image
you want to import and volumeMode: Block
so that an available block PV is used.
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <import-pv-datavolume> (1)
spec:
  storageClassName: local (2)
  source:
    http:
      url: <http://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2> (3)
      secretRef: <endpoint-secret> (4)
  pvc:
    volumeMode: Block (5)
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>
1 | The name of the DataVolume. |
2 | Optional: Set the storage class or omit it to accept the cluster default. |
3 | The HTTP source of the image to import. |
4 | Only required if the data source requires authentication. |
5 | Required for importing to a block PV. |
Create the DataVolume to import the virtual machine image by running the following command:
$ oc create -f <import-pv-datavolume.yaml> (1)
1 | The file name of the DataVolume that you created in the previous step. |
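Optionally, monitor the import by watching the DataVolume and the CDI importer pod; these checks are additions to the documented steps, and the import is complete when the DataVolume reports the Succeeded phase:
$ oc get datavolume <import-pv-datavolume>
$ oc get pods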
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required