You can import a single Red Hat Virtualization (RHV) virtual machine into your OpenShift Container Platform cluster by using the virtual machine wizard or the CLI.
The following table describes the local and shared persistent storage options that support VM import.

| Storage | RHV VM import |
|---|---|
| OpenShift Container Storage: RBD block-mode volumes | Yes |
| OpenShift Virtualization hostpath provisioner | No |
| Other multi-node writable storage | Yes [1] |
| Other single-node writable storage | Yes [2] |

1. PVCs must request a ReadWriteMany access mode.
2. PVCs must request a ReadWriteOnce access mode.
Importing a virtual machine into OpenShift Virtualization has the following prerequisites:
You must have admin user privileges.
Storage:
The OpenShift Virtualization local and shared persistent storage classes must support VM import.
If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released.
Networks:
The source and target networks must either have the same name or be mapped to each other.
The source network interface must be e1000, rtl8139, or virtio.
VM disks:
The disk interface must be sata, virtio_scsi, or virtio.
The disk must not be configured as a direct LUN.
The disk status must not be illegal or locked.
The storage type must be image.
SCSI reservation must be disabled.
ScsiGenericIO must be disabled.
VM configuration:
If the VM uses GPU resources, the nodes providing the GPUs must be configured.
The VM must not be configured for vGPU resources.
The VM must not have snapshots with disks in an illegal state.
The VM must not have been created with OpenShift Container Platform and subsequently added to RHV.
The VM must not be configured for USB devices.
The watchdog model must not be diag288.
You must check the default storage class to ensure that it is NFS. If Cinder is the default storage class, VM import fails because Cinder does not support VM import.
You can check the default storage class in the OpenShift Container Platform console. If the default storage class is not NFS, you can remove the default designation from it and make an NFS storage class the default instead.
If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.
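The alphabetical-order tie-break can be illustrated with a quick shell sketch; the storage class names here are made up for illustration:

```shell
# Hypothetical scenario: two storage classes, "standard" and "managed-nfs",
# are both annotated as default. Sorting the names and taking the first
# mirrors how the first-in-alphabetical-order class is selected.
printf '%s\n' standard managed-nfs | sort | head -n 1
```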
Navigate to Storage → Storage Classes.
Check the default storage class in the Storage Classes list.
If the default storage class is not NFS, edit the default storage class so that it is no longer the default:
Click the Options menu of the default storage class and select Edit Storage Class.
In the Details tab, click the Edit button beside Annotations.
Click the Delete button on the right side of the storageclass.kubernetes.io/is-default-class annotation and then click Save.
Change an existing NFS storage class to be the default:
Click the Options menu of an existing NFS storage class and select Edit Storage Class.
In the Details tab, click the Edit button beside Annotations.
Enter storageclass.kubernetes.io/is-default-class in the Key field and true in the Value field, and then click Save.
Navigate to Storage → Storage Classes to verify that the NFS storage class is the only default storage class.
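For reference, a storage class marked as default carries the annotation shown below. This is a sketch; the class name and provisioner are illustrative, not taken from your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs                          # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/nfs         # illustrative provisioner
```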
You can check the default storage class from the CLI.
If the default storage class is not NFS, you must change the default storage class to NFS and change the existing default storage class so that it is not the default. If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.
Get the storage classes by entering the following command:
$ oc get sc
The default storage class is displayed in the output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANS
...
standard (default) kubernetes.io/cinder Delete WaitForFirstConsumer true
If you are using AWS, use the following process to change the default storage class. This process assumes you have two storage classes defined, gp2 and standard, and you want to change the default storage class from gp2 to standard.
List the storage class:
$ oc get storageclass
NAME TYPE
gp2 (default) kubernetes.io/aws-ebs (1)
standard kubernetes.io/aws-ebs
1 | (default) denotes the default storage class. |
Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class:
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true.
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
Verify the changes:
$ oc get storageclass
NAME TYPE
gp2 kubernetes.io/aws-ebs
standard (default) kubernetes.io/aws-ebs
You can create a ConfigMap to map the Red Hat Virtualization (RHV) virtual machine operating system to an OpenShift Virtualization template if you want to override the default vm-import-controller mapping or to add additional mappings.

The default vm-import-controller ConfigMap contains the following RHV operating systems and their corresponding common OpenShift Virtualization templates.
| RHV VM operating system | OpenShift Virtualization template |
|---|---|
In a web browser, identify the REST API name of the RHV VM operating system by navigating to http://<RHV_Manager_FQDN>/ovirt-engine/api/vms/<VM_ID>. The operating system name appears in the <os> section of the XML output, as shown in the following example:
...
<os>
...
<type>rhel_8x64</type>
</os>
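If you save the API response to a file, you can pull the operating system name out without reading through the XML. The following sed sketch assumes the response was saved to a hypothetical file named vm.xml:

```shell
# Save a minimal copy of the <os> section for illustration.
cat > vm.xml <<'XML'
<os>
  <type>rhel_8x64</type>
</os>
XML

# Print only the text inside <type>…</type>.
sed -n 's:.*<type>\(.*\)</type>.*:\1:p' vm.xml
```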
View a list of the available OpenShift Virtualization templates:
$ oc get templates -n openshift --show-labels | tr ',' '\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\1#g' | sort -u
fedora31
fedora32
...
rhel8.1
rhel8.2
...
If an OpenShift Virtualization template that matches the RHV VM operating system does not appear in the list of available templates, create a template with the OpenShift Virtualization web console.
Create a ConfigMap to map the RHV VM operating system to the OpenShift Virtualization template:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default (1)
data:
  guestos2common: |
    "Red Hat Enterprise Linux Server": "rhel"
    "CentOS Linux": "centos"
    "Fedora": "fedora"
    "Ubuntu": "ubuntu"
    "openSUSE": "opensuse"
  osinfo2common: |
    "<rhv-operating-system>": "<vm-template>" (2)
EOF
1 | Optional: You can change the value of the namespace parameter. |
2 | Specify the REST API name of the RHV operating system and its corresponding VM template as shown in the following example. |
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default
data:
  osinfo2common: |
    "other_linux": "fedora31"
EOF
Verify that the custom ConfigMap was created:
$ oc get cm -n default os-configmap -o yaml
Edit the kubevirt-hyperconverged-operator.v2.4.9.yaml file:
$ oc edit clusterserviceversion -n openshift-cnv kubevirt-hyperconverged-operator.v2.4.9
Update the following parameters of the vm-import-operator deployment manifest:
...
spec:
  containers:
  - env:
    ...
    - name: OS_CONFIGMAP_NAME
      value: os-configmap (1)
    - name: OS_CONFIGMAP_NAMESPACE
      value: default (2)
1 | Add value: os-configmap to the name: OS_CONFIGMAP_NAME parameter. |
2 | Optional: You can add this value if you changed the namespace in the ConfigMap. |
Save the kubevirt-hyperconverged-operator.v2.4.9.yaml file.
Updating the vm-import-operator deployment updates the vm-import-controller ConfigMap.
Verify that the template appears in the OpenShift Virtualization web console:
Click Workloads → Virtualization from the side menu.
Click the Virtual Machine Templates tab and find the template in the list.
You can import a single virtual machine with the VM Import wizard.
In the web console, click Workloads → Virtual Machines.
Click Create Virtual Machine and select Import with Wizard.
Select Red Hat Virtualization (RHV) from the Provider list.
Select Connect to New Instance or a saved RHV instance.
If you select Connect to New Instance, fill in the following fields:
API URL: For example, https://<RHV_Manager_FQDN>/ovirt-engine/api
CA certificate: Click Browse to upload the RHV Manager CA certificate or paste the CA certificate into the field.
View the CA certificate by running the following command:
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
The CA certificate is the second certificate in the output.
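One way to isolate that second certificate is to save the openssl output to a file and filter it with awk. This is a sketch; chain.txt is a hypothetical file created by redirecting the command above:

```shell
# chain.txt is assumed to hold the saved output of:
#   openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
# Count BEGIN CERTIFICATE markers and print only the second
# BEGIN/END block, which is the CA certificate.
awk '/BEGIN CERTIFICATE/ { n++ }
     n == 2 { print }
     /END CERTIFICATE/ { if (n == 2) exit }' chain.txt
```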
Username: RHV Manager user name, for example, admin@internal
Password: RHV Manager password
If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.
Click Check and Save and wait for the connection to complete.
The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click Workloads → Secrets and delete the provider secret. |
Select a cluster and a virtual machine.
Click Next.
In the Review screen, review your settings.
Optional: You can select Start virtual machine on creation.
Click Edit to update the following settings:
General → Name: The VM name is limited to 63 characters.
General → Description: Optional description of the VM.
Storage Class: Select NFS or ocs-storagecluster-ceph-rbd.
If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.
Advanced → Volume Mode: Select Block.
Networking → Network: You can select a network from a list of available network attachment definition objects.
Click Import or Review and Import, if you have edited the import settings.
A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads → Virtual Machines.
| Name | Parameter | Description |
|---|---|---|
| Template | | Template from which to create the virtual machine. Selecting a template automatically completes other fields. |
| Source | PXE | Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster. |
| | URL | Provision virtual machine from an image available from an HTTP or S3 endpoint. |
| | Container | Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. |
| | Disk | Provision virtual machine from a disk. |
| Operating System | | The primary operating system that is selected for the virtual machine. |
| Flavor | small, medium, large, tiny, Custom | Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system. |
| Memory | | Size in GiB of the memory allocated to the virtual machine. |
| CPUs | | The amount of CPU allocated to the virtual machine. |
| Workload Profile | High Performance | A virtual machine configuration that is optimized for high-performance workloads. |
| | Server | A profile optimized to run server workloads. |
| | Desktop | A virtual machine configuration for use on a desktop. |
| Name | | The name can contain lowercase letters (a-z), numbers, and hyphens. |
| Description | | Optional description field. |
| Start virtual machine on creation | | Select to automatically start the virtual machine upon creation. |
| Name | Description |
|---|---|
| Name | Name for the Network Interface Card. |
| Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO. |
| Network | List of available NetworkAttachmentDefinition objects. |
| Type | List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. |
| MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session. |
| Name | Description |
|---|---|
| Source | Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs). |
| Name | Name of the disk. The name can contain lowercase letters (a-z), numbers, and hyphens. |
| Size (GiB) | Size, in GiB, of the disk. |
| Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
| Storage Class | The storage class that is used to create the disk. |

Advanced → Volume Mode:
| Name | Parameter | Description |
|---|---|---|
| Volume Mode | Filesystem | Stores the virtual disk on a filesystem-based volume. |
| | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
| Access Mode [1] | Single User (RWO) | The disk can be mounted as read/write by a single node. |
| | Shared Access (RWX) | The disk can be mounted as read/write by many nodes. |
| | Read Only (ROX) | The disk can be mounted as read-only by many nodes. |

1. You can change the access mode by using the command line interface.
You can import a Red Hat Virtualization (RHV) virtual machine with the CLI by creating the Secret and VirtualMachineImport Custom Resources (CRs). The Secret CR stores the RHV Manager credentials and CA certificate. The VirtualMachineImport CR defines the parameters of the VM import process.
Optional: You can create a ResourceMapping CR that is separate from the VirtualMachineImport CR. A ResourceMapping CR provides greater flexibility, for example, if you import additional RHV VMs.
The default target storage class must be NFS. Cinder does not support RHV VM import.
Create the Secret CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: rhv-credentials
  namespace: default (1)
type: Opaque
stringData:
  ovirt: |
    apiUrl: <api_endpoint> (2)
    username: admin@internal
    password: (3)
    caCert: |
      -----BEGIN certificate-----
      (4)
      -----END certificate-----
EOF
1 | Optional. You can specify a different namespace in all the CRs. |
2 | Specify the API endpoint of the RHV Manager, for example, "https://www.example.com:8443/ovirt-engine/api". |
3 | Specify the password for admin@internal. |
4 | Specify the RHV Manager CA certificate. You can obtain the CA certificate by running the following command: |
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
Optional: Create a ResourceMapping CR if you want to separate the resource mapping from the VirtualMachineImport CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1alpha1
kind: ResourceMapping
metadata:
  name: resourcemapping-example
  namespace: default
spec:
  ovirt:
    networkMappings:
    - source:
        name: <rhv_logical_network>/<vnic_profile> (1)
      target:
        name: <target_network> (2)
      type: pod
    storageMappings: (3)
    - source:
        name: <rhv_storage_domain> (4)
      target:
        name: <target_storage_class> (5)
      volumeMode: <volume_mode> (6)
EOF
1 | Specify the RHV logical network and vNIC profile. |
2 | Specify the OpenShift Virtualization network. |
3 | If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence. |
4 | Specify the RHV storage domain. |
5 | Specify nfs or ocs-storagecluster-ceph-rbd . |
6 | If you specified the ocs-storagecluster-ceph-rbd storage class, you must specify Block as the volume mode. |
Create the VirtualMachineImport CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1alpha1
kind: VirtualMachineImport
metadata:
  name: vm-import
  namespace: default
spec:
  providerCredentialsSecret:
    name: rhv-credentials
    namespace: default
  # resourceMapping: (1)
  #   name: resourcemapping-example
  #   namespace: default
  targetVmName: vm-example (2)
  startVm: true
  source:
    ovirt:
      vm:
        id: <source_vm_id> (3)
        name: <source_vm_name> (4)
        cluster:
          name: <source_cluster_name> (5)
      mappings: (6)
        networkMappings:
        - source:
            name: <source_logical_network>/<vnic_profile> (7)
          target:
            name: <target_network> (8)
          type: pod
        storageMappings: (9)
        - source:
            name: <source_storage_domain> (10)
          target:
            name: <target_storage_class> (11)
          accessMode: <volume_access_mode> (12)
        diskMappings:
        - source:
            id: <source_vm_disk_id> (13)
          target:
            name: <target_storage_class> (14)
EOF
1 | If you create a ResourceMapping CR, uncomment the resourceMapping section. |
2 | Specify the target VM name. |
3 | Specify the source VM ID, for example, 80554327-0569-496b-bdeb-fcbbf52b827b . You can obtain the VM ID by entering https://www.example.com/ovirt-engine/api/vms/ in a web browser on the Manager machine to list all VMs. Locate the VM you want to import and its corresponding VM ID. You do not need to specify a VM name or cluster name. |
4 | If you specify the source VM name, you must also specify the source cluster. Do not specify the source VM ID. |
5 | If you specify the source cluster, you must also specify the source VM name. Do not specify the source VM ID. |
6 | If you create a ResourceMapping CR, comment out the mappings section. |
7 | Specify the logical network and vNIC profile of the source VM. |
8 | Specify the OpenShift Virtualization network. |
9 | If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence. |
10 | Specify the source storage domain. |
11 | Specify the target storage class. |
12 | Specify ReadWriteOnce, ReadWriteMany, or ReadOnlyMany. If no access mode is specified, OpenShift Virtualization determines the correct volume access mode based on the Host → Migration mode setting of the RHV VM or on the virtual disk access mode. |
13 | Specify the source VM disk ID, for example, 8181ecc1-5db8-4193-9c92-3ddab3be7b05 . You can obtain the disk ID by entering https://www.example.com/ovirt-engine/api/vms/vm23 in a web browser on the Manager machine and reviewing the VM details. |
14 | Specify the target storage class. |
You can cancel a virtual machine import in progress by using the web console.
Click Workloads → Virtual Machines.
Click the Options menu of the virtual machine you are importing and select Delete Virtual Machine.
In the Delete Virtual Machine window, click Delete.
The virtual machine is removed from the list of virtual machines.
You can check the VM Import Controller Pod log for errors.
View the VM Import Controller Pod name by running the following command:
$ oc get pods -n <namespace> | grep import (1)
1 | Specify the namespace of your imported virtual machine. |
vm-import-controller-f66f7d-zqkz7 1/1 Running 0 4h49m
View the VM Import Controller Pod log by running the following command:
$ oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> (1)
1 | Specify the VM Import Controller Pod name and the namespace. |
The following error message is displayed in the VM Import Controller Pod log, and the progress bar stops at 10%, if the OpenShift Virtualization storage PV is not suitable:
Failed to bind volumes: provisioning failed for PVC
You must use a compatible storage class. The Cinder storage class is not supported.