You can import a single Red Hat Virtualization (RHV) virtual machine into OpenShift Virtualization by using the VM Import wizard or the CLI.
Importing a RHV VM is a deprecated feature. Deprecated functionality is still included in OpenShift Virtualization and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Virtualization, refer to the Deprecated and removed features section of the OpenShift Virtualization release notes.
This feature will be replaced by the Migration Toolkit for Virtualization.
The following table describes the OpenShift Virtualization storage types that support VM import.
Storage type | RHV VM import
---|---
OpenShift Container Storage: RBD block-mode volumes | Yes
OpenShift Virtualization hostpath provisioner | No
Other multi-node writable storage | Yes [1]
Other single-node writable storage | Yes [2]

1. PVCs must request a ReadWriteMany access mode.
2. PVCs must request a ReadWriteOnce access mode.
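Before you import, you can list the storage classes that exist on the cluster with a standard oc command; the default storage class is marked (default) in the NAME column of the output:

$ oc get storageclass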
Importing a virtual machine from Red Hat Virtualization (RHV) into OpenShift Virtualization has the following prerequisites:
You must have admin user privileges.
Storage:
The OpenShift Virtualization local and shared persistent storage classes must support VM import.
If you are using Ceph RBD block-mode volumes and the available storage space is too small for the virtual disk, the import progress bar stops at 75% for more than 20 minutes and the migration does not succeed. No error message is displayed in the web console. BZ#1910019
Networks:
The RHV and OpenShift Virtualization networks must either have the same name or be mapped to each other.
The RHV VM network interface must be e1000, rtl8139, or virtio.
VM disks:
The disk interface must be sata, virtio_scsi, or virtio.
The disk must not be configured as a direct LUN.
The disk status must not be illegal or locked.
The storage type must be image.
SCSI reservation must be disabled.
ScsiGenericIO must be disabled.
VM configuration:
If the VM uses GPU resources, the nodes providing the GPUs must be configured.
The VM must not be configured for vGPU resources.
The VM must not have snapshots with disks in an illegal state.
The VM must not have been created with OpenShift Container Platform and subsequently added to RHV.
The VM must not be configured for USB devices.
The watchdog model must not be diag288.
You can import a single virtual machine with the VM Import wizard.
In the web console, click Workloads → Virtual Machines.
Click Create Virtual Machine and select Import with Wizard.
Select Red Hat Virtualization (RHV) from the Provider list.
Select Connect to New Instance or a saved RHV instance.
If you select Connect to New Instance, fill in the following fields:
API URL: For example, https://<RHV_Manager_FQDN>/ovirt-engine/api
CA certificate: Click Browse to upload the RHV Manager CA certificate or paste the CA certificate into the field.
View the CA certificate by running the following command:
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
The CA certificate is the second certificate in the output.
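If you want to save only the CA certificate to a file, the following shell sketch is one way to do it; it assumes the chain layout described above, where the CA certificate is the second certificate in the output:

$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null 2>/dev/null \
    | awk '/BEGIN CERTIFICATE/{c++} c==2 {print} /END CERTIFICATE/ && c==2 {exit}' > ca.crt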
Username: RHV Manager user name, for example, ocpadmin@internal
Password: RHV Manager password
If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.
Click Check and Save and wait for the connection to complete.
The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click Workloads → Secrets and delete the provider secret.
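You can also delete the provider secret from the CLI; a minimal sketch, assuming you first list the secrets in your project to find the generated secret name:

$ oc get secrets -n <namespace>
$ oc delete secret <provider_secret_name> -n <namespace>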
Select a cluster and a virtual machine.
Click Next.
In the Review screen, review your settings.
Optional: You can select Start virtual machine on creation.
Click Edit to update the following settings:
General → Name: The VM name is limited to 63 characters.
General → Description: Optional description of the VM.
Storage Class: Select NFS or ocs-storagecluster-ceph-rbd.
If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.
Advanced → Volume Mode: Select Block.
Networking → Network: You can select a network from a list of available network attachment definition objects.
Click Import, or click Review and Import if you have edited the import settings.
A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads → Virtual Machines.
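If you prefer the CLI, you can also confirm that the new VirtualMachine resource exists; a minimal check, assuming the VM was imported into the default project:

$ oc get vm -n default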
Name | Parameter | Description
---|---|---
Name | | The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-).
Description | | Optional description field.
Operating System | | The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template.
Boot Source | Import via URL (creates PVC) | Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image.
 | Clone existing PVC (creates PVC) | Select an existing persistent volume claim available on the cluster and clone it.
 | Import via Registry (creates PVC) | Provision the virtual machine from a bootable operating system container located in a registry accessible from the cluster.
 | PXE (network boot - adds network interface) | Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition.
Persistent Volume Claim project | | Project name that you want to use for cloning the PVC.
Persistent Volume Claim name | | PVC name that should apply to this virtual machine template if you are cloning an existing PVC.
Mount this as a CD-ROM boot source | | A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later.
Flavor | Tiny, Small, Medium, Large, Custom | Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template. If you choose a default template, you can override the cpus and memory values in the template with custom values.
Workload Type | Desktop | A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console.
 | Server | Balances performance and is compatible with a wide range of server workloads.
 | High-Performance | A virtual machine configuration that is optimized for high-performance workloads.
Start this virtual machine after creation. | | This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created.
Name | Description
---|---
Name | Name for the network interface controller.
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio.
Network | List of available network attachment definitions.
Type | List of available binding methods. For the default pod network, masquerade is the only recommended binding method.
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.
Name | Selection | Description
---|---|---
Source | Blank (creates PVC) | Create an empty disk.
 | Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint).
 | Use an existing PVC | Use a PVC that is already available in the cluster.
 | Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it.
 | Import via Registry (creates PVC) | Import content via container registry.
 | Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.
Name | | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-).
Size | | Size of the disk in GiB.
Type | | Type of disk. Example: Disk or CD-ROM
Interface | | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.
Storage Class | | The storage class that is used to create the disk.
Advanced → Volume Mode | | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.
Name | Parameter | Description
---|---|---
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume.
 | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.
Access Mode [1] | Single User (RWO) | The disk can be mounted as read/write by a single node.
 | Shared Access (RWX) | The disk can be mounted as read/write by many nodes.
 | Read Only (ROX) | The disk can be mounted as read-only by many nodes.

1. You can change the access mode by using the command line interface.
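For example, the following is a minimal sketch of a DataVolume manifest that requests an explicit volume mode and access mode; it assumes the cdi.kubevirt.io/v1beta1 API, and the name, size, and storage class are placeholder values rather than values taken from this procedure:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume_name>
spec:
  pvc:
    accessModes:
      - ReadWriteMany # or ReadWriteOnce / ReadOnlyMany
    volumeMode: Block # required for ocs-storagecluster-ceph-rbd
    resources:
      requests:
        storage: 30Gi
    storageClassName: ocs-storagecluster-ceph-rbd
  source:
    blank: {} # creates an empty disk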
You can import a virtual machine with the CLI by creating the Secret and VirtualMachineImport custom resources (CRs). The Secret CR stores the RHV Manager credentials and CA certificate. The VirtualMachineImport CR defines the parameters of the VM import process.
Optional: You can create a ResourceMapping CR that is separate from the VirtualMachineImport CR. A ResourceMapping CR provides greater flexibility, for example, if you import additional RHV VMs.
The default target storage class must be NFS. Cinder does not support RHV VM import.
Create the Secret CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: rhv-credentials
  namespace: default (1)
type: Opaque
stringData:
  ovirt: |
    apiUrl: <api_endpoint> (2)
    username: ocpadmin@internal
    password: (3)
    caCert: |
      -----BEGIN CERTIFICATE-----
      (4)
      -----END CERTIFICATE-----
EOF
1 | Optional. You can specify a different namespace in all the CRs. |
2 | Specify the API endpoint of the RHV Manager, for example, "https://www.example.com:8443/ovirt-engine/api". |
3 | Specify the password for ocpadmin@internal. |
4 | Specify the RHV Manager CA certificate. You can obtain the CA certificate by running the following command: |
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
Optional: Create a ResourceMapping CR if you want to separate the resource mapping from the VirtualMachineImport CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1alpha1
kind: ResourceMapping
metadata:
  name: resourcemapping-example
  namespace: default
spec:
  ovirt:
    networkMappings:
      - source:
          name: <rhv_logical_network>/<vnic_profile> (1)
        target:
          name: <target_network> (2)
        type: pod
    storageMappings: (3)
      - source:
          name: <rhv_storage_domain> (4)
        target:
          name: <target_storage_class> (5)
        volumeMode: <volume_mode> (6)
EOF
1 | Specify the RHV logical network and vNIC profile. |
2 | Specify the OpenShift Virtualization network. |
3 | If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence. |
4 | Specify the RHV storage domain. |
5 | Specify nfs or ocs-storagecluster-ceph-rbd. |
6 | If you specified the ocs-storagecluster-ceph-rbd storage class, you must specify Block as the volume mode. |
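Optional: Verify that the CR was created; this check assumes that the plural resource name registered by the vm-import-operator is resourcemappings:

$ oc get resourcemappings resourcemapping-example -n default -o yaml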
Create the VirtualMachineImport CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: vm-import
  namespace: default
spec:
  providerCredentialsSecret:
    name: rhv-credentials
    namespace: default
#  resourceMapping: (1)
#    name: resourcemapping-example
#    namespace: default
  targetVmName: vm-example (2)
  startVm: true
  source:
    ovirt:
      vm:
        id: <source_vm_id> (3)
        name: <source_vm_name> (4)
        cluster:
          name: <source_cluster_name> (5)
      mappings: (6)
        networkMappings:
          - source:
              name: <source_logical_network>/<vnic_profile> (7)
            target:
              name: <target_network> (8)
            type: pod
        storageMappings: (9)
          - source:
              name: <source_storage_domain> (10)
            target:
              name: <target_storage_class> (11)
            accessMode: <volume_access_mode> (12)
        diskMappings:
          - source:
              id: <source_vm_disk_id> (13)
            target:
              name: <target_storage_class> (14)
EOF
1 | If you create a ResourceMapping CR, uncomment the resourceMapping section. |
2 | Specify the target VM name. |
3 | Specify the source VM ID, for example, 80554327-0569-496b-bdeb-fcbbf52b827b. You can obtain the VM ID by entering https://www.example.com/ovirt-engine/api/vms/ in a web browser on the Manager machine to list all VMs, or from the command line as shown in the sketch after these callouts. Locate the VM you want to import and its corresponding VM ID. You do not need to specify a VM name or cluster name. |
4 | If you specify the source VM name, you must also specify the source cluster. Do not specify the source VM ID. |
5 | If you specify the source cluster, you must also specify the source VM name. Do not specify the source VM ID. |
6 | If you create a ResourceMapping CR, comment out the mappings section. |
7 | Specify the logical network and vNIC profile of the source VM. |
8 | Specify the OpenShift Virtualization network. |
9 | If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence. |
10 | Specify the source storage domain. |
11 | Specify the target storage class. |
12 | Specify ReadWriteOnce, ReadWriteMany, or ReadOnlyMany. If no access mode is specified, OpenShift Virtualization determines the correct volume access mode based on the Host → Migration mode setting of the RHV VM or on the virtual disk access mode. |
13 | Specify the source VM disk ID, for example, 8181ecc1-5db8-4193-9c92-3ddab3be7b05. You can obtain the disk ID by entering https://www.example.com/ovirt-engine/api/vms/vm23 in a web browser on the Manager machine and reviewing the VM details. |
14 | Specify the target storage class. |
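As an alternative to a web browser, you can query the RHV REST API from a shell to find the VM ID; a hedged sketch, assuming standard oVirt search syntax (the user, password, and VM name are placeholders):

$ curl -ks --user 'ocpadmin@internal:<password>' \
    'https://<RHV_Manager_FQDN>/ovirt-engine/api/vms?search=name%3D<source_vm_name>' \
    | grep -o 'id="[^"]*"'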
Follow the progress of the virtual machine import to verify that the import was successful:
$ oc get vmimports vm-import -n default
The output indicating a successful import resembles the following example:
...
status:
  conditions:
    - lastHeartbeatTime: "2020-07-22T08:58:52Z"
      lastTransitionTime: "2020-07-22T08:58:52Z"
      message: Validation completed successfully
      reason: ValidationCompleted
      status: "True"
      type: Valid
    - lastHeartbeatTime: "2020-07-22T08:58:52Z"
      lastTransitionTime: "2020-07-22T08:58:52Z"
      message: 'VM specifies IO Threads: 1, VM has NUMA tune mode specified: interleave'
      reason: MappingRulesVerificationReportedWarnings
      status: "True"
      type: MappingRulesVerified
    - lastHeartbeatTime: "2020-07-22T08:58:56Z"
      lastTransitionTime: "2020-07-22T08:58:52Z"
      message: Copying virtual machine disks
      reason: CopyingDisks
      status: "True"
      type: Processing
  dataVolumes:
    - name: fedora32-b870c429-11e0-4630-b3df-21da551a48c0
  targetVmName: fedora32
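To poll for completion from a script instead of reading the full status, you can query a single condition; this sketch assumes that the operator also reports a Succeeded condition type, which is not visible in the truncated output above:

$ oc get vmimports vm-import -n default -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].status}'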
You can create a config map to map the Red Hat Virtualization (RHV) virtual machine operating system to an OpenShift Virtualization template if you want to override the default vm-import-controller mapping or to add additional mappings.
The default vm-import-controller config map contains the following RHV operating systems and their corresponding common OpenShift Virtualization templates.
RHV VM operating system | OpenShift Virtualization template |
---|---|
In a web browser, identify the REST API name of the RHV VM operating system by navigating to http://<RHV_Manager_FQDN>/ovirt-engine/api/vms/<VM_ID>. The operating system name appears in the <os> section of the XML output, as shown in the following example:
...
<os>
...
  <type>rhel_8x64</type>
</os>
View a list of the available OpenShift Virtualization templates:
$ oc get templates -n openshift --show-labels | tr ',' '\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\1#g' | sort -u
fedora31
fedora32
...
rhel8.1
rhel8.2
...
If an OpenShift Virtualization template that matches the RHV VM operating system does not appear in the list of available templates, create a template with the OpenShift Virtualization web console.
Create a config map to map the RHV VM operating system to the OpenShift Virtualization template:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default (1)
data:
  guestos2common: |
    "Red Hat Enterprise Linux Server": "rhel"
    "CentOS Linux": "centos"
    "Fedora": "fedora"
    "Ubuntu": "ubuntu"
    "openSUSE": "opensuse"
  osinfo2common: |
    "<rhv-operating-system>": "<vm-template>" (2)
EOF
1 | Optional: You can change the value of the namespace parameter. |
2 | Specify the REST API name of the RHV operating system and its corresponding VM template as shown in the following example. |
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default
data:
  osinfo2common: |
    "other_linux": "fedora31"
EOF
Verify that the custom config map was created:
$ oc get cm -n default os-configmap -o yaml
Patch the vm-import-controller-config config map to apply the new config map:
$ oc patch configmap vm-import-controller-config -n openshift-cnv --patch '{
"data": {
"osconfigmap.name": "os-configmap",
"osconfigmap.namespace": "default" (1)
}
}'
1 | Update the namespace if you changed it in the config map. |
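Optional: Verify that the patch was applied by viewing the controller configuration, using a standard oc command:

$ oc get configmap vm-import-controller-config -n openshift-cnv -o yaml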
Verify that the template appears in the OpenShift Virtualization web console:
Click Workloads → Virtualization from the side menu.
Click the Virtual Machine Templates tab and find the template in the list.
You can check the VM Import Controller pod log for errors.
View the VM Import Controller pod name by running the following command:
$ oc get pods -n <namespace> | grep import (1)
1 | Specify the namespace of your imported virtual machine. |
vm-import-controller-f66f7d-zqkz7 1/1 Running 0 4h49m
View the VM Import Controller pod log by running the following command:
$ oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> (1)
1 | Specify the VM Import Controller pod name and the namespace. |
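If you do not want to copy the pod name manually, you can combine the two commands; a minimal sketch, assuming a single VM Import Controller pod is running in the namespace:

$ oc logs -f -n <namespace> $(oc get pods -n <namespace> -o name | grep vm-import-controller)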
The following error message is displayed in the VM Import Controller pod log, and the progress bar stops at 10%, if the OpenShift Virtualization storage PV is not suitable:
Failed to bind volumes: provisioning failed for PVC
You must use a compatible storage class. The Cinder storage class is not supported.
If you are using Ceph RBD block-mode volumes and the available storage space is too small for the virtual disk, the import progress bar stops at 75% for more than 20 minutes and the migration does not succeed. No error message is displayed in the web console. BZ#1910019