Importing a VMware virtual machine or template - Virtual machines | OpenShift Virtualization | OpenShift Container Platform 4.8

You can import a VMware vSphere 6.5, 6.7, or 7.0 VM or VM template into OpenShift Virtualization by using the VM Import wizard. If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.

Importing a VMware VM is a deprecated feature. Deprecated functionality is still included in OpenShift Virtualization and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OpenShift Virtualization, refer to the Deprecated and removed features section of the OpenShift Virtualization release notes.

This feature will be replaced by the Migration Toolkit for Virtualization.

OpenShift Virtualization storage feature matrix

The following table describes the OpenShift Virtualization storage types that support VM import.

  Storage type                                          VMware VM import

  OpenShift Container Storage: RBD block-mode volumes   Yes
  OpenShift Virtualization hostpath provisioner         Yes
  Other multi-node writable storage                     Yes [1]
  Other single-node writable storage                    Yes [2]

  1. PVCs must request a ReadWriteMany access mode.

  2. PVCs must request a ReadWriteOnce access mode.
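For reference, a PVC that requests the ReadWriteMany access mode looks like the following. This is a minimal sketch with placeholder name, size, and storage class; substitute values for your environment:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc_name>          # placeholder
    spec:
      accessModes:
        - ReadWriteMany          # use ReadWriteOnce for single-node writable storage
      resources:
        requests:
          storage: 10Gi          # example size
      storageClassName: <storage_class>   # placeholder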

Preparing a VDDK image

The import process uses the VMware Virtual Disk Development Kit (VDDK) to copy the VMware virtual disk.

You can download the VDDK SDK, create a VDDK image, upload the image to an image registry, and add it to the spec.vddkInitImage field of the HyperConverged custom resource (CR).

You can configure either an internal OpenShift Container Platform image registry or a secure external image registry for the VDDK image. The registry must be accessible to your OpenShift Virtualization environment.

Storing the VDDK image in a public registry might violate the terms of the VMware license.

Configuring an internal image registry

You can configure the internal OpenShift Container Platform image registry on bare metal by updating the Image Registry Operator configuration.

You can access the registry directly, from within the OpenShift Container Platform cluster, or externally, by exposing the registry with a route.

Changing the image registry’s management state

To start the image registry, you must change the Image Registry Operator configuration’s managementState from Removed to Managed.

Procedure
  • Change the managementState parameter in the Image Registry Operator configuration from Removed to Managed. For example:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
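
    You can verify the new state afterwards. For example, the following read-only check assumes the default configuration object name, cluster:

    $ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'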

Configuring registry storage for bare metal and other manual installations

As a cluster administrator, following installation you must configure your registry to use storage.

Prerequisites
  • You have access to the cluster as a user with the cluster-admin role.

  • You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal.

  • You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage.

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • Your persistent storage must have 100Gi capacity.

Procedure
  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

    When using shared storage, review your security settings to prevent outside access.
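
    For example, the following merge patch sets an empty claim, which allows the Operator to create the image-registry-storage PVC automatically. This is a sketch; set a claim name instead if you provision the PVC yourself:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
        --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'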

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry -l docker-registry=default
    Example output
    No resources found in openshift-image-registry namespace

    If you do have a registry pod in your output, you do not need to continue with this procedure.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io
    Example output
    storage:
      pvc:
        claim:

    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.

  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
    Example output
    NAME             VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    image-registry   4.7                                  True        False         False      6h50m
  5. Ensure that your registry is set to managed to enable building and pushing of images.

    • Run:

      $ oc edit configs.imageregistry/cluster

      Then, change the line

      managementState: Removed

      to

      managementState: Managed

Accessing the registry directly from the cluster

You can access the registry from inside the cluster.

Procedure

Access the registry from the cluster by using internal routes:

  1. Access the node by getting the node’s name:

    $ oc get nodes
    $ oc debug nodes/<node_name>
  2. To enable access to tools such as oc and podman on the node, change your root directory to /host:

    sh-4.2# chroot /host
  3. Log in to the container image registry by using your access token:

    sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
    sh-4.2# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000

    You should see a message confirming login, such as:

    Login Succeeded!

    You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure.

    Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name>.

  4. Perform podman pull and podman push operations against your registry:

    You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.

    In the following examples, use these values:

      Component        Value
      <registry_ip>    172.30.124.220
      <port>           5000
      <project>        openshift
      <image>          image
      <tag>            omitted (defaults to latest)

    1. Pull an arbitrary image:

      sh-4.2# podman pull name.io/image
    2. Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry:

      sh-4.2# podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image

      You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the next step will fail. To test, you can create a new project to push the image.

    3. Push the newly tagged image to your registry:

      sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/image
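
      After the push, you can confirm that the image is in the registry. A quick check, assuming the example project and image name used above:

      sh-4.2# oc get imagestream image -n openshift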

Exposing a secure registry manually

Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host.

Prerequisites
  • The following prerequisites are performed automatically:

    • The Registry Operator is deployed.

    • The Ingress Operator is deployed.

Procedure

You can expose the registry by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes.

To expose the registry using defaultRoute:

  1. Set defaultRoute to true:

    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
  2. Log in with podman:

    $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
    $ podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST (1)
    1 --tls-verify=false is needed if the cluster’s default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator.
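
    For example, you can then tag and push an image over the exposed route. A sketch, assuming a locally available image and an existing project named openshift:

    $ podman tag name.io/image $HOST/openshift/image
    $ podman push --tls-verify=false $HOST/openshift/image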

To expose the registry using custom routes:

  1. Create a secret with your route’s TLS keys:

    $ oc create secret tls public-route-tls \
        -n openshift-image-registry \
        --cert=</path/to/tls.crt> \
        --key=</path/to/tls.key>

    This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator.

  2. Add the route definition to the spec.routes field of the configs.imageregistry.operator.openshift.io resource:

    spec:
      routes:
        - name: public-routes
          hostname: myregistry.mycorp.organization
          secretName: public-route-tls
    ...

    Only set secretName if you are providing a custom TLS configuration for the registry’s route.
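
    If you prefer not to edit the resource interactively, a merge patch achieves the same result. A sketch using the example values above; note that a merge patch replaces the entire routes array:

    $ oc patch configs.imageregistry.operator.openshift.io/cluster --type merge \
        --patch '{"spec":{"routes":[{"name":"public-routes","hostname":"myregistry.mycorp.organization","secretName":"public-route-tls"}]}}'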

Configuring an external image registry

If you use an external image registry for the VDDK image, you can add the external image registry’s certificate authorities to the OpenShift Container Platform cluster.

Optionally, you can create a pull secret from your Docker credentials and add it to your service account.

Adding certificate authorities to the cluster

You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure.

Prerequisites
  • You must have cluster administrator privileges.

  • You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory.

Procedure
  1. Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format:

    $ oc create configmap registry-cas -n openshift-config \
    --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
    --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
  2. Update the cluster image configuration:

    $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
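
    You can verify that the config map is referenced. A quick read-only check:

    $ oc get image.config.openshift.io/cluster -o jsonpath='{.spec.additionalTrustedCA.name}'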

Allowing pods to reference images from other secured registries

The .dockercfg file, or $HOME/.docker/config.json for newer Docker clients, is a Docker credentials file that stores your authentication information if you have previously logged in to a secured or insecure registry.

To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.

Procedure
  • If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockercfg=<path/to/.dockercfg> \
        --type=kubernetes.io/dockercfg
  • Or if you have a $HOME/.docker/config.json file:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
        --type=kubernetes.io/dockerconfigjson
  • If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:

    $ oc create secret docker-registry <pull_secret_name> \
        --docker-server=<registry_server> \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>
  • To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default:

    $ oc secrets link default <pull_secret_name> --for=pull
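
    You can verify that the secret is linked. A quick check; the secret name appears under Image pull secrets in the output:

    $ oc describe serviceaccount default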

Creating and using a VDDK image

You can download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You then add the VDDK image to the spec.vddkInitImage field of the HyperConverged custom resource (CR).

Prerequisites
  • You must have access to an OpenShift Container Platform internal image registry or a secure external registry.

Procedure
  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to VMware code and click SDKs.

  3. Under Compute Virtualization, click Virtual Disk Development Kit (VDDK).

  4. Select the VDDK version that corresponds to your VMware vSphere version, for example, VDDK 7.0 for vSphere 7.0, click Download, and then save the VDDK archive in the temporary directory.

  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM busybox:latest
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag> (1)
    1 Specify your image registry:
    • For an internal OpenShift Container Platform registry, use the internal registry route, for example, image-registry.openshift-image-registry.svc:5000/openshift/vddk:<tag>.

    • For an external registry, specify the server name, path, and tag, for example, server.example.com:5000/vddk:<tag>.

  8. Push the image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your OpenShift Virtualization environment.
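    For example, you can verify accessibility by pulling the image back from the registry. A quick check, run from a host that can reach the registry and is logged in to it:

    $ podman pull <registry_route_or_server_path>/vddk:<tag>
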

  10. Edit the HyperConverged CR in the openshift-cnv project:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  11. Add the vddkInitImage parameter to the spec stanza:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
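
    Alternatively, you can set the field non-interactively with a merge patch. A sketch using the same placeholder image path:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge \
        --patch '{"spec":{"vddkInitImage":"<registry_route_or_server_path>/vddk:<tag>"}}'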

Importing a virtual machine with the VM Import wizard

You can import a single virtual machine with the VM Import wizard.

You can also import a VM template. If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.

Prerequisites
  • You must have admin user privileges.

  • The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your OpenShift Virtualization environment.

  • The VDDK image must be added to the spec.vddkInitImage field of the HyperConverged custom resource (CR).

  • The VM must be powered off.

  • Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM.

  • The OpenShift Virtualization local and shared persistent storage classes must support VM import.

  • The OpenShift Virtualization storage must be large enough to accommodate the virtual disk.

    If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end.

  • The OpenShift Virtualization egress network policy must allow the following traffic:

    Destination          Protocol   Port

    VMware ESXi hosts    TCP        443
    VMware ESXi hosts    TCP        902
    VMware vCenter       TCP        5840
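
    The following NetworkPolicy is a minimal sketch of such an egress rule, assuming that your cluster network provider enforces egress policies and using a placeholder CIDR for your ESXi hosts and vCenter; adapt the pod selector, namespace, and CIDR to your environment:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-vmware-import-egress   # hypothetical name
    spec:
      podSelector: {}                    # selects all pods in the namespace
      policyTypes:
        - Egress
      egress:
        - to:
            - ipBlock:
                cidr: <esxi_and_vcenter_cidr>   # placeholder
          ports:
            - protocol: TCP
              port: 443
            - protocol: TCP
              port: 902
            - protocol: TCP
              port: 5840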

Procedure
  1. In the web console, click Workloads → Virtual Machines.

  2. Click Create Virtual Machine and select Import with Wizard.

  3. Select VMware from the Provider list.

  4. Select Connect to New Instance or a saved vCenter instance.

    • If you select Connect to New Instance, enter the vCenter hostname, Username, and Password.

    • If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials.

  5. Click Check and Save and wait for the connection to complete.

    The connection details are stored in a secret. If you add a provider with an incorrect hostname, user name, or password, click Workloads → Secrets and delete the provider secret.

  6. Select a virtual machine or a template.

  7. Click Next.

  8. In the Review screen, review your settings.

  9. Click Edit to update the following settings:

    • General:

      • Description

      • Operating System

      • Flavor

      • Memory

      • CPUs

      • Workload Profile

    • Networking:

      • Name

      • Model

      • Network

      • Type

      • MAC Address

    • Storage: Click the Options menu of the VM disk and select Edit to update the following fields:

      • Name

      • Source: For example, Import Disk.

      • Size

      • Interface

      • Storage Class: Select nfs or ocs-storagecluster-ceph-rbd (ceph-rbd).

        If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.

        Other storage classes might work, but they are not officially supported.

      • Advanced → Volume Mode: Select Block.

      • Advanced → Access Mode

    • Advanced → Cloud-init:

      • Form: Enter the Hostname and Authenticated SSH Keys.

      • Custom script: Enter the cloud-init script in the text field.

    • Advanced → Virtual Hardware: You can attach a virtual CD-ROM to the imported virtual machine.

  10. Click Import or Review and Import, if you have edited the import settings.

    A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads → Virtual Machines.

Virtual machine wizard fields

  • Name: The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

  • Description: Optional description field.

  • Operating System: The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template.

  • Boot Source:

    • Import via URL (creates PVC): Import content from an image available from an HTTP or HTTPS endpoint. Example: obtaining a URL link from the web page with the operating system image.

    • Clone existing PVC (creates PVC): Select an existing persistent volume claim available on the cluster and clone it.

    • Import via Registry (creates PVC): Provision the virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

    • PXE (network boot - adds network interface): Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition.

  • Persistent Volume Claim project: Project name that you want to use for cloning the PVC.

  • Persistent Volume Claim name: PVC name that applies to this virtual machine template if you are cloning an existing PVC.

  • Mount this as a CD-ROM boot source: A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later.

  • Flavor: Tiny, Small, Medium, Large, or Custom. Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template.

    If you choose a default template, you can override the cpus and memsize values in the template using custom values to create a custom template. Alternatively, you can create a custom template by modifying the cpus and memsize values in the Details tab on the Workloads → Virtualization page.

  • Workload Type: If you choose the incorrect workload type, there could be performance or resource utilization issues (such as a slow UI).

    • Desktop: A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console.

    • Server: Balances performance and is compatible with a wide range of server workloads.

    • High-Performance: A virtual machine configuration that is optimized for high-performance workloads.

  • Start this virtual machine after creation: This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created.

Cloud-init fields

  • Hostname: Sets a specific hostname for the virtual machine.

  • Authorized SSH Keys: The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

  • Custom script: Replaces other options with a field in which you paste a custom cloud-init script.

Networking fields

  • Name: Name for the network interface controller.

  • Model: Indicates the model of the network interface controller. Supported values are e1000e and virtio.

  • Network: List of available network attachment definitions.

  • Type: List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks. Select SR-IOV if you configured an SR-IOV network device and defined that network in the namespace.

  • MAC Address: MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

Storage fields

  • Source:

    • Blank (creates PVC): Create an empty disk.

    • Import via URL (creates PVC): Import content via URL (HTTP or HTTPS endpoint).

    • Use an existing PVC: Use a PVC that is already available in the cluster.

    • Clone existing PVC (creates PVC): Select an existing PVC available in the cluster and clone it.

    • Import via Registry (creates PVC): Import content via container registry.

    • Container (ephemeral): Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only file systems such as CD-ROMs or temporary virtual machines.

  • Name: Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

  • Size: Size of the disk in GiB.

  • Type: Type of disk. Example: Disk or CD-ROM.

  • Interface: Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

  • Storage Class: The storage class that is used to create the disk.

  • Advanced → Volume Mode: Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

  • Advanced → Access Mode: Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX).

Advanced storage settings

The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.

  • Volume Mode:

    • Filesystem: Stores the virtual disk on a file system-based volume.

    • Block: Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

  • Access Mode:

    • Single User (RWO): The disk can be mounted as read/write by a single node.

    • Shared Access (RWX): The disk can be mounted as read/write by many nodes. This is required for some features, such as live migration of virtual machines between nodes.

    • Read Only (ROX): The disk can be mounted as read-only by many nodes.

Updating the NIC name of an imported virtual machine

You must update the NIC name of a virtual machine that you imported from VMware to conform to OpenShift Virtualization naming conventions.

Procedure
  1. Log in to the virtual machine.

  2. Navigate to the /etc/sysconfig/network-scripts directory.

  3. Rename the network configuration file:

    $ mv vmnic0 ifcfg-eth0 (1)
    1 The first network configuration file is named ifcfg-eth0. Additional network configuration files are numbered sequentially, for example, ifcfg-eth1, ifcfg-eth2.
  4. Update the NAME and DEVICE parameters in the network configuration file:

    NAME=eth0
    DEVICE=eth0
  5. Restart the network:

    $ systemctl restart network
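
    You can verify the renamed interface afterwards. A quick check:

    $ ip addr show eth0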

Troubleshooting a virtual machine import

Logs

You can check the V2V Conversion pod log for errors.

Procedure
  1. View the V2V Conversion pod name by running the following command:

    $ oc get pods -n <namespace> | grep v2v (1)
    1 Specify the namespace of your imported virtual machine.
    Example output
    kubevirt-v2v-conversion-f66f7d-zqkz7            1/1     Running     0          4h49m
  2. View the V2V Conversion pod log by running the following command:

    $ oc logs <kubevirt-v2v-conversion-f66f7d-zqkz7> -f -n <namespace> (1)
    1 Specify the VM Conversion pod name and the namespace.

Error messages

The following error messages might appear:

  • If the VMware VM is not shut down before import, the imported virtual machine displays the error message, Readiness probe failed in the OpenShift Container Platform console and the V2V Conversion pod log displays the following error message:

    INFO - have error: ('virt-v2v error: internal error: invalid argument: libvirt domain 'v2v_migration_vm_1' is running or paused. It must be shut down in order to perform virt-v2v conversion',)
  • The following error message is displayed in the OpenShift Container Platform console if a non-admin user tries to import a VM:

    Could not load config map vmware-to-kubevirt-os in kube-public namespace
    Restricted Access: configmaps "vmware-to-kubevirt-os" is forbidden: User cannot get resource "configmaps" in API group "" in the namespace "kube-public"

    Only an admin user can import a VM.