Using the default Pod network with OpenShift Virtualization

You can use the default Pod network with OpenShift Virtualization. To do so, you must use the masquerade binding method. It is the only recommended binding method for use with the default Pod network. Do not use masquerade mode with non-default networks.

For secondary networks, use the bridge binding method.

The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. It is not enabled by default. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.
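
For example, assuming the mutatevirtualmachines.kubemacpool.io label that KubeMacPool acts on, you can enable the MAC address pool for a namespace from the command line:

$ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=allocate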

Configuring masquerade mode from the command line

You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the Pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge.

Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.

Prerequisites
  • The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.

Procedure
  1. Edit the interfaces spec of your virtual machine configuration file:

    kind: VirtualMachine
    spec:
      domain:
        devices:
          interfaces:
            - name: red
              masquerade: {} (1)
              ports:
                - port: 80 (2)
      networks:
      - name: red
        pod: {}
    1 Connect using masquerade mode.
    2 Allow incoming traffic on port 80.
  2. Create the virtual machine:

    $ oc create -f <vm-name>.yaml
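
  3. Optional: after the virtual machine starts, verify that it is attached to the Pod network by checking the IP address that is reported for the virtual machine instance:

    $ oc get vmi <vm-name>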

Selecting binding method

If you create a virtual machine from the OpenShift Virtualization web console wizard, select the required binding method from the Networking screen.

Networking fields

Name: Name for the Network Interface Card.

Model: Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network: List of available NetworkAttachmentDefinition objects.

Type: List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address: MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.
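
These fields correspond to settings in the virtual machine interface spec. For example, a minimal sketch that sets the model and MAC address explicitly (the MAC address value is illustrative):

domain:
  devices:
    interfaces:
      - name: default
        model: virtio
        masquerade: {}
        macAddress: 02:00:00:00:00:01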

Virtual machine configuration examples for the default network

Template: virtual machine configuration file

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: default
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #!/bin/bash
              echo "fedora" | passwd fedora --stdin

Template: Windows virtual machine instance configuration file

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-windows
  name: vmi-windows
spec:
  domain:
    clock:
      timer:
        hpet:
          present: false
        hyperv: {}
        pit:
          tickPolicy: delay
        rtc:
          tickPolicy: catchup
      utc: {}
    cpu:
      cores: 2
    devices:
      disks:
      - disk:
          bus: sata
        name: pvcdisk
      interfaces:
      - masquerade: {}
        model: e1000
        name: default
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        spinlocks:
          spinlocks: 8191
        vapic: {}
    firmware:
      uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
    machine:
      type: q35
    resources:
      requests:
        memory: 2Gi
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - name: pvcdisk
    persistentVolumeClaim:
      claimName: disk-windows
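
The pvcdisk volume in this template references an existing PersistentVolumeClaim named disk-windows that holds the Windows disk image. A minimal sketch of such a claim, assuming the cluster's default storage class and an illustrative size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-windows
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi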

Creating a service from a virtual machine

Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.

The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.

This procedure presents an example of how to create, connect to, and expose a Service object of type: ClusterIP as a virtual machine-backed service.

If a service type is not specified, ClusterIP is the default.

Procedure
  1. Edit the virtual machine YAML as follows:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key (1)
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
              - masquerade: {}
                name: default
            resources:
              requests:
                memory: 1024M
          networks:
            - name: default
              pod: {}
          volumes:
            - name: containerdisk
              containerDisk:
                image: kubevirt/fedora-cloud-container-disk-demo
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #!/bin/bash
                  echo "fedora" | passwd fedora --stdin
    1 Add the label special: key in the spec.template.metadata.labels section.

    Labels on a virtual machine are passed through to the pod. The labels on the VirtualMachine, for example special: key, must match the labels in the Service YAML selector attribute, which you create later in this procedure.
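
    After you start the virtual machine later in this procedure, you can confirm that the label was passed through to the pod, for example:

      $ oc get pods -n example-namespace -l special=key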

  2. Save the virtual machine YAML to apply your changes.

  3. Edit the Service YAML to configure the settings necessary to create and expose the Service object:

    apiVersion: v1
    kind: Service
    metadata:
      name: vmservice (1)
      namespace: example-namespace (2)
    spec:
      ports:
      - port: 27017
        protocol: TCP
        targetPort: 22 (3)
      selector:
        special: key (4)
      type: ClusterIP (5)
    1 Specify the name of the service you are creating and exposing.
    2 Specify a namespace in the metadata section of the Service YAML that matches the namespace you specify in the virtual machine YAML.
    3 Add targetPort: 22, which forwards service traffic to SSH port 22 on the virtual machine.
    4 In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
    5 In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
  4. Save the Service YAML to store the service configuration.

  5. Create the ClusterIP service:

    $ oc create -f <service_name>.yaml
  6. Start the virtual machine. If the virtual machine is already running, restart it.

  7. Query the Service object to verify it is available and is configured with type ClusterIP.

    Verification
    • Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.

      $ oc get service -n example-namespace
      Example output
      NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
      vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m
      • As shown in the output, the vmservice service is available.

      • The TYPE displays as ClusterIP, as you specified in the Service YAML.
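
      • Optionally, confirm that the service selected the virtual machine's pod by listing the service endpoints:

        $ oc get endpoints vmservice -n example-namespace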

  8. Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.

    1. Edit the virtual machine YAML as follows:

      apiVersion: kubevirt.io/v1alpha3
      kind: VirtualMachine
      metadata:
        name: vm-connect
        namespace: example-namespace
      spec:
        running: false
        template:
          spec:
            domain:
              devices:
                disks:
                  - name: containerdisk
                    disk:
                      bus: virtio
                  - name: cloudinitdisk
                    disk:
                      bus: virtio
                interfaces:
                - masquerade: {}
                  name: default
              resources:
                requests:
                  memory: 1024M
            networks:
              - name: default
                pod: {}
            volumes:
              - name: containerdisk
                containerDisk:
                  image: kubevirt/fedora-cloud-container-disk-demo
              - name: cloudinitdisk
                cloudInitNoCloud:
                  userData: |
                    #!/bin/bash
                    echo "fedora" | passwd fedora --stdin
    2. Run the oc create command to create a second virtual machine, where <file.yaml> is the name of the virtual machine YAML file:

      $ oc create -f <file.yaml>
    3. Start the virtual machine.

    4. Connect to the virtual machine by running the following virtctl command:

      $ virtctl -n example-namespace console <new-vm-name>

      For the LoadBalancer service type, use the vinagre client to connect to your virtual machine by using the public IP and port. External ports are dynamically allocated when you use the LoadBalancer service type.

    5. Run the ssh command to connect to the virtual machine, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:

      $ ssh fedora@172.30.3.149 -p 27017
      Verification
      • You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.
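
If you need to reach the virtual machine from outside the cluster instead, the same selector can back an external service. A minimal sketch of the example service rewritten with type: NodePort (port values are carried over from the example and remain illustrative):

apiVersion: v1
kind: Service
metadata:
  name: vmservice
  namespace: example-namespace
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22
  selector:
    special: key
  type: NodePort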
