You can use the default Pod network with OpenShift Virtualization. To do so, you must use the masquerade binding method. It is the only recommended binding method for use with the default Pod network. Do not use masquerade mode with non-default networks. For secondary networks, use the bridge binding method.
The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. It is not enabled by default. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the Pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
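Because masquerade mode relies on DHCP for IPv4 address assignment, the guest interface must request an address over DHCP. If your guest image does not do this by default, a cloudInitNoCloud networkData section is one way to enable it. The following is a minimal sketch; the interface name eth0 is an assumption and may differ in your guest image:

```yaml
# Hypothetical cloudInitNoCloud volume fragment that enables DHCP in the
# guest; "eth0" is an assumed interface name, adjust it for your image.
volumes:
- name: cloudinitdisk
  cloudInitNoCloud:
    networkData: |
      version: 2
      ethernets:
        eth0:
          dhcp4: true
```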
Edit the interfaces spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
domain:
devices:
interfaces:
- name: red
masquerade: {} (1)
ports:
- port: 80 (2)
networks:
- name: red
pod: {}
(1) Connect using masquerade mode.
(2) Allow incoming traffic on port 80.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
If you create a virtual machine from the OpenShift Virtualization web console wizard, select the required binding method from the Networking screen.
Name | Description
---|---
Name | Name for the Network Interface Card.
Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.
Network | List of available NetworkAttachmentDefinition objects.
Type | List of available binding methods. For the default Pod network, masquerade is the only recommended binding method.
MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.
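For comparison with masquerade mode on the default Pod network, an interface bound to a secondary network with the bridge method might look like the following sketch. The network name red and the NetworkAttachmentDefinition name red-net are assumptions for illustration:

```yaml
# Hypothetical secondary-network interface using the bridge binding
# method; "red" and "red-net" are assumed names.
interfaces:
- name: red
  bridge: {}
networks:
- name: red
  multus:
    networkName: red-net
```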
The following example shows a virtual machine that connects to the default Pod network in masquerade mode:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
name: example-vm
namespace: default
spec:
running: false
template:
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
The following example shows a Windows virtual machine instance that connects to the default Pod network in masquerade mode:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
labels:
special: vmi-windows
name: vmi-windows
spec:
domain:
clock:
timer:
hpet:
present: false
hyperv: {}
pit:
tickPolicy: delay
rtc:
tickPolicy: catchup
utc: {}
cpu:
cores: 2
devices:
disks:
- disk:
bus: sata
name: pvcdisk
interfaces:
- masquerade: {}
model: e1000
name: default
features:
acpi: {}
apic: {}
hyperv:
relaxed: {}
spinlocks:
spinlocks: 8191
vapic: {}
firmware:
uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
machine:
type: q35
resources:
requests:
memory: 2Gi
networks:
- name: default
pod: {}
terminationGracePeriodSeconds: 0
volumes:
- name: pvcdisk
persistentVolumeClaim:
claimName: disk-windows
Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.
The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.
This procedure presents an example of how to create, connect to, and expose a Service object of type ClusterIP as a virtual machine-backed service.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
name: vm-ephemeral
namespace: example-namespace
spec:
running: false
template:
metadata:
labels:
special: key (1)
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
(1) Add the label special: key in the spec.template.metadata.labels section.
Labels on a virtual machine are passed through to the pod.
Save the virtual machine YAML to apply your changes.
Edit the Service
YAML to configure the settings necessary to create and expose the Service
object:
apiVersion: v1
kind: Service
metadata:
name: vmservice (1)
namespace: example-namespace (2)
spec:
ports:
- port: 27017
protocol: TCP
targetPort: 22 (3)
selector:
special: key (4)
type: ClusterIP (5)
(1) Specify the name of the service you are creating and exposing.
(2) Specify the namespace in the metadata section of the Service YAML. It must correspond to the namespace that you specify in the virtual machine YAML.
(3) Add targetPort: 22, exposing the service on SSH port 22.
(4) In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
(5) In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
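For example, a NodePort variant of the same service might look like the following sketch. The nodePort value 30000 is an assumption for illustration; omit the field to let the cluster assign a port from its default range:

```yaml
# Hypothetical NodePort variant of the vmservice example;
# nodePort 30000 is an assumed value within the default range.
apiVersion: v1
kind: Service
metadata:
  name: vmservice
  namespace: example-namespace
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22
    nodePort: 30000
  selector:
    special: key
  type: NodePort
```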
Save the Service YAML to store the service configuration.
Create the ClusterIP service:
$ oc create -f <service_name>.yaml
Start the virtual machine. If the virtual machine is already running, restart it.
Query the Service object to verify that it is available and is configured with type ClusterIP.
Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.
$ oc get service -n example-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m
As shown in the output, vmservice is running. The TYPE displays as ClusterIP, as you specified in the Service YAML.
Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
name: vm-connect
namespace: example-namespace
spec:
running: false
template:
spec:
domain:
devices:
disks:
- name: containerdisk
disk:
bus: virtio
- name: cloudinitdisk
disk:
bus: virtio
interfaces:
- masquerade: {}
name: default
resources:
requests:
memory: 1024M
networks:
- name: default
pod: {}
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-container-disk-demo
- name: cloudinitdisk
cloudInitNoCloud:
userData: |
#!/bin/bash
echo "fedora" | passwd fedora --stdin
Run the oc create command to create a second virtual machine, where file.yaml is the name of the virtual machine YAML:
$ oc create -f <file.yaml>
Start the virtual machine.
Connect to the virtual machine by running the following virtctl command:
$ virtctl -n example-namespace console <new-vm-name>
Run the ssh command to authenticate the connection, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:
$ ssh fedora@172.30.3.149 -p 27017
You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.