You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.
The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. It is not enabled by default. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
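If the virtual machine image runs cloud-init, DHCP can be requested through cloud-init network data. The following sketch illustrates one way to do this; the volume name cloudinitdisk and the guest interface name eth0 are assumptions and may differ in your environment:

```yaml
# Hypothetical cloudInitNoCloud volume illustrating DHCP configuration.
# Adjust the interface name (eth0 here) to match your guest.
volumes:
- name: cloudinitdisk
  cloudInitNoCloud:
    networkData: |
      version: 2
      ethernets:
        eth0:
          dhcp4: true
```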
Edit the interfaces spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
      - name: default
        masquerade: {} (1)
        ports:
        - port: 80 (2)
  networks:
  - name: default
    pod: {}
1. Connect using masquerade mode.
2. Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65535. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
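To restrict incoming traffic to more than one port, list each port as a separate entry in the ports array. As an illustrative sketch (the port choices 80 and 8080 are examples only, not values from this procedure):

```yaml
# Hypothetical interfaces spec exposing two ports in masquerade mode.
interfaces:
- name: default
  masquerade: {}
  ports:
  - port: 80
  - port: 8080
```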
Create the virtual machine:
$ oc create -f <vm-name>.yaml
Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.
The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.
This procedure presents an example of how to create, connect to, and expose a Service object of type ClusterIP as a virtual machine-backed service.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ephemeral
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key (1)
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
1. Add the label special: key in the spec.template.metadata.labels section.
Labels on a virtual machine are passed through to the pod. The labels on the virtual machine, for example special: key, must match the labels in the selector attribute of the Service YAML.
Save the virtual machine YAML to apply your changes.
Edit the Service YAML to configure the settings necessary to create and expose the Service object:
apiVersion: v1
kind: Service
metadata:
  name: vmservice (1)
  namespace: example-namespace (2)
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22 (3)
  selector:
    special: key (4)
  type: ClusterIP (5)
1. Specify the name of the service you are creating and exposing.
2. Specify a namespace in the metadata section of the Service YAML that corresponds to the namespace you specify in the virtual machine YAML.
3. Add targetPort: 22, exposing the service on SSH port 22.
4. In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
5. In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
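For comparison, a NodePort variant of the same service might look like the following sketch. The name vmservice-nodeport is a hypothetical example, and the nodePort value 30000 is an arbitrary choice from the default 30000-32767 node port range; if you omit the nodePort field, the cluster assigns a port automatically.

```yaml
# Hypothetical NodePort variant of the vmservice example.
apiVersion: v1
kind: Service
metadata:
  name: vmservice-nodeport
  namespace: example-namespace
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22
    nodePort: 30000
  selector:
    special: key
  type: NodePort
```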
Save the Service YAML to store the service configuration.
Create the ClusterIP service:
$ oc create -f <service_name>.yaml
Start the virtual machine. If the virtual machine is already running, restart it.
Query the Service object to verify that it is available and is configured with type ClusterIP.
Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.
$ oc get service -n example-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m
As shown in the output, vmservice is running. The TYPE displays as ClusterIP, as you specified in the Service YAML.
Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-connect
  namespace: example-namespace
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
Run the oc create command to create a second virtual machine, where file.yaml is the name of the virtual machine YAML:
$ oc create -f <file.yaml>
Start the virtual machine.
Connect to the virtual machine by running the following virtctl command:
$ virtctl -n example-namespace console <new-vm-name>
Run the ssh command to authenticate the connection, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:
$ ssh fedora@172.30.3.149 -p 27017
You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.