Configuring IBM Secure Execution virtual machines on IBM Z and IBM LinuxONE

You can configure IBM® Secure Execution virtual machines (VMs) on IBM Z® and IBM® LinuxONE.

IBM® Secure Execution for Linux is a s390x security technology that is introduced with IBM® z15 and IBM® LinuxONE III. It protects data of workloads that run in a KVM guest from being inspected or modified by the server environment.

Hardware administrators, KVM administrators, and KVM code cannot access data in an IBM® Secure Execution guest VM.

Enabling VMs to run IBM Secure Execution on IBM Z and IBM LinuxONE

To enable IBM® Secure Execution virtual machines (VMs) on IBM Z® and IBM® LinuxONE on the compute nodes of your cluster, you must ensure that you meet the prerequisites and complete the following steps.

Prerequisites
  • Your cluster has logical partition (LPAR) nodes running on IBM® z15 or later, or IBM® LinuxONE III or later.

  • You have IBM® Secure Execution workloads available to run on the cluster.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. To run IBM® Secure Execution VMs, you must add the prot_virt=1 kernel parameter for each compute node. To enable all compute nodes, create a file named secure-execution.yaml that contains the following machine config manifest:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: secure-execution
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      kernelArguments:
        - prot_virt=1

    where:

    prot_virt=1

    Specifies that the ultravisor can store memory security information.

  2. Apply the changes by running the following command:

    $ oc apply -f secure-execution.yaml

    The Machine Config Operator (MCO) applies the changes and reboots the nodes in a controlled rollout.
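
    To confirm that the kernel argument was applied, one approach is to watch the worker machine config pool and then inspect the kernel command line on a node; <node_name> is a placeholder for one of your compute nodes:

    $ oc get mcp worker
    $ oc debug node/<node_name> -- chroot /host cat /proc/cmdline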

Launching an IBM Secure Execution VM on IBM Z and IBM LinuxONE

Before launching an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE, you must add the launchSecurity parameter to the VM manifest. Otherwise, the VM does not start correctly because it cannot access its devices.

Launching an IBM Secure Execution VM by using the CLI

You can launch an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE by using the command-line interface.

To launch IBM® Secure Execution VMs, you must include the launchSecurity parameter in the VirtualMachine manifest. The rest of the VM manifest depends on your setup.

Procedure
  • Apply a VirtualMachine manifest similar to the following example to the cluster:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: f41-se
      name: f41-se
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vm: f41-se
        spec:
          domain:
            launchSecurity: {}
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootfs
            machine:
              type: ""
            resources:
              requests:
                memory: 4Gi
          terminationGracePeriodSeconds: 0
          volumes:
            - name: rootfs
              dataVolume:
                name: f41-se

    where:

    spec.template.spec.domain.launchSecurity

    Specifies that hardware-based memory encryption is enabled for the VM.

    Because the memory of the VM is protected, you cannot live migrate IBM® Secure Execution VMs. The VMs can only be migrated offline.
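
    For example, assuming that you saved the manifest in a file named vm-f41-se.yaml (a hypothetical file name), you can apply it and verify that the VM instance starts by running commands similar to the following:

    $ oc apply -f vm-f41-se.yaml  # the file name is an example; use your manifest file
    $ oc get vmi f41-se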

Launching an IBM Secure Execution VM by using a common instance type

You can launch an IBM® Secure Execution VM on IBM Z® and IBM® LinuxONE by using a common instance type.

Prerequisites
  • You have followed the procedure described in "Creating a VM from an instance type by using the web console" and performed the required steps.

  • You are using an IBM® Secure Execution enabled VM image.

Procedure
  1. Navigate to Virtualization → Catalog in the web console.

  2. Click the Customize VirtualMachine button.

  3. Click the YAML tab, and include the launchSecurity: {} parameter in the YAML.

    spec:
      template:
        spec:
          domain:
            launchSecurity: {}
  4. Click Save.

  5. Click Create VirtualMachine.

Creating a bootable and encrypted IBM Secure Execution VM image on IBM Z and IBM LinuxONE

You can create a bootable and encrypted IBM Secure Execution VM image for Fedora on IBM Z and IBM LinuxONE.

Prerequisites
  • You are using an IBM® Secure Execution enabled VM image.

Procedure
  1. On a trusted instance, create the install.ks kickstart file in the /var/lib/libvirt/image/ directory with the following content:

    text
    lang en_US.UTF-8
    keyboard us
    network --bootproto=dhcp
    rootpw --plaintext <password>
    timezone <timezone>
    firewall --enabled
    selinux --enforcing
    bootloader --location=mbr
    reboot
    
    # Wipe and partition the disk
    clearpart --all --initlabel
    zerombr
    
    # /boot gets encrypted on post reboot
    part /boot --fstype ext4 --size=512 --label=boot
    # Root (/) is LUKS-encrypted
    part / --fstype xfs --size=3000 --pbkdf=pbkdf2 --encrypted --passphrase <passphrase>
    # SE (/se) is not encrypted; it holds the Secure Execution boot image.
    part /se --fstype xfs --size=512 --label=se
    # Packages
    %packages
    @core
    dracut
    s390-tools
    %end
  2. Create the QCOW2 disk image for the VM by running the following command:

    [trusted instance ~]$ qemu-img create -f qcow2 <path_to_qcow2_image> <size>G
  3. Run the virt-install command with the following parameters:

    [trusted instance ~]$ virt-install \
        --name <guest_vm_name> \
        --memory 4096 --vcpus 2 \
        --disk path=<path_to_qcow2_image>,format=qcow2,bus=virtio,cache=none \
        --location <path_to_os> \
        --initrd-inject=<path_to_kickstart_file> \
        --extra-args="inst.ks=file:/<kickstart_file_name> console=ttyS0 inst.text inst.noninteractive" \
        --os-variant=<os_variant> \
        --launchSecurity type=s390-pv \
        --graphics none
  4. Run the virsh start command to access the system console.
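
    For example, assuming the guest name that you specified with virt-install, a command similar to the following starts the VM and attaches to its console:

    [trusted instance ~]$ virsh start <guest_vm_name> --console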

  5. Run the sudo -s command to achieve root user privileges.

  6. Generate keyfiles for the root and the boot partition by running the following commands:

    [secure guest ~]$ mkdir -p /etc/luks
    [secure guest ~]$ chmod 700 /etc/luks
    [secure guest ~]$ dd if=/dev/urandom of=/etc/luks/root_keyfile.bin bs=1024 count=4
    [secure guest ~]$ dd if=/dev/urandom of=/etc/luks/boot_keyfile.bin bs=1024 count=4
    [secure guest ~]$ cryptsetup luksAddKey <root_partition_device> /etc/luks/root_keyfile.bin --pbkdf pbkdf2
  7. Obtain the LUKS device name and UUID by running the following command:

    $ lsblk -f
  8. Rename the existing fstab file to /etc/fstab_bak.
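
    For example, you can rename the file by running the following command:

    [secure guest ~]$ mv /etc/fstab /etc/fstab_bak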

  9. Create new crypttab and fstab files similar to the following examples:

    Crypttab example output:

    NAME    DEVICE                                       KEYFILE                      OPTIONS
    root    UUID=9cb04587-a670-458a-97eb-52fc0f4008ae    /etc/luks/root_keyfile.bin   luks

    Fstab example output:

    /dev/mapper/root /          xfs	  defaults 0 1
  10. Add the /se filesystem entry to the /etc/fstab file by running the following command:

    [secure guest ~]$ grep '/se' /etc/fstab_bak >> /etc/fstab
  11. Add entries to the initramfs by running the following commands:

    [secure guest ~]$ cat > /etc/dracut.conf.d/10-lukskey.conf <<'EOF'
    install_items+=" /etc/luks/root_keyfile.bin /etc/luks/boot_keyfile.bin "
    EOF
    [secure guest ~]$ dracut -f --regenerate-all
  12. Verify that the key files are present in initramfs by running the following command:

    [secure guest ~]$ lsinitrd /boot/initramfs-$(uname -r).img | grep -i luks
  13. LUKS encrypt the /boot volume.

    1. Change into the boot directory by running the following command:

      [secure guest ~]$ cd /boot
    2. Back up the existing boot volume content by running the following commands:

      [secure guest /boot ~]$ tar -cf /root/boot_backup.tar .
      [secure guest /boot ~]$ cd
      [secure guest ~]$ umount /boot
    3. Encrypt the boot volume by running the following commands:

      [secure guest ~]$ cryptsetup -q luksFormat <boot_partition> --key-file /etc/luks/boot_keyfile.bin
      [secure guest ~]$ cryptsetup luksOpen <boot_partition> boot --key-file /etc/luks/boot_keyfile.bin
    4. Create the file system by running the following command:

      [secure guest ~]$ mke2fs -t ext4 /dev/mapper/boot
    5. Obtain the boot UUID by running the following command:

      [secure guest ~]$ blkid -s UUID -o value <boot_partition>
    6. Add the boot partition with the key file to /etc/crypttab by running the following command:

      [secure guest ~]$ echo "boot <UUID> /etc/luks/boot_keyfile.bin luks" >> /etc/crypttab
    7. Add the mount entry to the fstab file by running the following command:

      [secure guest ~]$ echo "/dev/mapper/boot  /boot ext4 defaults 1 2" >> /etc/fstab
    8. Mount the boot volume by running the following command:

      [secure guest ~]$ mount /dev/mapper/boot /boot
    9. Change into the boot directory by running the following command:

      [secure guest ~]$ cd /boot
    10. Restore the boot backup file by running the following command:

      [secure guest /boot ~]$ tar -xvf /root/boot_backup.tar
  14. Set up SSH key login for the local user and disable password login and root login.
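
    A minimal sketch of this step, assuming a local user <user> with an existing public SSH key <public_ssh_key>, might look like the following; the drop-in file name 50-hardening.conf is an example, so adjust the commands for your environment:

    [secure guest ~]$ mkdir -p /home/<user>/.ssh && chmod 700 /home/<user>/.ssh
    [secure guest ~]$ echo "<public_ssh_key>" >> /home/<user>/.ssh/authorized_keys
    [secure guest ~]$ chmod 600 /home/<user>/.ssh/authorized_keys
    [secure guest ~]$ chown -R <user>:<user> /home/<user>/.ssh
    [secure guest ~]$ echo -e "PasswordAuthentication no\nPermitRootLogin no" | tee /etc/ssh/sshd_config.d/50-hardening.conf  # drop-in file name is an example
    [secure guest ~]$ systemctl restart sshd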

  15. Harden the security of the VM.

    1. To disable login on consoles by disabling serial and virtual TTYs, run the following commands:

      [secure guest ~]$ mkdir -p /etc/systemd/system/serial-getty@.service.d
      [secure guest ~]$ echo -e "[Unit]\nConditionKernelCommandLine=allowlocallogin" | tee /etc/systemd/system/serial-getty@.service.d/disable.conf
      [secure guest ~]$ mkdir -p /etc/systemd/system/autovt@.service.d
      [secure guest ~]$ echo -e "[Unit]\nConditionKernelCommandLine=allowlocallogin" | tee /etc/systemd/system/autovt@.service.d/disable.conf
    2. Disable debug, emergency, and rescue shells by running the following commands:

      [secure guest ~]$ systemctl mask emergency.service
      [secure guest ~]$ systemctl mask emergency.target
      [secure guest ~]$ systemctl mask rescue.service
      [secure guest ~]$ systemctl mask rescue.target
    3. Disable the virtio-rng device by running the following command:

      [secure guest ~]$ echo "blacklist virtio-rng" | tee /etc/modprobe.d/virtio-rng.conf
  16. Enable IBM Secure Execution for the guest.

    1. Copy the current command line to a file by running the following command:

      [secure guest ~]$ cat /proc/cmdline > parmfile
    2. Append the following parameters to the parmfile:

      loglevel=0 systemd.show_status=0 panic=0 crashkernel=196M swiotlb=262144
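
      One way to append them, writing the updated command line to a temporary file first so that the parmfile is not truncated while it is being read, is shown in the following sketch; parmfile.new is a name chosen for illustration:

      [secure guest ~]$ echo "$(cat parmfile) loglevel=0 systemd.show_status=0 panic=0 crashkernel=196M swiotlb=262144" > parmfile.new  # temporary file
      [secure guest ~]$ mv parmfile.new parmfile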
    3. Generate the IBM Secure Execution image on the /se partition by running the following command:

      [secure guest ~]$ genprotimg -i <image> \
                                   -r <ramdisk> \
                                   -p <parmfile> \
                                   -k </path/to/host-key-doc.crt> \
                                   --cert <ibm_signkey>  \
                                   -o /se/secure-linux.img

      where:

      <image>

      Specifies the original guest kernel image.

      <ramdisk>

      Specifies the original initial RAM file system.

      <parmfile>

      Specifies the file that contains the kernel parameters.

      </path/to/host-key-doc.crt>

      Specifies the public host key document.

      <ibm_signkey>

      Specifies the IBM Z® signing-key certificate and the DigiCert intermediate certificate for the verification of the host key documents.

    4. Update the boot configuration by running the following command:

      [secure guest ~]$ zipl -i /se/secure-linux.img -t /se
    5. Reboot the VM by running the following command:

      [secure guest ~]$ reboot
    6. Verify that the guest VM is secure by running the following command:

      [secure guest ~]$ cat /sys/firmware/uv/prot_virt_guest

      Example output:

      1

      The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0.
