Recently I decided to make Ubuntu 21.04 Linux my primary desktop operating system. One of the only things holding me back was a lack of support for my capture card, so I figured out how to use PCI Passthrough to run it inside a Windows Virtual Machine. While there are a few drawbacks, explained at the end of this post, you only need one monitor and one graphics card. This blog post is a tutorial which explains exactly how to do this. Please note that this is an advanced Linux topic. While I have made every effort to keep it as simple as possible, you will need a strong understanding of Linux configuration, be comfortable with the command line, and be prepared to problem solve. This blog post will not baby you, but there are other guides online which will. The /r/vfio subreddit is nice, and SomeOrdinaryGamers made a nice video as well (https://www.youtube.com/watch?v=BUSrdUoedTo). Copying and pasting configurations without modifying them for your specific system might not work and/or put your system into an unusable state requiring a reboot or hard reset. With those warnings out of the way, let’s get your computer set up to use your GPU and Capture Card inside of a VM 🥳🥳🥳

There is limited driver support for the Elgato 4K60 Pro (SC0710) on Linux

Steven Toth has been working on a reverse-engineered driver for the card, although it lacks support for newer Linux kernels at this time. The issue is being worked on and there is hope for the future. In the meanwhile, this work-around lets you run Linux on the desktop today, as long as you are okay using Windows when you need your capture card. If you would like to contribute to the driver, it is available on GitHub at https://github.com/stoth68000/sc0710. Because the community-made driver is still in an early state, the easier solution for now is to use the device inside a virtual machine. This comes with its own set of challenges; I’ve attempted to make it as easy as possible. Perhaps one day that’ll change and you won’t need this blog post.

Enable Hardware Virtualization in UEFI/BIOS and enable IOMMU Grouping

There will be a setting named something like Intel VT-x (or SVM Mode on AMD boards). On most computers it’s on by default now. Next, enable the IOMMU via your kernel parameters in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1"
or
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Then run sudo update-grub and reboot.
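After the reboot, you can sanity-check that hardware virtualization and the IOMMU are actually active. A quick sketch (the checks only rely on standard procfs/sysfs paths; exact output varies by vendor and kernel):

```shell
# Count how many CPU threads advertise hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V); 0 means it is off in firmware
grep -c -E '(vmx|svm)' /proc/cpuinfo || echo "hardware virtualization not advertised"

# If the IOMMU is enabled, this directory contains numbered groups
ls /sys/kernel/iommu_groups/ 2>/dev/null || echo "no IOMMU groups - re-check your GRUB parameters"
```

If the second command prints nothing or errors, go back and double-check the GRUB line before continuing.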

Install libvirt and virt-manager

To get started you need to install libvirt and virt-manager; you can find both in the Ubuntu Software Center. Afterwards, restart your computer.
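If you prefer the command line, the same software can be installed with apt. The package names below are the Ubuntu ones (ovmf provides the UEFI firmware the VM will use):

```shell
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager ovmf

# Let your user manage VMs without root; log out and back in afterwards
sudo usermod -aG libvirt "$USER"
```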

Create a Windows 10 Virtual Machine with virt-manager

You will need to install Windows 10 inside a Virtual Machine using virt-manager. There are plenty of articles explaining how to do this.

Configuring VFIO-PCI for the Elgato 4K60 PRO

Due to various limitations in how Linux treats this card, you need vfio-pci to reserve the device before any other driver can claim it. You can do this by creating a file named /etc/modprobe.d/vfio.conf. This, in my opinion, was the hardest part of the process.

You need two important pieces of information: the PCI alias and the hardware ID. To get the hardware ID, run lspci -vnn (it is important to include the -vnn flag to make sure you get the right information). You will see output for every device. Look for a section that includes something like this:

0a:00.0 Multimedia controller [0480]: YUAN High-Tech Development Co., Ltd. Device [12ab:0710]
Subsystem: Device [1cfa:000e]
Flags: fast devsel, IRQ 15, IOMMU group 25
Memory at fcc00000 (32-bit, non-prefetchable) [disabled] [size=1M]
Memory at fcd00000 (32-bit, non-prefetchable) [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [60] Express Endpoint, MSI 00
Capabilities: [100] Device Serial Number 00-00-00-00-00-00-00-00

Frustratingly, it is not labeled as an Elgato card. They’ve made this difficult, but that’s okay. Look at the first line: the text inside the square brackets at the end ([12ab:0710]) is the vendor:device pair, which is the hardware ID. Write this down as you’ll need it later. Next, get the PCI Alias by running cat /sys/bus/pci/devices/0000:0a:00.0/modalias, replacing 0a:00.0 with the xx:xx.x characters that appear before the words “Multimedia controller” in your own output. You will get output that looks like

pci:v000012ABd00000710sv00001CFAsd0000000Ebc04sc80i00
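If you want to double-check that the alias and the hardware ID agree, the vendor and device fields are embedded in the modalias itself. A small sketch that extracts them, using the alias above as sample input:

```shell
# Sample modalias from the output above; substitute your own
modalias="pci:v000012ABd00000710sv00001CFAsd0000000Ebc04sc80i00"

# The vendor ID follows "v0000" and the device ID follows "d0000"
vendor=$(printf '%s' "$modalias" | sed -E 's/^pci:v0000([0-9A-F]{4}).*/\1/')
device=$(printf '%s' "$modalias" | sed -E 's/.*d0000([0-9A-F]{4})sv.*/\1/')

# Lowercase them to get the lspci-style vendor:device pair
echo "$vendor:$device" | tr 'A-Z' 'a-z'   # prints 12ab:0710
```

The result should match the bracketed ID from lspci; if it doesn’t, you are looking at the wrong device.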

Use those two pieces of data to make a configuration file (/etc/modprobe.d/vfio.conf) that looks like this:

alias pci:v000012ABd00000710sv00001CFAsd0000000Ebc04sc80i00 vfio-pci
options vfio-pci ids=12ab:0710

Afterwards, run sudo update-initramfs -u -k all and reboot your system. This makes sure vfio-pci always gets a lock on the card first. Beyond this, there is no kernel driver for this capture card yet, which means you do not need start/stop scripts to switch which kernel driver is bound to the device. This makes the process much easier.
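After the reboot you can verify the reservation worked; lspci -k reports which kernel driver owns each device. A sketch (12ab:0710 is the hardware ID from earlier; adjust it if yours differs):

```shell
# Ask lspci which driver owns the capture card; empty means no driver bound
driver=$(lspci -nnk -d 12ab:0710 2>/dev/null | awk -F': ' '/Kernel driver in use/ {print $2}')

if [ "$driver" = "vfio-pci" ]; then
  echo "vfio-pci has claimed the capture card"
else
  echo "vfio-pci is NOT bound (driver in use: ${driver:-none}) - re-check /etc/modprobe.d/vfio.conf"
fi
```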

Configuring VFIO for your graphics card

Unlike the capture card, you do not need to edit /etc/modprobe.d/vfio.conf to pass your graphics card to a Virtual Machine. However, it is important to pass through all devices in the same IOMMU group to the Virtual Machine (more on this later). In rare cases you might want vfio-pci to attach to your GPU first, but I’ve found that it causes problems such as a locked display resolution, so only do this if you need to. Do not ignore the warning; it is explained below. If you do configure VFIO for your graphics card with modprobe, you must do it for all devices in its IOMMU group. A configuration would look something like this, but with the comments removed. And yes, I know it is a tedious process; there is not a better way at this time.

alias pci:v000012ABd00000710sv00001CFAsd0000000Ebc04sc80i00 vfio-pci
options vfio-pci ids=12ab:0710

#alias pci:v000010DEd00001E84sv00003842sd00003173bc03sc00i00 vfio-pci
#options vfio-pci ids=3842:3173

#alias pci:v000010DEd000010F8sv00003842sd00003173bc04sc03i00 vfio-pci
#options vfio-pci ids=10de:10f8

#alias pci:v000010DEd00001AD8sv00003842sd00003173bc0Csc03i30 vfio-pci
#options vfio-pci ids=10de:1ad8

#alias pci:v000010DEd00001AD9sv00003842sd00003173bc0Csc80i00 vfio-pci
#options vfio-pci ids=10de:1ad9

Check your devices’ IOMMU Groups

Look for your devices with ls /sys/kernel/iommu_groups/*/devices and note the IDs of every device in each relevant group. Under normal circumstances the capture card is already isolated in its own group. If a device is not isolated, you have to pass all of the devices in its group to the Virtual Machine. Use the device IDs to figure this out.
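The flat ls output is hard to read. A common trick is a small loop that prints each group with a human-readable lspci description per device (a sketch; it needs pciutils for lspci and prints nothing if the IOMMU is off):

```shell
#!/bin/sh
# Print every IOMMU group and the devices inside it
for group in /sys/kernel/iommu_groups/*; do
  [ -d "$group" ] || continue   # no groups: IOMMU is disabled or unsupported
  echo "IOMMU group ${group##*/}:"
  for dev in "$group"/devices/*; do
    # Translate the PCI address into a readable device description
    echo "    $(lspci -nns "${dev##*/}")"
  done
done
```

Every device printed under the same group number as your GPU or capture card must be passed through together.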

Configuring QEMU to use your Capture Card and Single GPU PCI Passthrough

You need your GPU to handle encoding while using the capture card. Your CPU cannot handle the data format; it will lag heavily and will not be usable. As a bonus, you get a near-native experience while using Windows inside a Virtual Machine.

Add your devices to QEMU with virt-manager

Open the VM’s hardware details in virt-manager, click Add Hardware, choose PCI Host Device, and add your Capture Card, GPU, and any other devices in their IOMMU Groups. Again, bad things will happen if you ignore IOMMU grouping.

In practice this means adding your capture card and then your GPU plus its HDMI Audio, USB Host controller, etc. (An NVIDIA RTX 2070 Super is split into four PCI devices; all four have to be attached or your VM will fail to boot and your system will be left in an unusable state.)

Next, add your physical keyboard and mouse as USB Host Devices, add a virtio Keyboard and Mouse, and remove the PS/2 ones. You are not using SPICE with this method, so the normal mouse integration is unavailable.
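virt-manager identifies USB devices by their vendor:product pair, which lsusb prints in its ID column. A sketch that pulls the pair out of an lsusb-style line (the sample line is illustrative; run lsusb yourself to find your real devices):

```shell
# A sample line in the format lsusb prints (illustrative device name)
line="Bus 001 Device 004: ID 1532:0043 Razer USA, Ltd Gaming Mouse"

# Extract the vendor:product pair that follows "ID "
ids=$(printf '%s\n' "$line" | sed -E 's/.*ID ([0-9a-f]{4}:[0-9a-f]{4}).*/\1/')
echo "$ids"   # prints 1532:0043
```

That pair is what ends up as vendor/product in the hostdev XML shown later in this post.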

Setting up audio is tricky. I found instructions at https://github.com/QaidVoid/Complete-Single-GPU-Passthrough#audio-passthrough but I passed through my USB Headset as a USB Host Device instead of routing it through PulseAudio.

Create your hook, start, and stop scripts to attach your GPU to the Virtual Machine

For this to work you have to unbind your GPU from Linux, which means stopping the display manager and any process bound to the GPU. When you shut down the Virtual Machine, you have to restart the display manager and rebind the GPU to Linux. It is extremely important that you do not copy and paste these scripts; modify them for your system. More changes will be needed if you use an AMD GPU. You need to edit the PCI IDs to match the GPU you are passing through, and edit the scripts to stop your display manager if you are not using GDM3. The capture card does not have a Linux driver, so you do not need start and stop scripts for it (yet).

Hook script:

This is your hook script. It will be stored at /etc/libvirt/hooks/qemu

#!/bin/bash

GUEST_NAME="$1"
HOOK_NAME="$2"
STATE_NAME="$3"
MISC="${@:4}"

BASEDIR="$(dirname $0)"

HOOKPATH="$BASEDIR/qemu.d/$GUEST_NAME/$HOOK_NAME/$STATE_NAME"
set -e # If a script exits with an error, we should as well.

if [ -f "$HOOKPATH" ]; then
    eval \""$HOOKPATH"\" "$@"
elif [ -d "$HOOKPATH" ]; then
    while read file; do
        eval \""$file"\" "$@"
    done <<< "$(find -L "$HOOKPATH" -maxdepth 1 -type f -executable -print;)"
fi

Start Script:

This is your start script. It will be stored at /etc/libvirt/hooks/qemu.d/win10/prepare/begin/start.sh

#!/bin/bash
set -x

# Stop display manager
systemctl stop gdm3
# rc-service xdm stop

# Unbind EFI Framebuffer
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

# Unload NVIDIA kernel modules
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# Unload AMD kernel module
# modprobe -r amdgpu

# Detach GPU devices from host
# Use your GPU and HDMI Audio PCI host device
virsh nodedev-detach pci_0000_04_00_0
virsh nodedev-detach pci_0000_04_00_1
virsh nodedev-detach pci_0000_04_00_2
virsh nodedev-detach pci_0000_04_00_3

# Load vfio module
modprobe vfio-pci

Stop Script:

This is your stop script. It will be stored at /etc/libvirt/hooks/qemu.d/win10/release/end/stop.sh

#!/bin/bash
set -x

# Unload vfio module
modprobe -r vfio-pci

# Attach GPU devices to host
# Use your GPU and HDMI Audio PCI host device
virsh nodedev-reattach pci_0000_04_00_0
virsh nodedev-reattach pci_0000_04_00_1
virsh nodedev-reattach pci_0000_04_00_2
virsh nodedev-reattach pci_0000_04_00_3

# Rebind framebuffer to host
echo "efi-framebuffer.0" > /sys/bus/platform/drivers/efi-framebuffer/bind

# Load NVIDIA kernel modules
modprobe nvidia_drm
modprobe nvidia_modeset
modprobe nvidia_uvm
modprobe nvidia

# Load AMD kernel module
# modprobe amdgpu

# Restart Display Manager
systemctl start gdm3
# rc-service xdm start

# Log you out so you can create a new session (replace "catgirl" with your username)
pkill -KILL -u catgirl

Because you stopped your display manager, your old session is inaccessible. The line at the end logs you out and terminates the session’s open processes. Any unsaved data will be lost.
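One easy thing to miss: libvirt only runs hook scripts that are marked executable, so set the bit on all three after creating them (paths assume the layout above):

```shell
sudo chmod +x /etc/libvirt/hooks/qemu \
  /etc/libvirt/hooks/qemu.d/win10/prepare/begin/start.sh \
  /etc/libvirt/hooks/qemu.d/win10/release/end/stop.sh

# libvirt discovers the hook on daemon startup, so restart it once
sudo systemctl restart libvirtd
```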

Patch your GPU vBIOS if necessary

Some GPUs do not work out of the box with vfio-pci. I found instructions at https://github.com/QaidVoid/Complete-Single-GPU-Passthrough#vbios-patching on how to patch the vBIOS.

Hide the fact you are running a Virtual Machine from Windows

Some video card drivers (notably NVIDIA’s, which historically reported Error 43) refuse to load when they detect a Virtual Machine. To get around this you will need to add the following to your Virtual Machine’s XML Configuration:

<features>
  ...
  <hyperv>
    ...
    <vendor_id state='on' value='whatever'/>
    ...
  </hyperv>
   <kvm>
    <hidden state='on'/>
  </kvm>
  ...
</features>

Remove SPICE Devices

Remove Channel Spice, Display Spice, Video QXL, Sound ich*, and any other unnecessary devices. You don’t need them when using GPU Passthrough, and they may cause Windows to detect multiple monitors when you can only see one. Remove all of them to be on the safe side.

Final configuration

Your Virtual Machine’s final configuration file should look something like this.

<domain type='kvm'>
  <name>win10</name>
  <uuid>6933a954-2ba6-4b7e-89d2-da966efbdbea</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>16</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='partial' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/win10.qcow2'/>
      <target dev='sda' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/catgirl/Downloads/virtio-win-0.1.208.iso'/>
      <target dev='sdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x19'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x1a'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:77:fe:72'/>
      <source network='default'/>
      <model type='e1000e'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='virtio'>
      <address type='pci' domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
    </input>
    <input type='keyboard' bus='virtio'>
      <address type='pci' domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x1b3d'/>
      </source>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0043'/>
      </source>
      <address type='usb' bus='0' port='5'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Start your Virtual Machine

Windows will have to configure itself, so it may take a minute or two before you get graphical output the first time. Future boots take about 30 seconds for your GPU to detach from Linux and attach to Windows. Once the process is complete you will see a login screen just as if Windows 10 were installed natively. To exit back to Linux you have to shut down Windows. Certain KVM features such as snapshots or suspend may not work properly.

You will need to install Elgato’s 4K Capture Utility or OBS to use your Capture Card.

Caveats

Unfortunately, after you exit the Virtual Machine there is no way I’m aware of to restore the GDM3 session. The stop script logs out your Linux session, so you will have to sign back in; any unsaved work could be lost and downloads, etc. will be stopped. With Single GPU Passthrough there is no way to access your Linux applications while inside the Virtual Machine. Perhaps in the future we will see a community-made Virtual GPU Driver or something. Additionally, if your guest kernel panics (blue screen of death, BSOD) you will only be able to regain access to Linux with a hard reset. I recommend installing an SSH client on your phone and having a script set up to force-stop the Virtual Machine and manually run your stop script (this is your homework ;) ) or do a safe reboot.