Guide to GPU Passthrough (Arch-based)

Table of Contents
- Introduction
- Before we start...
- KVM/VFIO
- What you need:
- My specs:
- Things to keep in mind:
- Getting Started [Hardware]
- Basic Hardware Topology
- Getting Started [Software]
- Update System
- Check IOMMU Grouping
- ACS Kernel Patch (simplest method)
- Grub Configuration
- Confirm IOMMU Grouping
- Isolating PCI Devices
- (Optional) Fixing audio kernel driver
- Preparing QEMU
- Installation
- Adding your user to libvirt group
- Enabling UEFI Support (VERY IMPORTANT!)
- Installing Windows 10
- Deploying Virtual Machine
- Configuring Virtual Machine
- Installing the OS
- OS Configuration (Post-Install)
- XML Configuration
- Final Configurations
- Mouse support
- Monitor output
- Game on!
If you're on a mobile device, keep in mind the code blocks will not render properly, resulting in "line wrapping" of code. I'd recommend viewing this on a PC or at least with "desktop mode" enabled on your device's browser.
I'm going to be assuming that you understand the very basics of Linux file manipulation, running scripts, etc. I won't be explaining all the basic things like how to open files in a terminal, how to save a Bash script and make it executable or how to run it. I'm going to tell you what you need, explain why and give you the code. It's up to you to plug it in and do your thing. If that sort of thing is outside your scope, you're gonna have a bad time with this.
VFIO, or Virtual Function I/O, is a technology in the Linux kernel which exposes hardware devices to userspace in a secure, IOMMU-protected environment. So in the simplest of terms, VFIO allows you to pass your physical hardware directly to a virtual machine, rather than forcing your VM to emulate a particular type of hardware. Done properly, this yields near bare-metal performance for most applications (particularly gaming). We're talking upwards of 95-98% of native performance with the right configuration.
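Before going any further, it's worth making sure your CPU actually exposes the hardware virtualization extensions (AMD-V or VT-x) that all of this depends on. This quick check isn't part of the passthrough setup itself, just a sanity test you can run from any terminal:

# Counts the AMD-V (svm) or VT-x (vmx) CPU flags; any number greater than 0 means
# hardware virtualization is supported (you still need to enable it in your firmware).
grep -E -c '(svm|vmx)' /proc/cpuinfo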
I'll be writing this guide assuming you're running either Arch or Manjaro. It is entirely possible to make this work on any other distro, but Arch is what I'm most comfortable with. Just replace the distro-specific commands with the matching commands for your build (i.e., pacman vs. apt).
At this point, I went ahead and installed my second GPU. Might be worth noting that my primary monitor has multiple HDMI and DP ports to work with.
There are quite a few ways you can do this. You can set things up like I have demonstrated here, you can use a dummy plug on your guest GPU and just set up something like Looking Glass to read the frame buffer from memory, etc, etc. Having experimented a bit, it's just easier for me to do things this way. YMMV.
Note: From this point forward, I'm using my host GPU to give me output on monitor #1 and #2. The guest GPU is connected to monitor #1 but I have the source set to HDMI port #1 for the majority of this article. This will make more sense as we go along.
sudo pacman -Syu
This is perhaps the most important step. You absolutely have to be certain that your IOMMU grouping is perfect. If it's not, you will accomplish nothing and your VM will likely refuse to accept any passthrough devices.
Here is a simple Bash script that will give you all the valuable IOMMU information you seek.
#!/bin/bash
# Walk every device the kernel has placed in an IOMMU group and
# print the group number alongside its lspci description.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
The output of this script should look similar to this.
[john@atlas src]$ ./iommu.sh
IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 10 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 10 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 11 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 11 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 11 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 11 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 11 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 11 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 11 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 11 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 12 03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller [1022:43d5] (rev 01)
IOMMU Group 13 03:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller [1022:43c8] (rev 01)
IOMMU Group 14 03:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge [1022:43c6] (rev 01)
IOMMU Group 15 20:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU Group 16 20:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU Group 17 20:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU Group 18 22:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
IOMMU Group 19 25:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] [10de:21c4] (rev a1)
IOMMU Group 1 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 20 25:00.1 Audio device [0403]: NVIDIA Corporation TU116 High Definition Audio Controller [10de:1aeb] (rev a1)
IOMMU Group 21 25:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1aec] (rev a1)
IOMMU Group 22 25:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU116 [GeForce GTX 1650 SUPER] [10de:1aed] (rev a1)
IOMMU Group 23 26:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208B [GeForce GT 710] [10de:128b] (rev a1)
IOMMU Group 24 26:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
IOMMU Group 25 27:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
IOMMU Group 26 27:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 27 27:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller [1022:145f]
IOMMU Group 28 28:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:1455]
IOMMU Group 29 28:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 2 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 30 28:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]
IOMMU Group 3 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 4 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 5 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 6 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 7 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 8 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 9 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
You'll more than likely have fewer groups than this and will need to patch your kernel for better IOMMU support. We'll get to that in a moment. For right now, you just need to take note of the hardware IDs for your intended passthrough GPU and its associated audio device. For example, since I'm planning on passing my 1660S, I would take note of the IDs 10de:21c4 and 10de:1aeb as seen above.
Notice that they belong to different groups? If yours belong to the same group, you need to patch your kernel.
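If you'd rather spot-check just the card you intend to pass through instead of eyeballing the whole dump, a couple of one-liners will do it. The 0000:25:00.x addresses below are from my system, so substitute your own bus addresses from the lspci output:

# Show the vendor:device IDs for every Nvidia function on the bus
lspci -nn | grep -i nvidia

# Print the IOMMU group that the GPU's video and audio functions landed in
readlink /sys/bus/pci/devices/0000:25:00.0/iommu_group
readlink /sys/bus/pci/devices/0000:25:00.1/iommu_group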
mkdir build && cd build
git clone https://aur.archlinux.org/linux-vfio.git
cd linux-vfio
makepkg -si
This should be pretty straightforward, albeit rather time consuming. Grab a beer and relax, it's gonna take a minute. Once the patched kernel has finished compiling and installing, you're good to go for now.
Assuming you're using Grub, go ahead and edit your Grub config (/etc/default/grub
) by adding (not replacing!) the following values to GRUB_CMDLINE_LINUX_DEFAULT
amd_iommu=on iommu=pt isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7 transparent_hugepage=never rd.driver.pre=vfio-pci vfio-pci.ids=10de:21c4,10de:1aeb pcie_acs_override=downstream,multifunction
If you're on Intel hardware, replace amd_iommu with intel_iommu and you're good to go.
Notice the vfio-pci.ids parameter? Remember our IOMMU script? The two IDs found here reflect the two IDs I told you to take note of earlier. Be sure to put your own PCI IDs here.
isolcpus - allows the kernel to isolate CPU cores from the host so they can be used exclusively by the VM
rcu_nocbs - allows you to move all RCU offload threads to the housekeeping cores
transparent_hugepage=never - disables transparent huge pages, allowing for faster memory access for the VM
pcie_acs_override - enables our ACS kernel patch from earlier

Once you've got all these options set, go ahead and commit the changes.
sudo grub-mkconfig -o /boot/grub/grub.cfg
Reboot.
Assuming you were able to reboot with no issues, now would be a good time to run the IOMMU script again and verify that your grouping is 100% solid. You should now see your GPUs/audio devices/etc all residing in separate groups, similar to my results above. If not, confirm that your patch was built properly and that you have remembered to pass pcie_acs_override in your Grub config.
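A few extra sanity checks I'd suggest after this reboot, none of which are specific to my hardware:

# Confirm you actually booted the patched linux-vfio kernel
uname -r

# Confirm the kernel brought the IOMMU up at boot
sudo dmesg | grep -i -e DMAR -e IOMMU

# Confirm the isolcpus= range took effect (should print 2-7 with the values above)
cat /sys/devices/system/cpu/isolated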
This is where things can get a little tricky. You have to tell your host OS to leave your intended passthrough device alone and to only load up the vfio-pci kernel drivers for it. What you want to see when running lspci -nnk is something like this...
25:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] [10de:21c4] (rev a1)
Subsystem: ASUSTeK Computer Inc. TU116 [GeForce GTX 1660 SUPER] [1043:8756]
Kernel driver in use: vfio-pci
Kernel modules: nouveau
25:00.1 Audio device [0403]: NVIDIA Corporation TU116 High Definition Audio Controller [10de:1aeb] (rev a1)
Subsystem: ASUSTeK Computer Inc. TU116 High Definition Audio Controller [1043:8756]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
Note the Kernel driver in use output. What you'll likely see here is either the Nvidia proprietary driver or the nouveau driver loaded up on the device. We don't want that here. The way I fixed this was by creating /etc/modprobe.d/vfio.conf and adding the following to the file. Be sure to use your own device IDs here.
softdep nouveau pre: vfio-pci
options vfio-pci ids=10de:21c4,10de:1aeb
Then, I modified /etc/mkinitcpio.conf to contain the following.
MODULES=(vfio vfio_iommu_type1 vfio_pci vfio_virqfd)
Once these files were modified and saved, I ran sudo mkinitcpio -g /boot/linux-custom.img to regenerate initramfs and rebooted the system.
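To spot-check the result without scrolling through the full lspci -nnk output, you can query the two functions directly. Again, 25:00.0 and 25:00.1 are my card's addresses, so use yours:

# Show the kernel driver currently bound to the GPU's video and audio functions
lspci -nnk -s 25:00.0
lspci -nnk -s 25:00.1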
Assuming your passthrough GPU VGA device is now showing vfio-pci as the kernel driver, you may still see something other than vfio-pci in use on your planned GPU passthrough audio device. If that's the case for you, here's how I managed to fix it. Simple Bash script to the rescue.
#!/bin/bash
############################################################
# This is the GPU VGA device on my system
# 25:00.0 / 10de 21c4 (GPU Video)
#
# This is the GPU audio device on my system
# 25:00.1 / 10de 1aeb (GPU Audio)
############################################################
# You should be able to work out your own params via lspci
# and make the changes here by looking at what I've used.
############################################################
echo "Replacing current audio driver with VFIO driver..."
# We are kicking the old driver out and forcing it to use
# the proper vfio driver on the fly. Note that a plain
# "sudo echo foo > /sys/..." won't work because the redirect
# happens as your user, so we pipe through sudo tee instead.
echo "10de 1aeb" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
echo "0000:25:00.1" | sudo tee /sys/bus/pci/devices/0000:25:00.1/driver/unbind
echo "0000:25:00.1" | sudo tee /sys/bus/pci/drivers/vfio-pci/bind
echo "10de 1aeb" | sudo tee /sys/bus/pci/drivers/vfio-pci/remove_id
echo "All done!"
After running this, just check again and you should see it using the correct vfio-pci driver. You should only have to run this script once, as the changes persist through reboots.
Go ahead and use pacaur to grab everything we need.
pacaur -S qemu libvirt edk2-ovmf virt-manager
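Assuming libvirtd isn't already running on your machine, now is a good time to enable and start it so virt-manager has something to connect to later:

# Enable libvirtd at boot and start it right away
sudo systemctl enable --now libvirtd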
sudo gpasswd -a yourusername libvirt
or
sudo usermod -a -G libvirt yourusername
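Keep in mind that group changes only take effect on a new login, so log out and back in (or just reboot), then confirm the group actually stuck:

# "libvirt" should appear in this list after you log back in
groups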
This is critical for GPU passthrough to work properly. It will NOT work in BIOS mode, period. You must install your Windows 10 VM using UEFI mode due to the way Nvidia handles VMs. If Nvidia notices we're in a VM, the passthrough device will fail to function. We can only hide the fact that we're in a VM when we use UEFI so it's imperative that we get this right.
On Arch, we store our OVMF files at /usr/share/ovmf/x64. Be sure to ls this directory and you should see a few files in *.fd format. If not, confirm that you have the edk2-ovmf package installed.
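If you're not sure where your install put the firmware images, asking pacman is the easiest way to find them. Paths can shift a bit between package versions, so trust whatever this prints over the exact path I've quoted:

# List the UEFI firmware images shipped by the edk2-ovmf package
pacman -Ql edk2-ovmf | grep '\.fd$'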
Open /etc/libvirt/qemu.conf and add the following. This will add UEFI firmware support to QEMU.
nvram = [
"/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
Once you have that added, restart the libvirtd service.
sudo systemctl restart libvirtd && sudo systemctl status libvirtd
Make sure you check that the status is good to go. If you see any errors regarding nvram, go back and check your syntax in qemu.conf and be sure it matches my syntax above.
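libvirt also ships a self-check that will flag missing IOMMU support, missing KVM acceleration and similar problems. It's not required, but it's a nice last look before we start building the VM:

# Audit the host for QEMU/KVM readiness; investigate any FAIL or WARN lines
sudo virt-host-validate qemu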
1. Open virt-manager
2. Click New Virtual Machine
3. Select Local install media (ISO image or CDROM)
4. Browse to your Windows 10 ISO
5. Allocate CPU and Memory
6. Ensure Enable storage for this virtual machine and Create a disk image for the virtual machine are checked
7. DO NOT CLICK FINISH yet - check Customize configuration before install first, then click Finish to open the configuration screen
Now we'll get the VM configured the way we want for GPU passthrough. Assuming you're wanting to install directly to an SSD, begin at step 1. Otherwise, begin at step 2.
1. Pass SSD device directly to VM
2. Add the VirtIO driver ISO (virtio-win-0.1.171.iso) you just downloaded
3. Networking via VirtIO
4. Boot Options
5. Chipset/Firmware
6. CPU Configuration
   - Select CPUs on the left pane
   - Current allocation: set this to the # of cores you told the wizard. In my case, it's 12.
   - Ensure Copy host CPU configuration is not selected.
   - Ensure Enable available CPU security flaw mitigations is not selected.
   - Set Model to host-passthrough (you'll have to type it in directly)
   - Under Topology, assuming you have 12 "cores" allocated, set Sockets to 1, Cores to 6 and Threads to 2 (6*2=12). Change the cores/threads to equal the amount of allocated "cores"; for example, if it's 8 cores then Cores = 4 and Threads = 2 (4*2=8). I may be off-base here so correct me if I'm wrong, but so far the performance has been nothing short of stellar with these settings.
7. Add PCI Passthrough Devices
8. [OPTIONAL] Add USB game controllers
9. Begin Installation
Click the Begin Installation button at the top left of the window. Give it a minute to complete.
The install process is exactly the same as installing on bare metal. The only catch here is that we're (assuming you are, if not just do step 3.1) installing the OS to a passed through SSD. When you get to the screen where you would normally select your install partitions, you'll have nothing listed. To get the drive to show up...
1. Click Load Driver
2. Browse to the VirtIO ISO file, typically mounted to E:\
3. Drill down through viostor > w10 > amd64 and let it run from there

Once complete, you should see your SSD and can continue the install as normal.
Once you have the OS installed, the first thing you need to do is open up File Explorer, browse to the VirtIO ISO (again, likely mounted at E:\) and install the Guest Agent Service.
Once that completes, open up Device Manager and begin installing the VirtIO drivers on every device with a yellow triangle. Just right click > Update Driver > Browse computer > Browse button > Select the VirtIO disc. Make sure "include subfolders" is checked and run through the prompts.
Only thing to note here is you should have two display devices. One will allow the install of the Red Hat QXL driver and the other will not. The one that doesn't is your passed through GPU.
Go ahead and get those VirtIO drivers installed and then shutdown the VM.
Open up your VM's XML config with the following command, substituting vmname with the name you gave your VM in QEMU. These steps will allow the Nvidia drivers to install by hiding the fact we're operating inside a VM.
sudo EDITOR=nano virsh edit vmname
Somewhere inside the <features> tag, add the following parameters.
<kvm>
  <hidden state='on'/>
</kvm>
Inside the <hyperv> tags, add the following parameters.
<vendor_id state='on' value='whatever'/>
Here's my XML config for reference, excluding the devices section for brevity.
<domain type='kvm'>
  <name>win10</name>
  <uuid>56c1d23e-7b46-4566-8c91-d354e7f831f9</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <vcpu placement='static'>12</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2-ovmf/x64/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' dies='1' cores='6' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
</domain>
At this point, you're pretty much done. Just load up the VM and install the video drivers from Nvidia's website and reboot. Install all your apps/games/etc like normal.
Go ahead and open your VM's configuration screen and remove the "tablet" device. By doing this, you can click inside the running VM window and your mouse is captured for good. evdev is another alternative that you can look into.
To release the mouse back to the host, just tap L CTRL + L ALT (Left Control/Left Alt) and your mouse is now free to move on the host.
Now that you've got all the needed drivers installed and your monitor is connected to the GPU that's being passed through, open up Device Manager inside the VM one more time. Under Display adapters, right click on the QXL device and click Disable. As soon as you do this, your little window into your VM on the host is no longer gonna work. Change the input source on your monitor to reflect the port/device that the guest GPU is connected to (in my case HDMI-2). As soon as you do this, boom, you're on your guest and everything should work as intended!
To get back to the host, just change the input back to whatever your host's GPU is using, press L CTRL + L ALT to free the mouse again and you're good to go.
Alright, now here's the fun part. Go do some gaming! You should see performance around the 95% mark, maybe a little more or less depending on your exact build/config. I'm running just a few FPS shy of what I was getting natively, so the difference is nowhere near enough to "notice" that I'm inside a VM. You should be fine.
Enjoy!