Guide to GPU Passthrough (Arch-based)

Table of Contents
- Introduction
  - Before we start...
  - KVM/VFIO
  - What you need:
  - My specs:
  - Things to keep in mind:
- Getting Started [Hardware]
  - Basic Hardware Topology
- Getting Started [Software]
  - Update System
  - Check IOMMU Grouping
  - ACS Kernel Patch (simplest method)
  - Grub Configuration
  - Confirm IOMMU Grouping
  - Isolating PCI Devices
  - (Optional) Fixing audio kernel driver
- Preparing QEMU
  - Installation
  - Adding your user to libvirt group
  - Enabling UEFI Support (VERY IMPORTANT!)
- Installing Windows 10
  - Deploying Virtual Machine
  - Configuring Virtual Machine
  - Installing the OS
  - OS Configuration (Post-Install)
  - XML Configuration
- Final Configurations
  - Mouse support
  - Monitor output
- Game on!
If you're on a mobile device, keep in mind the code blocks will not render properly, resulting in "line wrapping" of code. I'd recommend viewing this on a PC or at least with "desktop mode" enabled on your device's browser.
I'm going to be assuming that you understand the very basics of Linux file manipulation, running scripts, etc. I won't be explaining all the basic things like how to open files in a terminal, how to save a Bash script and make it executable or how to run it. I'm going to tell you what you need, explain why and give you the code. It's up to you to plug it in and do your thing. If that sort of thing is outside your scope, you're gonna have a bad time with this.
VFIO, or Virtual Function I/O, is a technology in the Linux kernel which exposes hardware devices inside the userspace in a secure IOMMU protected environment. So in the simplest of terms, VFIO allows you to pass your physical hardware directly to a virtual machine, rather than forcing your VM to emulate a particular type of hardware. Done properly, this yields near bare-metal performance for most applications (particularly gaming). We're talking upwards of 95-98% of spec with the right configuration.
I'll be writing this guide assuming you're running either Arch or Manjaro. It is entirely possible to make this work on any other distro, but Arch is what I'm most comfortable with. Just replace the distro-specific commands with the matching commands for your build (i.e., pacman vs apt).
At this point, I went ahead and installed my second GPU. Might be worth noting that my primary monitor has multiple HDMI and DP ports to work with.
There are quite a few ways you can do this. You can set things up like I have demonstrated here, you can use a dummy plug on your guest GPU and just set up something like Looking Glass to read the frame buffer from memory, etc, etc. Having experimented a bit, it's just easier for me to do things this way. YMMV.
Note: From this point forward, I'm using my host GPU to give me output on monitor #1 and #2. The guest GPU is connected to monitor #1 but I have the source set to HDMI port #1 for the majority of this article. This will make more sense as we go along.
sudo pacman -Syu
This is perhaps the most important step. You absolutely have to be certain that your IOMMU grouping is perfect. If it's not, you will accomplish nothing and your VM will likely refuse to accept any passthrough devices.
Here is a simple Bash script that will give you all the valuable IOMMU information you seek.
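Something like this will do the trick. It walks /sys/kernel/iommu_groups and prints each device with lspci -nns; wrapping the loop in a function is just my convenience here so the base path can be overridden.

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices inside it.
shopt -s nullglob

list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local group device
    for group in "$base"/*; do
        echo "IOMMU Group ${group##*/}:"
        for device in "$group"/devices/*; do
            # lspci -nns prints the device description plus its [vendor:device] IDs
            echo -e "\t$(lspci -nns "${device##*/}")"
        done
    done
}

list_iommu_groups "$@"
```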
The output of this script should look similar to this.
You'll more than likely have fewer groups than this and will need to patch your kernel for better IOMMU support. We'll get to that in a moment. For right now, you just need to take note of the hardware IDs for your intended passthrough GPU and its associated audio device. For example, since I'm planning on passing my 1660S, I would take note of IDs such as 10de:1aeb as seen above.
Notice that they belong to different groups? If yours belong to the same group, you need to patch your kernel.
mkdir build && cd build
git clone https://aur.archlinux.org/linux-vfio.git
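From there it's the standard AUR workflow; assuming the clone above succeeded, building and installing looks like this:

```shell
# Build the ACS-patched kernel from the AUR and install it (this takes a while)
cd linux-vfio
makepkg -si
```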
This should be pretty straightforward, albeit rather time consuming. Grab a beer and relax, it's gonna take a minute. Once the patch has successfully compiled for your kernel, you're good to go for now.
Assuming you're using Grub, go ahead and edit your Grub config (/etc/default/grub) by adding (not replacing!) the following values to GRUB_CMDLINE_LINUX_DEFAULT.
If you're on Intel hardware, replace amd_iommu=on with intel_iommu=on and you're good to go.
Notice the vfio-pci.ids parameter? Remember our IOMMU script? The two IDs found here reflect the two IDs I told you to take note of earlier. Be sure to put your own PCI IDs here.
- isolcpus: isolates CPU cores from the host scheduler so they can be used exclusively by the VM
- rcu_nocbs: moves all RCU offload threads to the housekeeping cores
- transparent_hugepage=never: disables transparent huge pages, allowing faster memory access for the VM
- pcie_acs_override: enables the ACS kernel patch we built earlier
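As a rough sketch, the finished line might look something like this. The core ranges and the second PCI ID are placeholders (use your own values), and downstream,multifunction is the commonly used override setting:

```shell
# /etc/default/grub -- example only; keep your existing options and append.
# Use intel_iommu=on instead of amd_iommu=on on Intel hardware.
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:1aeb,xxxx:xxxx isolcpus=2-13 rcu_nocbs=2-13 transparent_hugepage=never"
```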
Once you've got all these options set, go ahead and commit the changes.
sudo grub-mkconfig -o /boot/grub/grub.cfg
Assuming you were able to reboot with no issues, now would be a good time to run the IOMMU script again and verify that your grouping is 100% solid. You should now see your GPUs/audio devices/etc all residing in separate groups, similar to my results above. If not, confirm that your patch was built properly and that you have remembered to pass
pcie_acs_override in your Grub config.
This is where things can get a little tricky. You have to tell your host OS to leave your intended passthrough device alone and to only load up the
vfio-pci kernel drivers for it. What you want to see when running
lspci -nnk is something like this...
Pay attention to the Kernel driver in use line of the output. What you'll likely see there is either the Nvidia proprietary driver or the nouveau driver loaded up on the device. We don't want that here. The way I fixed this was by creating
/etc/modprobe.d/vfio.conf and adding the following to the file. Be sure to use your own device IDs here.
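For reference, a minimal vfio.conf looks something like this. The xxxx:xxxx is a placeholder; both IDs come from the IOMMU script output (your GPU's VGA function and its audio function):

```shell
# /etc/modprobe.d/vfio.conf
# Tell vfio-pci to claim these vendor:device pairs at boot
options vfio-pci ids=10de:1aeb,xxxx:xxxx
```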
Then, I modified
/etc/mkinitcpio.conf to contain the following.
MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"
Once these files were modified and saved, I ran
sudo mkinitcpio -g /boot/linux-custom.img to regenerate
initramfs and rebooted the system.
Assuming your passthrough GPU VGA device is now showing
vfio-pci as the kernel driver, you may still see something other than
vfio-pci in use on your planned GPU passthrough audio device. If that's the case for you, here's how I managed to fix it. Simple Bash script to the rescue.
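The script I used looked something along these lines. The device address here is an example (find yours with lspci -Dnn), and the unbind/new_id dance via sysfs is the standard way to hand a device to vfio-pci:

```shell
#!/bin/bash
# Rebind a PCI device (e.g. the GPU's HDMI audio function) to vfio-pci.

rebind_to_vfio() {
    local dev="$1"
    local base="${2:-/sys/bus/pci}"
    local vendor device
    vendor=$(cat "$base/devices/$dev/vendor")
    device=$(cat "$base/devices/$dev/device")
    # Detach the device from whatever driver currently owns it
    if [ -e "$base/devices/$dev/driver" ]; then
        echo "$dev" > "$base/devices/$dev/driver/unbind"
    fi
    # Tell vfio-pci to claim this vendor:device pair
    echo "$vendor $device" > "$base/drivers/vfio-pci/new_id"
}

# Example invocation (hypothetical address -- use your own):
# rebind_to_vfio 0000:0a:00.1
```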
After running this, just check again and you should see it using the correct
vfio-pci driver. You should only have to run this script once, as the changes persist through reboots.
Go ahead and use
pacaur to grab everything we need.
pacaur install qemu libvirt edk2-ovmf virt-manager
sudo gpasswd -a yourusername libvirt
sudo usermod -a -G libvirt yourusername
This is critical for GPU passthrough to work properly. It will NOT work in BIOS mode, period. You must install your Windows 10 VM using UEFI mode due to the way Nvidia handles VMs. If Nvidia notices we're in a VM, the passthrough device will fail to function. We can only hide the fact that we're in a VM when we use UEFI so it's imperative that we get this right.
On Arch, we store our OVMF files at
/usr/share/ovmf/x64. Be sure to
ls this directory and you should see a few files in
*.fd format. If not, confirm that you have the
edk2-ovmf package installed.
Open /etc/libvirt/qemu.conf and add the following. This will add UEFI firmware support to QEMU.
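For reference, the nvram entry looks something like this. The filenames assume the edk2-ovmf package on Arch; check what actually sits in /usr/share/ovmf/x64 and match it:

```shell
# /etc/libvirt/qemu.conf
# Each entry is "path_to_CODE_firmware:path_to_VARS_template"
nvram = [
    "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
```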
Once you have that added, restart the libvirtd service and check that it came back up cleanly.
sudo systemctl restart libvirtd && sudo systemctl status libvirtd
Make sure you check that the status is good to go. If you see any errors regarding nvram, go back and check your syntax in
qemu.conf and be sure it matches my syntax above.
Click New Virtual Machine
Select Local install media (ISO image or CDROM)
Browse to your Windows 10 ISO.
Allocate CPU and Memory.
Ensure Enable storage for this virtual machine and Create a disk image for the virtual machine are checked.
DO NOT CLICK FINISH.
Now we'll get the VM configured the way we want for GPU passthrough. Assuming you're wanting to install directly to an SSD, begin at step 1. Otherwise, begin at step 2.
Pass SSD device directly to VM
Add VirtIO driver ISO
Browse to the virtio-win-0.1.171.iso you just downloaded.
Networking via VirtIO
Select CPUs on the left pane
Current allocation: Set this to the # of cores you told the wizard. In my case, it's 12.
Ensure Copy host CPU configuration is not selected.
Ensure Enable available CPU security flaw mitigations is not selected.
Set Model to host-passthrough (you'll have to type it in directly)
Under topology, assuming you have 12 "cores" allocated, set...
Sockets to 1
Cores to 6
Threads to 2
Change the cores/threads to equal the amount of allocated "cores".
For example, if it's 8 cores then Cores = 4 and Threads = 2.
I may be off-base here so correct me if I'm wrong, but so far the performance has been nothing short of stellar with these settings.
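If you'd rather sanity-check the XML directly, a 1 socket / 6 core / 2 thread layout with host-passthrough should come out roughly like this:

```xml
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='6' threads='2'/>
</cpu>
```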
Add PCI Passthrough Devices
[OPTIONAL] Add USB game controllers
Click the Begin Installation button at the top left of the window. Give it a minute to complete.
The install process is exactly the same as installing on bare metal. The only catch here is that we're (assuming you are, if not just do step 3.1) installing the OS to a passed through SSD. When you get to the screen where you would normally select your install partitions, you'll have nothing listed. To get the drive to show up...
Click Load Driver
Browse to the VirtIO ISO file, typically mounted to E:\
Drill down through viostor > w10 > amd64 and let it run from there.
Once complete, you should see your SSD and can continue the install as normal.
Once you have the OS installed, the first thing you need to do is open up File Explorer, browse to the VirtIO ISO (again, likely mounted at E:\) and install the Guest Agent Service.
Once that completes, open up Device Manager and begin installing the VirtIO drivers on every device with a yellow triangle. Just right click > Update Driver > Browse computer > Browse button > Select the VirtIO disc. Make sure "include subfolders" is checked and run through the prompts.
Only thing to note here is you should have two display devices. One will allow the install of the Red Hat QXL driver and the other will not. The one that doesn't is your passed through GPU.
Go ahead and get those VirtIO drivers installed and then shutdown the VM.
Open up your VM's XML config with the following command, substituting vmname with the name you gave your VM in QEMU. These steps will allow the Nvidia drivers to install by hiding the fact we're operating inside a VM.
sudo EDITOR=nano virsh edit vmname
Somewhere inside the
<features> tag, add the following parameters.
Then, within the <hyperv> tags, add the following parameters.
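For reference, the usual additions for hiding the VM from Nvidia's driver look something like this (the vendor_id value is arbitrary; any string up to 12 characters works):

```xml
<features>
  <!-- your existing entries stay as they are -->
  <hyperv>
    <!-- existing hyperv entries -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```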
Here's my XML config for reference, excluding the devices section for brevity.
At this point, you're pretty much done. Just load up the VM and install the video drivers from Nvidia's website and reboot. Install all your apps/games/etc like normal.
Go ahead and open your VM's configuration screen and remove the "tablet" device. By doing this, you can click inside the running VM window and your mouse is captured for good.
evdev is another alternative that you can look into.
To release the mouse back to the host, just tap L CTRL + L ALT (Left Control/Left Alt) and your mouse is now free to move on the host.
Now that you've got all the needed drivers installed and your monitor is connected to the GPU that's being passed through, open up Device Manager inside the VM one more time. Under display adapters, right click on the QXL device and click disable. As soon as you do this, your little window into your VM on the host is no longer gonna work. Change the input source on your monitor to reflect the port/device that the GPU is connected to (in my case HDMI-2). Then, boom, you're on your guest and everything should work as intended!
To get back to the host, just change the input back to whatever your host's GPU is using, press L CTRL + L ALT to free the mouse again and you're good to go.
Alright, now here's the fun part. Go do some gaming! You should see performance around the 95% mark, maybe a little more or less depending on your exact build/config. I'm running just a few FPS shy of what I was running natively, so nowhere even close enough to be "noticeable" in the sense of being inside a VM. You should be fine.