Guide to GPU Passthrough (Arch-based)

Table of Contents

Introduction
  Before we start...
  KVM/VFIO
  What you need:
  My specs:
  Things to keep in mind:
Getting Started [Hardware]
  Basic Hardware Topology
Getting Started [Software]
  Update System
  Check IOMMU Grouping
  ACS Kernel Patch (simplest method)
  Grub Configuration
  Confirm IOMMU Grouping
  Isolating PCI Devices
  (Optional) Fixing audio kernel driver
Preparing QEMU
  Installation
  Adding your user to libvirt group
  Enabling UEFI Support (VERY IMPORTANT!)
Installing Windows 10
  Deploying Virtual Machine
  Configuring Virtual Machine
  Installing the OS
  OS Configuration (Post-Install)
  XML Configuration
Final Configurations
  Mouse support
  Monitor output
Game on!

Introduction

Before we start...

If you're on a mobile device, keep in mind the code blocks will not render properly, resulting in "line wrapping" of code. I'd recommend viewing this on a PC or at least with "desktop mode" enabled on your device's browser.

I'm going to be assuming that you understand the very basics of Linux file manipulation, running scripts, etc. I won't be explaining all the basic things like how to open files in a terminal, how to save a Bash script and make it executable or how to run it. I'm going to tell you what you need, explain why and give you the code. It's up to you to plug it in and do your thing. If that sort of thing is outside your scope, you're gonna have a bad time with this.

 

KVM/VFIO

VFIO, or Virtual Function I/O, is a technology in the Linux kernel that exposes hardware devices to userspace in a secure, IOMMU-protected environment. In the simplest of terms, VFIO allows you to pass your physical hardware directly to a virtual machine, rather than forcing your VM to emulate a particular type of hardware. Done properly, this yields near bare-metal performance for most applications (particularly gaming). We're talking upwards of 95-98% of spec with the right configuration.

 

What you need:

 

My specs:

I'll be writing this guide assuming you're running either Arch or Manjaro. It is entirely possible to make this work on any other distro, but Arch is what I'm most comfortable with. Just replace the distro-specific commands with the matching commands for your build (i.e., pacman vs. apt).

 

Things to keep in mind:
  1. Might be a good idea to make sure you have SSH running on your box. If you screw up (and it's rather likely on the first try), being able to quickly SSH in to unf*ck your build from another machine will save you a lot of hassle.
  2. Try to avoid using two identical GPUs. While it's possible to make this work, it's extremely tedious and honestly, just not worth the added hassle. If you already have a nice GPU that you're wanting to pass through (and assuming you don't have integrated graphics), just buy a cheap card of the same brand on Amazon. This will save you a lot of time, energy and money... and quite a bit of sanity.
  3. While not explicitly needed, the performance difference when dedicating an SSD entirely to your VM is 100% worth it. If you have a spare disk, I'd highly recommend going this route. You'll appreciate the difference.

 

Getting Started [Hardware]

Basic Hardware Topology

At this point, I went ahead and installed my second GPU. Might be worth noting that my primary monitor has multiple HDMI and DP ports to work with.

There are quite a few ways you can do this. You can set things up like I have demonstrated here, you can use a dummy plug on your guest GPU and just set up something like Looking Glass to read the frame buffer from memory, etc, etc. Having experimented a bit, it's just easier for me to do things this way. YMMV.

 

Getting Started [Software]

Note: From this point forward, I'm using my host GPU to give me output on monitor #1 and #2. The guest GPU is connected to monitor #1 but I have the source set to HDMI port #1 for the majority of this article. This will make more sense as we go along.

Update System

sudo pacman -Syu

Check IOMMU Grouping

This is perhaps the most important step. You absolutely have to be certain that your IOMMU grouping is perfect. If it's not, you will accomplish nothing and your VM will likely refuse to accept any passthrough devices.

Here is a simple Bash script that will give you all the valuable IOMMU information you seek.
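This is essentially the standard loop from the Arch Wiki's PCI passthrough page; if the version you've seen elsewhere differs, anything that walks /sys/kernel/iommu_groups will do.

#!/bin/bash
# List every IOMMU group and the devices it contains.
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done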

The output of this script should look similar to this.
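Trimmed to the two lines that matter for this guide (the group numbers are illustrative and your PCI addresses will differ), mine showed:

IOMMU Group 27:
	25:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] [10de:21c4]
IOMMU Group 28:
	25:00.1 Audio device [0403]: NVIDIA Corporation TU116 High Definition Audio Controller [10de:1aeb]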

 

You'll more than likely have fewer groups than this and will need to patch your kernel for better IOMMU support. We'll get to that in a moment. For right now, you just need to take note of the hardware IDs for your intended passthrough GPU and its associated audio device. For example, since I'm planning on passing my 1660S, I would take note of the IDs 10de:21c4 and 10de:1aeb as seen above.

Notice that they belong to different groups? If yours belong to the same group, you need to patch your kernel.

 

ACS Kernel Patch (simplest method)
  1. mkdir build && cd build
  2. git clone https://aur.archlinux.org/linux-vfio.git
  3. cd linux-vfio
  4. makepkg -si

This should be pretty straightforward, albeit rather time consuming. Grab a beer and relax, it's gonna take a minute. Once the patch has successfully compiled for your kernel, you're good to go for now.

 

Grub Configuration

Assuming you're using Grub, go ahead and edit your Grub config (/etc/default/grub) by adding (not replacing!) the following values to GRUB_CMDLINE_LINUX_DEFAULT
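The values in question, with my device IDs plugged in (quiet stands in for whatever your line already contains, and downstream,multifunction is the usual override value paired with the ACS patch):

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:21c4,10de:1aeb"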

If you're on Intel hardware, replace amd_iommu with intel_iommu and you're good to go.

Notice the vfio-pci.ids parameter? Remember our IOMMU script? The two IDs found here reflect the two IDs I told you to take note of earlier. Be sure to put your own PCI IDs here.

Once you've got all these options set, go ahead and commit the changes.

sudo grub-mkconfig -o /boot/grub/grub.cfg

Reboot.

 

Confirm IOMMU Grouping

Assuming you were able to reboot with no issues, now would be a good time to run the IOMMU script again and verify that your grouping is 100% solid. You should now see your GPUs/audio devices/etc all residing in separate groups, similar to my results above. If not, confirm that your patch was built properly and that you have remembered to pass pcie_acs_override in your Grub config.

 

Isolating PCI Devices

This is where things can get a little tricky. You have to tell your host OS to leave your intended passthrough device alone and to only load up the vfio-pci kernel drivers for it. What you want to see when running lspci -nnk is something like this...
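Using my 1660S as the example (the Kernel modules line will vary with which drivers you have installed):

25:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] [10de:21c4]
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau, nvidia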

Note the Kernel driver in use output. What you'll likely see here is either the Nvidia proprietary drivers or the nouveau drivers loaded up on the device. We don't want that here. The way I fixed this was creating /etc/modprobe.d/vfio.conf and adding the following to the file. Be sure to use your own device IDs here.
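With my IDs, the file is a single line (substitute your own):

options vfio-pci ids=10de:21c4,10de:1aeb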

Then, I modified /etc/mkinitcpio.conf to contain the following.

MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"

Once these files were modified and saved, I ran sudo mkinitcpio -g /boot/linux-custom.img to regenerate initramfs and rebooted the system.

 

(Optional) Fixing audio kernel driver

Assuming your passthrough GPU VGA device is now showing vfio-pci as the kernel driver, you may still see something other than vfio-pci in use on your planned GPU passthrough audio device. If that's the case for you, here's how I managed to fix it. Simple Bash script to the rescue.
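What follows is a minimal sketch of the usual sysfs rebind rather than my original script verbatim; 0000:25:00.1 is my audio function, so substitute your own address. Run it as root.

#!/bin/bash
# Rebind the GPU's audio function to vfio-pci without a reboot.
dev="0000:25:00.1"   # <-- your GPU's audio device address here
# Detach it from whatever driver (likely snd_hda_intel) grabbed it.
echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
# Mark the device as belonging to vfio-pci, then bind it.
echo "vfio-pci" > "/sys/bus/pci/devices/$dev/driver_override"
echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind

Since the vfio-pci.ids entries from your Grub config already claim the device on every boot, this only needs to fix up the current session.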

After running this, just check again and you should see it using the correct vfio-pci driver. You should only have to run this script once, as the changes persist through reboots.

 

Preparing QEMU

Installation

Go ahead and use pacaur to grab everything we need.

pacaur install qemu libvirt edk2-ovmf virt-manager

 

Adding your user to libvirt group

sudo gpasswd -a yourusername libvirt

or

sudo usermod -a -G libvirt yourusername

 

Enabling UEFI Support (VERY IMPORTANT!)

This is critical for GPU passthrough to work properly. It will NOT work in BIOS mode, period. You must install your Windows 10 VM in UEFI mode due to the way Nvidia's drivers handle VMs: if the driver detects it's running in a VM, the passthrough device will fail to function (the infamous Code 43 error). We can only hide the fact that we're in a VM when using UEFI, so it's imperative that we get this right.

On Arch, we store our OVMF files at /usr/share/ovmf/x64. Be sure to ls this directory and you should see a few files in *.fd format. If not, confirm that you have the edk2-ovmf package installed.

Open /etc/libvirt/qemu.conf and add the following. This will add UEFI firmware support to QEMU.
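The entry in question is the standard nvram list, pointing at the OVMF files from the directory mentioned above:

nvram = [
    "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]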

 

Once you have that added, restart the libvirtd service.

sudo systemctl restart libvirtd && sudo systemctl status libvirtd

Make sure you check that the status is good to go. If you see any errors regarding nvram, go back and check your syntax in qemu.conf and be sure it matches my syntax above.

 

Installing Windows 10

Deploying Virtual Machine
  1. Open virt-manager

  2. Click New Virtual Machine

  3. Select Local install media (ISO image or CDROM)

  4. Browse to your Windows 10 ISO.

    1. Be sure to identify the OS as "Windows 10" at the bottom!
    2. Next.
  5. Allocate CPU and Memory.

    1. Be sure to reserve at least 2 cores for your HOST. I show "16 available" and I allocate 12.
    2. I give the guest 32GB of RAM out of my host's 64GB.
    3. Next.
  6. Ensure Enable storage for this virtual machine and Create a disk image for the virtual machine are checked.

    1. If you're passing through an entire SSD, just allot 2GB for now to finish the wizard. We'll fix it later.
    2. If you're wanting to just run the VM in the "normal" way, give it as much storage as you want, with a minimum of 50GB recommended.
    3. Next.
  7. DO NOT CLICK FINISH.

    1. Ensure Customize Configuration before install is checked.
    2. Give your VM a name.
    3. Under network selection, select Host device [your device ID] -> Source mode -> Bridge.
    4. Now you can click Finish.

     

Configuring Virtual Machine

Now we'll get the VM configured the way we want for GPU passthrough. Assuming you're wanting to install directly to an SSD, begin at step 1. Otherwise, begin at step 2.

  1. Pass SSD device directly to VM

    1. Add Hardware -> Storage -> Select or create custom storage
    2. Device type: Disk device
    3. Bus type: VirtIO
    4. Click Manage...
    5. Click the + on the bottom left (Add Pool).
    6. Type: dir: Filesystem Directory
    7. Name it whatever you want, or leave it on the default of pool. Doesn't matter.
    8. Target Path: /dev
    9. Finish.
    10. Select the newly created pool on the left and locate the sd* device you want to install Windows on. In my case, this was sdb. Don't aim for partition numbers, just use the device (i.e., sdb and not sdb1).
    11. Make sure the device is selected and click Choose Volume
    12. Confirm that it's still set to VirtIO and raw, if so, click Finish.
    13. Go ahead and remove the 2GB disk that was added by the wizard: locate it in the left pane, right click it and select Remove Hardware.
  2. Add VirtIO driver ISO

    1. Download the virtio-win driver ISO and store it (preferably in the same pool) with your Win10 ISO
    2. Add Hardware -> Storage -> Select or create custom storage
    3. Device type: CDROM device
    4. Bus type: SATA
    5. Click Manage...
    6. Select the pool with your ISOs
    7. Select the virtio-win-0.1.171.iso you just downloaded.
    8. Click Choose Volume
    9. Finish.
  3. Networking via VirtIO

    1. Find the NIC on the left pane
    2. Make sure source mode is Bridge
    3. Device model: virtio
    4. Apply.
  4. Boot Options

    1. Select Boot Options on the left pane
    2. Check Enable boot menu
    3. Make sure the two CDROM drives and SSD are all selected.
    4. The order, from top to bottom should be SSD > CDROM 1 > CDROM 2
    5. Apply.
  5. Chipset/Firmware

    1. Select Overview on the left pane
    2. Chipset: Q35
    3. Firmware: UEFI ending in OVMF_CODE.fd
    4. Apply.
  6. CPU Configuration

    1. Select CPUs on the left pane

    2. Current allocation: Set this to the # of cores you told the wizard. In my case, it's 12.

    3. Ensure Copy host CPU configuration is not selected.

    4. Ensure Enable available CPU security flaw mitigations is not selected.

    5. Set Model to host-passthrough (you'll have to type it in directly)

    6. Under topology, assuming you have 12 "cores" allocated, set...

      1. Sockets to 1
      2. Cores to 6
      3. Threads to 2

      Cores times threads should equal the number of allocated "cores": 6*2=12. If you allocated 8 instead, set Cores = 4 and Threads = 2 (4*2=8). I may be off-base here, so correct me if I'm wrong, but so far the performance has been nothing short of stellar with these settings.

    7. Apply.
  7. Add PCI Passthrough Devices

    1. Click Add Hardware
    2. Select PCI Host Device
    3. Scroll down until you find the GPU you want to pass through. The IDs should match your outputs from earlier. For example, in my case it's... 0000:25:00.0 NVIDIA Corporation TU116 [GeForce GTX 1660 Super]
    4. Finish. Then repeat the same process for the GPU's audio device from earlier.
  8. [OPTIONAL] Add USB game controllers

    1. Click Add Hardware
    2. Select USB Host Device
    3. Select any Xbox controllers, joysticks, etc. that you want passed to the VM for gaming.
    4. Finish.
  9. Begin Installation

    1. Click the Begin Installation button at the top left of the window. Give it a minute to complete.
    2. Once completed, you can select your VM and start it.

     

Installing the OS

The install process is exactly the same as installing on bare metal. The only catch is that we're installing the OS to a passed-through SSD (if you're not, just do step 3.1 below for the network driver). When you get to the screen where you would normally select your install partitions, you'll have nothing listed. To get the drive to show up...

  1. Click Load Driver

  2. Browse to the VirtIO ISO file, typically mounted to E:\

  3. Drill down through viostor > w10 > amd64 and let it run from there.

    1. Do the same process one more time but for the VirtIO network adapter we're passing through. That driver is NetKVM > w10 > amd64
  4. Once complete, you should see your SSD and can continue the install as normal.

 

OS Configuration (Post-Install)

Once you have the OS installed, the first thing you need to do is open up File Explorer, browse to the VirtIO ISO (again, likely mounted at E:\) and install the Guest Agent Service.

Once that completes, open up Device Manager and begin installing the VirtIO drivers on every device with a yellow triangle. Just right click > Update Driver > Browse computer > Browse button > select the VirtIO disc. Make sure "include subfolders" is checked and run through the prompts.

The only thing to note here is that you should have two display devices. One will allow the install of the Red Hat QXL driver and the other will not. The one that doesn't is your passed-through GPU.

Go ahead and get those VirtIO drivers installed and then shutdown the VM.

 

XML Configuration

Open up your VM's XML config with the following command, substituting vmname with the name you gave your VM in QEMU. These steps will allow the Nvidia drivers to install by hiding the fact we're operating inside a VM.

sudo EDITOR=nano virsh edit vmname

Somewhere inside the <features> tag, add the following parameters.
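The key one is the standard kvm hidden toggle:

<kvm>
  <hidden state='on'/>
</kvm>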

Inside the <hyperv> tags, add the following parameters.
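Namely, a spoofed vendor ID; the value is an arbitrary string of up to 12 characters:

<vendor_id state='on' value='1234567890ab'/>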

Here's my XML config for reference, excluding the devices section for brevity.
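Reconstructed from the settings above rather than copied verbatim (the relaxed/vapic/spinlocks entries are virt-manager's defaults for a Windows guest, and your os, clock and devices sections will differ):

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='1234567890ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='6' threads='2'/>
</cpu>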

 

Final Configurations

At this point, you're pretty much done. Just load up the VM and install the video drivers from Nvidia's website and reboot. Install all your apps/games/etc like normal.

 

Mouse support

Go ahead and open your VM's configuration screen and remove the "tablet" device. By doing this, you can click inside the running VM window and your mouse is captured for good. evdev is another alternative that you can look into.

To release the mouse back to the host, just tap L CTRL + L ALT (Left Control/Left Alt) and your mouse is now free to move on the host.

 

Monitor output

Now that you've got all the needed drivers installed and your monitor is connected to the GPU being passed through, open up Device Manager inside the VM one more time. Under Display adapters, right click the QXL device and click Disable. As soon as you do this, your little window into the VM on the host is no longer gonna work. Change the input source on your monitor to reflect the port/device the guest GPU is connected to (in my case HDMI-2). As soon as you do, boom, you're on your guest and everything should work as intended!

To get back to the host, just change the input back to whatever your host's GPU is using, press L CTRL + L ALT to free the mouse again and you're good to go.

 

Game on!

Alright, now here's the fun part. Go do some gaming! You should see performance around the 95% mark, maybe a little more or less depending on your exact build/config. I'm running just a few FPS shy of what I was getting natively, nowhere near enough of a difference to make it feel like you're inside a VM. You should be fine.

Enjoy!