Important note: Due to restrictions of the forum, there will be no further updates to this tutorial here. Instead I'm hosting the latest version of the tutorial here: https://heiko-sieger.info/running-windo ... ssthrough/. This tutorial has been rewritten for Linux Mint 19!
I will be looking for posts here in the forum relating to the tutorial.
Change log:
05.12.2017 - Link to tutorial for Optimus laptop owners
13.08.2017 - Stream audio to Windows
03.08.2017 - fixed trouble shooting section - AMD Crimson driver / BSOD issue
25.07.2017 - simplified start script
16.07.2017 - replace old bridge configuration with Network Manager bridge configuration
15.07.2017 - update keyboard/mouse section; added link to new Ubuntu 16.04 tutorial
11.07.2017 - AMD Ryzen support; fixed OVMF upgrade procedure
08.04.2017 - added link to Optimus post; added link to techpowerup VGA BIOS database
28.03.2017 - added link to Windows 7 instructions
28.02.2017 - disable Network Manager instead of removing it
12.02.2017 - fixed dead link under hugepages section; changed heading; minor text editing
08.02.2017 - added link to VGA passthrough video tutorial on forum.level1techs.com; trouble shooting section on "dirty IOMMU" groups
07.02.2017 - Radeon graphics and reboot loop solved using "pc" option; disable hibernation, suspend, and fast startup in Windows VM
26.01.2017 - link to ACS override patch tutorial (post); -machine type=pc versus q35; collaborative effort call
10.01.2017 - determine IOMMU groups; added link to post on more drive configurations; added link to sample configuration post
28.12.2016 - added pulse audio configuration to start script; added link to glossary
20.12.2016 - troubleshooting section: added possible solution to AMD Crimson driver BSOD
15.12.2016 - troubleshooting section includes ACS override patch and Skylake issue; hardware requirements: varying level of device isolation support in Intel VT-d enabled CPUs
29.11.2016 - added note re selecting IGP; add iommu option to grub before testing for iommu support
15.11.2016 - added network bridge configuration; changed method of binding GPU to vfio-pci; removed fixed typo note; changed caption of part 10 and added link to tutorial
09.11.2016 - removed check for GPU UEFI support
23.10.2016 - added link to network bridge configuration under part 4; added this change log
21.10.2016 - fixed typo in part 4 (local.conf) that prevented vfio-pci driver to bind to graphics card; modified /etc/default/grub configuration
03.10.2016 - fixed an error in the VM startup script
I've run tests to eliminate errors, but mistakes are only human. Please post or PM me if you find that something needs fixing.
Collaborative Effort
The tutorial below is a collaborative effort. It thrives on your comments and feedback, and I try to incorporate information from your posts into the tutorial.
About this Tutorial
In recent years a technology called "VGA passthrough" has found support in a number of virtualization solutions, including Xen, VMware (commercial), and KVM. VGA passthrough means that the physical graphics card is passed through to the guest operating system (e.g. Windows) and uses the graphics driver installed in the guest. This offers native or near native 2D and 3D graphics performance inside the guest OS.
The present tutorial describes how to install and run Windows 10 as a virtual machine (VM) with near native performance using KVM. If you want to switch from Windows to Linux, but would like to play an occasional game on Windows, or run some other Windows applications, read on.
For instructions on how to use this tutorial to run Windows 7 as a virtual machine, see preparing a Windows 7 UEFI image.
Glossary
I've added a glossary of terms here: viewtopic.php?f=231&t=212692&p=1258103#p1258103.
Part 1 - Hardware Requirements
Your PC hardware must support IOMMU. In Intel jargon it's called "VT-d"; AMD calls it variously "AMD Virtualization", "AMD-V", or "Secure Virtual Machine" (SVM), and even plain "IOMMU" has surfaced. If you plan to purchase a new PC/CPU, check the manufacturer's website for more information:
- AMD - http://products.amd.com/en-us and check the processor specs.
Most PC / motherboard manufacturers disable IOMMU by default. You will have to enable it in the BIOS. To check your current CPU / motherboard IOMMU support and enable it, do the following:
1. Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).
2. Search for IOMMU, VT-d, SVM, or "virtualization technology for directed IO" or whatever it may be called on your system. Turn on IOMMU.
3. Save and Exit BIOS and boot into Linux.
4. Edit the /etc/default/grub file (you need root permission to do so). Here is mine before the edit:
- GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=10
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
GRUB_CMDLINE_LINUX=""
For an Intel CPU, add the following option to the GRUB_CMDLINE_LINUX_DEFAULT line:
Code: Select all
intel_iommu=on
For an AMD CPU, add instead:
Code: Select all
amd_iommu=on
Save the file, then update the grub configuration:
Code: Select all
sudo update-grub
5. Reboot your PC and verify that IOMMU is enabled. On AMD machines use:
Code: Select all
dmesg | grep AMD-Vi
- ...
AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi: Initialized for Passthrough Mode
...
You can also check that your AMD CPU advertises the svm (virtualization) flag:
Code: Select all
cat /proc/cpuinfo | grep svm
On Intel machines use:
Code: Select all
dmesg | grep "Virtualization Technology for Directed I/O"
- [ 0.902214] DMAR: Intel(R) Virtualization Technology for Directed I/O
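If you are unsure which messages to look for, a small combined check works for both vendors. This is only a sketch: the exact dmesg strings vary between kernel versions, and on some systems dmesg requires root.

```shell
#!/bin/sh
# Check the kernel log for either vendor's IOMMU initialization messages:
# "DMAR" lines on Intel (VT-d), "AMD-Vi" lines on AMD.
if dmesg 2>/dev/null | grep -Eq 'DMAR|AMD-Vi'; then
    echo "IOMMU messages found - IOMMU appears to be enabled"
else
    echo "no IOMMU messages - check the BIOS setting and the grub options"
fi
```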
In addition to a CPU and motherboard that supports IOMMU, you need two graphics processors (GPU):
1. One GPU for your Linux host (the OS you are currently running, I hope);
2. One GPU (graphics card) for your Windows guest.
We are building a system that runs two operating systems at the same time. Many resources such as disk space and memory can be switched back and forth between the host and the guest as needed. Unfortunately the GPU cannot be switched or shared between the two operating systems (work is being done to overcome this).
If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, you'll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP).
In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI - most newer cards do. You can check here if your video card and BIOS support UEFI. (For more information, see here: http://vfio.blogspot.com.au/2014/08/doe ... t-efi.html. Note that the test described there doesn't work for me anymore, since I can't copy the rom.)
Laptop users with Nvidia Optimus technology: User Misairu_G published an in-depth guide to VGA passthrough on laptops using Nvidia Optimus technology - see GUIDE Optimus laptop dGPU passthrough. Kudos! For reference, here some older discussions on the subject: viewtopic.php?f=231&t=212692&p=1300764#p1300634.
Part 2 - Installing Qemu / KVM
The Qemu emulator shipped with Linux Mint 18 to 18.3 is version 2.5, which supports almost all of the latest and greatest KVM features.
We install kvm, qemu, and some other stuff we need or want:
Code: Select all
sudo apt-get update
sudo apt-get install qemu-kvm seabios qemu-utils hugepages bridge-utils
For AMD Ryzen, see here.
Part 3 - Determining the Devices to Pass Through to Windows
We need to find the PCI ID(s) of the graphics card and any other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use two graphics cards. Here is my hardware setup:
* GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
* GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.
To determine the PCI bus number and PCI IDs, enter:
Code: Select all
lspci | grep VGA
- 01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)
Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:
Code: Select all
lspci -nn | grep 02:00.
- 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
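The IDs in square brackets are exactly what we will need for the vfio-pci configuration in Part 4. Here is a small sketch that extracts them and builds the modprobe line; it runs on the sample output above, and on a real system you would replace the here-doc with `lspci -nn | grep '02:00.'`:

```shell
#!/bin/sh
# Extract the [vendor:device] IDs from `lspci -nn` style output and
# build the "options vfio-pci ids=..." line used later in Part 4.
# Sample lines are embedded here for illustration.
ids=$(grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' <<'EOF' | tr -d '[]' | paste -sd, -
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
EOF
)
echo "options vfio-pci ids=$ids"
```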
Now check to see that the graphics card resides within its own IOMMU group:
Code: Select all
find /sys/kernel/iommu_groups/ -type l
Code: Select all
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done
- ...
/sys/kernel/iommu_groups/18/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:00:1f.2
/sys/kernel/iommu_groups/18/devices/0000:00:1f.0
/sys/kernel/iommu_groups/19/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:01:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/20/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:05:00.0
/sys/kernel/iommu_groups/21/devices/0000:06:04.0
...
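To list only the devices that share a group with the card you want to pass through, you can filter the listing by group number. A sketch over the sample output above: in my case group 20 holds the GTX 970 and its audio function, and on a real system you can resolve the group number with `readlink /sys/bus/pci/devices/0000:02:00.0/iommu_group`.

```shell
#!/bin/sh
# Show every device in a given IOMMU group (here group 20).
# The here-doc stands in for the real output of:
#   for a in /sys/kernel/iommu_groups/*; do find $a -type l; done
group=20
grep "/iommu_groups/$group/" <<'EOF'
/sys/kernel/iommu_groups/19/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:01:00.0
/sys/kernel/iommu_groups/20/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:02:00.0
/sys/kernel/iommu_groups/21/devices/0000:05:00.0
EOF
```

If anything other than the card's own functions shows up for its group, you have the shared-group problem described in the next paragraph.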
If your VGA card shares an IOMMU group with other PCI devices, you may need to either upgrade the kernel to 4.8, or compile the kernel with the ACS override patch. See more below under part 9 - troubleshooting.
The next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run two independent operating systems side by side, and we control them via mouse and keyboard.
****************************************************************************************************************************************************************************
Side note about keyboard and mouse:
Depending whether and how much control you want to have over each system, there are different approaches:
1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device, usually with 2 USB ports for keyboard and mouse as well as a VGA or (more expensive) DVI or HDMI graphics output. In addition, the USB-KVM switch has two USB cables and two VGA/DVI/HDMI cables to connect to two different PCs. Since we run two virtual PCs on one single system, this is a viable solution.
Advantages:
- Works without special software in the OS, just the usual mouse and keyboard drivers;
- Best in performance - no software overhead.
Disadvantages:
- Requires extra (though inexpensive) hardware;
- More cable clutter and another box with cables on your desk;
- Requires you to press a button to switch between host and guest and vice versa;
- Need to pass through a USB port or controller - see below on IOMMU groups.
2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
- Easy to implement;
- No money to invest;
- Good solution for setting up Windows and preparing it for solution no. 3.
Disadvantages:
- Once the guest starts, your mouse and keyboard only control that guest, not the host. You will have to plug them into another port to gain access to the host.
3. Synergy (http://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
- Most versatile solution, especially with dual screens;
- Software only, easy to configure;
- No hardware purchase required.
Disadvantages:
- Requires the installation of software on both the host and the guest;
- Doesn't work during Windows installation (see option 2);
- Costs $10 for a Basic, lifetime license;
- May produce lag, although I doubt you'll notice unless there is something wrong with the bridge configuration.
4. "Multi-device" bluetooth keyboard/mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
- Most convenient solution;
- Same performance as option 1.
Disadvantages:
- Price.
- Make sure the device supports Linux, or that you can return it if it doesn't!
I first went with option 1 for robustness and universality, but have replaced it with option 4. I'm now using a Logitech MX master BT mouse and a Logitech K780 BT keyboard. See here for how to pair these devices to the USB dongles.
****************************************************************************************************************************************************************************
For the VM installation we choose option 2 (see above); that is, we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB IDs:
Code: Select all
lsusb
- ...
Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600
...
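These IDs will later be fed to qemu in the start script (Part 7). As a small sketch, the USB passthrough options can be built from a list of IDs like this; the two IDs shown are my Microsoft mouse and keyboard, so substitute your own from lsusb:

```shell
#!/bin/sh
# Build the qemu USB passthrough arguments from a list of
# vendor:product IDs as reported by lsusb.
usb_ids="045e:076c 045e:0750"   # mouse and keyboard from the lsusb output above
args="-usb"
for id in $usb_ids; do
    args="$args -usbdevice host:$id"
done
echo "$args"
```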
Part 4 - Prepare for Passthrough
We are going to assign the dummy driver vfio-pci to the graphics card we want to use under Windows. To do that, we first have to prevent the default driver from binding to the graphics card.
Once more edit the /etc/default/grub file (you need root permission to do so).
The entry we are looking for is "GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"". We need to add one of the following options to this line, depending on your hardware:
Nvidia graphics card for Windows VM:
Code: Select all
modprobe.blacklist=nouveau
AMD graphics card for Windows VM:
Code: Select all
modprobe.blacklist=radeon
or, if your card uses the newer amdgpu driver:
Code: Select all
modprobe.blacklist=amdgpu
After editing, my /etc/default/grub file now looks like this:
- GRUB_DEFAULT=0
#GRUB_HIDDEN_TIMEOUT=10
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT_STYLE=countdown
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="modprobe.blacklist=nouveau quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""
Code: Select all
sudo update-grub
In order to make the graphics card available to the Windows VM, we will assign a "dummy" driver as a place holder: vfio-pci.
Note: If you have two identical graphics cards for both the host and the VM, the method below won't work. In that case see the following posts: viewtopic.php?f=231&t=212692&start=40#p1174032 as well as viewtopic.php?f=231&t=212692&start=40#p1173262.
Go to the terminal window and enter:
Code: Select all
sudo -i
Then create (or edit) the file /etc/modprobe.d/local.conf:
Code: Select all
gksudo xed /etc/modprobe.d/local.conf
Insert the following line, using the PCI IDs of your graphics card and its audio device that you determined in Part 3:
Code: Select all
options vfio-pci ids=10de:13c2,10de:0fbb
Save the file and exit the editor.
Some applications like Passmark require the following option:
Code: Select all
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf
Next, open /etc/initramfs-tools/modules:
Code: Select all
gksudo xed /etc/initramfs-tools/modules
and add the following lines at the end:
Code: Select all
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net
Save and close the file. Then update the initramfs. Enter at the command line (as root):
Code: Select all
update-initramfs -u
Next we create a network bridge using Network Manager, so that the Windows VM gets full access to your local network. Note: Bridging only works for wired networks. If your PC is connected to a router via a wireless link (Wifi), you won't be able to use a bridge. There are workarounds such as routing or ebtables (see https://wiki.debian.org/BridgeNetworkCo ... reless_NIC).
a. Click the Network Manager icon near the bottom right of the panel, then select "Edit connections..."
b. Click "Add", then select "Bridge" from the drop-down menu. Now click "Create...".
c. Select the Bridge tab and click "Add".
d. From the drop-down menu, choose "Ethernet" as the connection type and click "Create...".
e. Under the Ethernet tab, select your network link next to the "Device:" specification. I have selected eno1 which is the fixed Ethernet link to my router. Click "Save" to save the configuration and to close the window.
f. Back in the "Editing Bridge connection 1" window, select the IPv4 Settings tab. If you wish to automatically assign an IP address every time you start your PC, you can click "Save" and close the window. Congratulations - you are done!
g. If, like me, you prefer to use a static IP, select "Manual" from the "Method:" drop-down menu. Fill in the IP address, network mask, gateway address, and the DNS server address. I also selected "Require IPv4 addressing for this connection to complete" (not sure this is needed). Once you are done, click "Save". Close the "Network connections" window.
h. Check the new configuration by clicking on the Network Manager icon, then select "Connection Information".
i. Now restart your PC to enable the new configuration.
For a step-by-step guide with screen shots see Define a network bridge using Ubuntu’s / Linux Mint’s Network Manager application.
Part 5 - Set up hugepages
This step is not required to run the Windows VM, but it helps improve performance. First we need to decide how much memory we want to give to Windows. Here are my suggestions:
1. No less than 4GB.
2. If you have 16GB total and aren't running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.
For this tutorial I use 8GB. Let's see what we got:
Code: Select all
hugeadm --explain
- hugeadm:ERROR: No hugetlbfs mount points found
To enable hugepages for KVM, open /etc/default/qemu-kvm as root and change the KVM_HUGEPAGES line to:
Code: Select all
KVM_HUGEPAGES=1
Save the file and reboot for the change to take effect.
After the reboot, in a terminal window, enter again:
Code: Select all
hugeadm --explain
- Total System Memory: 32180 MB
Mount Point Options
/run/hugepages/kvm rw,relatime,mode=775,gid=126
Huge page pools:
Size Minimum Current Maximum Default
2097152 0 0 0 *
...
Another way to determine the hugepage size is:
Code: Select all
grep "Hugepagesize:" /proc/meminfo
- Hugepagesize: 2048 kB
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB
Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages
We need to add some % extra space for overhead (some say 2%, some say 10%), to be on the safe side I use 4500 (if memory is scarce, you should be OK with 4300, perhaps less - see comments here).
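The arithmetic above can be written as a small calculation you can adapt to your own memory size. This sketch assumes 2MB hugepages and roughly 10% overhead; the integer rounding lands near the 4500 I use.

```shell
#!/bin/sh
# Number of 2MB hugepages needed for an 8GB guest, plus ~10% overhead.
guest_mb=8192   # 8 GB = 8 x 1024 MB
page_mb=2       # hugepage size (2048 kB in /proc/meminfo)
pages=$((guest_mb / page_mb))
echo "exact pages:      $pages"
echo "with 10% margin:  $((pages + pages / 10))"
```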
To configure the hugepage pool, open /etc/sysctl.conf as root:
Code: Select all
gksudo xed /etc/sysctl.conf
Add the following lines at the end of the file:
Code: Select all
# Set hugetables / hugepages for KVM single guest using 8GB RAM
vm.nr_hugepages = 4500
Now reboot.
After the reboot, run in a terminal:
Code: Select all
hugeadm --explain
- Total System Memory: 32180 MB
Mount Point Options
/run/hugepages/kvm rw,relatime,mode=775,gid=126
Huge page pools:
Size Minimum Current Maximum Default
2097152 4500 4500 4500 *
Huge page sizes with configured pools:
2097152
A /proc/sys/kernel/shmmax value of 9223372036854775807 bytes may be sub-optimal. To maximise
shared memory usage, this should be set to the size of the largest shared memory
segment size you want to be able to use. Alternatively, set it to a size matching
the maximum possible allocation size of all huge pages. This can be done
automatically, using the --set-recommended-shmmax option.
The recommended shmmax for your currently allocated huge pages is 9437184000 bytes.
To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
kernel.shmmax = 9437184000
You can apply the recommended shmmax immediately by running:
Code: Select all
sudo hugeadm --set-recommended-shmmax
and make the setting persistent by adding this line to /etc/sysctl.conf:
Code: Select all
kernel.shmmax = 9437184000
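The recommended shmmax is simply the size of the hugepage pool in bytes, which you can verify yourself:

```shell
#!/bin/sh
# shmmax should cover the whole hugepage pool:
# 4500 pages x 2097152 bytes (2MB) per page.
echo $((4500 * 2097152))   # prints 9437184000, the value hugeadm recommends above
```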
Part 6 - Download OVMF BIOS and VFIO drivers
I chose to use UEFI to boot the Windows VM. There are some advantages to it: it starts faster and it overcomes some limitations associated with legacy boot (Seabios).
First install OVMF:
Code: Select all
sudo apt-get install ovmf
Download the VFIO driver ISO to be used with the Windows installation from https://fedoraproject.org/wiki/Windows_Virtio_Drivers. Below are the direct links to the ISO images:
Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/vi ... io-win.iso
Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/vi ... io-win.iso
I chose the latest drivers.
Part 7 - Prepare disk and create startup script
We need some disk space on which to install the Windows VM. There are several choices:
1. Create a raw image file.
Advantages:
- Easy to implement;
- Flexible - the file can grow with your requirements;
- Snapshots;
- Easy migration;
- Good performance.
Disadvantages:
- Takes up the entire space you specify;
- Not sure how to access the file from the Linux host. (Note: I haven't tried this yet!)
2. Create a dedicated LVM volume for the Windows VM.
Advantages:
- Familiar technology (at least to me);
- Bare-metal performance (well, as close as it gets);
- Flexible - you can add physical drives to increase the volume size;
- Snapshots;
- Mountable within Linux host using kpartx.
Disadvantages:
- Takes up the entire space specified;
- Not sure about migration.
3. Pass through a PCI SATA controller / disk.
Advantages:
- Best performance, using original Windows drivers;
- Allows the use of Windows virtual drive features;
- Possibility to boot Windows directly, i.e. not as VM (haven't tried it).
Disadvantages:
- Host has no access to disk while VM is running;
- Requires dedicated SATA controller and drive(s) (all members of the IOMMU group need to be passed through);
- Offers less flexibility.
For further information on these and other image options, see here: https://en.wikibooks.org/wiki/QEMU/Images
Although I'm using an LVM volume, I suggest you start with the raw image. Let's create a raw disk image:
Code: Select all
fallocate -l 50G /media/user/win.img
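To confirm the image was created at the requested size, check it with stat. A sketch using a small 16MB file and the hypothetical path /tmp/demo-win.img for illustration; use your real path and 50G for the actual image:

```shell
#!/bin/sh
# Create a raw image and confirm its apparent size in bytes.
img=/tmp/demo-win.img
fallocate -l 16M "$img" 2>/dev/null || truncate -s 16M "$img"  # truncate as fallback on filesystems without fallocate
stat -c '%s bytes' "$img"   # 16M = 16777216 bytes
rm -f "$img"
```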
Before we start the VM, it's best to check that we got everything:
KVM:
Code: Select all
kvm-ok
- INFO: /dev/kvm exists
KVM acceleration can be used
Code: Select all
lsmod | grep kvm
- kvm_intel 151552 0
kvm 479232 1 kvm_intel
VFIO:
Code: Select all
lsmod | grep vfio
- vfio_pci 36864 0
vfio_iommu_type1 20480 0
vfio 28672 2 vfio_iommu_type1,vfio_pci
Qemu version:
Code: Select all
qemu-system-x86_64 --version
Did vfio load and bind the graphics card?
Code: Select all
dmesg | grep vfio
- [ 2.783062] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.799068] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.935135] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)
Interrupt remapping:
Code: Select all
dmesg | grep VFIO
- [ 2.762829] VFIO - User Level meta-driver version: 0.3
If you get this message:
- vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
then run (as root):
Code: Select all
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
I've created a script that will start the Windows VM. Copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):
Code: Select all
#!/bin/bash

vmname="windows10vm"

if ps -A | grep -q "$vmname"; then
    echo "$vmname is already running."
    exit 1
else
    # use pulseaudio
    export QEMU_AUDIO_DRV=pa
    export QEMU_PA_SAMPLES=8192
    export QEMU_AUDIO_TIMER_PERIOD=99
    export QEMU_PA_SERVER=/run/user/1000/pulse/native

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd

    qemu-system-x86_64 \
      -name $vmname,process=$vmname \
      -machine type=q35,accel=kvm \
      -cpu host,kvm=off \
      -smp 4,sockets=1,cores=2,threads=2 \
      -m 4G \
      -mem-path /run/hugepages/kvm \
      -mem-prealloc \
      -balloon none \
      -rtc clock=host,base=localtime \
      -vga none \
      -nographic \
      -serial none \
      -parallel none \
      -soundhw hda \
      -usb -usbdevice host:045e:076c -usbdevice host:045e:0750 \
      -device vfio-pci,host=02:00.0,multifunction=on \
      -device vfio-pci,host=02:00.1 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=dc \
      -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
      -drive file=/home/user/ISOs/win10.iso,index=3,media=cdrom \
      -drive file=/home/user/Downloads/virtio-win-0.1.140.iso,index=4,media=cdrom \
      -netdev type=tap,id=net0,ifname=tap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

    exit 0
fi
Make the script executable:
Code: Select all
sudo chmod +x windows10vm.sh
-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and used in the script to determine if the VM is already running. Don't use win10 as process name, for some inexplicable reason it doesn't work!
-machine type=q35,accel=kvm
This specifies a machine to emulate. The command option is not required to install or run Windows, but in my case it improved SSD read and write speed. See https://wiki.archlinux.org/index.php/QE ... too_slowly. In some cases type=q35 will prevent you from installing Windows, instead you may need to use type=pc,accel=kvm. See the post by member GregoriMarow here. To see all options for type=..., enter the following command:
Code: Select all
qemu-system-x86_64 -machine help
Code: Select all
-machine type=pc,accel=kvm
-cpu host,kvm=off
This tells qemu to emulate the host's exact CPU. There are more options, but it's best to stay with "host".
The kvm=off option is needed for Nvidia graphics cards - if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify "-cpu host". Another option is to patch the Nvidia driver - see viewtopic.php?f=231&t=229122.
-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. "-smp 4" tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, which makes a total of 12 threads. You cannot assign all CPU resources to the Windows VM - the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources in the host). In the above example I gave Windows 4 virtual processors. "sockets=1" specifies the number of CPU sockets qemu should assign, "cores=2" tells qemu to assign 2 processor cores to the VM, and "threads=2" specifies 2 threads per core. It may be enough to simply specify "-smp 4", but I'm not sure about the performance consequences (if any).
If you have a 4-core Intel CPU, you can specify "-smp 6,sockets=1,cores=3,threads=2" to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games.
-enable-kvm
This enables KVM. If you do not set this option, the Windows guest will run in qemu emulation mode and run slowly. (Note: In the newer syntax, accel=kvm in the -machine option has replaced this flag - see here. The script above already specifies -machine type=q35,accel=kvm, so -enable-kvm is not needed there.)
-m 4G
The -m option assigns memory (RAM) to the VM, in this case 4GB. Same as "-m 4096". You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn't make sense to give it less than 4GB, unless you are really stretched for RAM. Make sure you reserved enough hugepages to match this!
-mem-path /run/hugepages/kvm
This tells qemu where to find the hugepages we reserved. If you haven't configured hugepages, you need to remove this option.
-mem-prealloc
Preallocates the memory we assigned to the VM.
-balloon none
We don't want memory ballooning (as far as I know Windows won't support it anyway).
-rtc clock=host,base=localtime
"-rtc clock=host" tells qemu to use the host clock for synchronization. "base=localtime" allows the Windows guest to use the local time from the host system. Another option is "utc".
-vga none
Disables the built in graphics card emulation. You can remove this option for debugging.
-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don't get to the Tiano Core screen.
-serial none
-parallel none
Disable serial and parallel interfaces.
-soundhw hda
Together with the "export QEMU_AUDIO_DRV=pa" shell command, this option enables sound through PulseAudio. If you want to pass-through a sound card to Windows, see Streaming Audio from Linux to Windows.
-usb -usbdevice host:045e:076c -usbdevice host:045e:0750
"-usb" enables USB support and "-usbdevice host:..." assigns the USB host devices mouse (045e:076c) and keyboard (045e:0750) to the guest.
-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
We pass the second graphics card 02:00.0 to the guest, using vfio-pci. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.1 in my case).
-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn't contain the variables, which are loaded separately (see right below).
-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.
-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the "d" to boot straight from HD.
-device virtio-scsi-pci,id=scsi
Load driver virtio-scsi-pci. This paravirtualized driver substantially improves disk I/O.
-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. It will be accessed as a paravirtualized (if=virtio) drive in raw format.
Important: file=/... enter the path to your previously created win.img file.
Other options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for disks or partitions.
For more information on possible drive configurations and optimizations, see viewtopic.php?f=231&t=212692&start=180#p1263665
-drive file=/home/user/ISOs/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd
This attaches the Windows win10.iso as CD or DVD. The driver used is the scsi-cd driver.
Important: file=/... enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.
-drive file=/home/user/Downloads/virtio-win-0.1.140.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd
This attaches the virtio ISO image as CD onto the ide.1 bus. The driver used is the ide-cd driver.
Important: file=/... enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note: There are many ways to attach ISO images or drives and invoke drivers. My system didn't want to take a second scsi-cd device, so this option did the job. Unless it doesn't work for you, don't change it.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.
-netdev type=tap,id=net0,ifname=tap0,vhost=on
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network and network driver.
See here for some information: http://www.linux-kvm.com/content/how-ma ... -vhost-net
For performance tests, see http://www.linux-kvm.org/page/10G_NIC_p ... _vs_virtio
For a good explanation on the qemu-system-x64 options, see https://www.suse.com/documentation/sles ... vices.html and https://wiki.archlinux.org/index.php/QEMU.
User driz shared his/her configuration in this post.
Part 8 - Install Windows
Start the VM by running the script as root:
Code: Select all
sudo ./windows10vm.sh
You should get a Tiano Core splash screen with the memory test result.
Then the Windows ISO starts to boot and asks you to:
- Press any key to start the CD / DVD...
Windows will then ask you to:
- Select the driver to install
You will be prompted again to select a driver; this time go to "vioscsi" and again select the AMD64 version for your Windows release.
Windows will ask for the license key, and you need to specify how to install - choose "Custom". Then select your drive (there should be only disk0) and install.
Windows will reboot several times. When it's done rebooting, open the Device Manager and select the network interface. Right-click it and select "Update driver", then browse to the virtio driver CD and install NetKVM.
Windows should be looking for a display driver by itself. If not, install it manually.
Note: In my case, Windows did not correctly detect my drives being SSD drives. Not only will Windows 10 perform unnecessary disk optimization tasks, but these "optimizations" can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:
1. Inside Windows 10, right-click the Start menu.
2. Select "Command prompt (admin)".
3. At the command prompt, run:
Code: Select all
winsat formal
4. To display the Windows Experience Index (WEI), press WIN+R and enter:
Code: Select all
shell:Games
To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click "This PC" in the left tab.
3. Right-click your drive (e.g. C:) and select "Properties".
4. Select the "Tools" tab.
5. Click "Optimize"
You should see something similar to this: In my case, I have drive C: (my Windows 10 system partition) and a "Recovery" partition located on an SSD; the other two partitions ("photos" and "raw_photos") are on regular hard drives (HDD). Notice the "Optimization not available".
Turn off hibernation and suspend ! Having either of them enabled can cause your Windows VM to hang, or may even affect the host. To turn them off, follow the respective instructions for disabling hibernation and suspend.
Turn off fast startup ! When you shut down the Windows VM, fast startup leaves the file system in a state that is unmountable by Linux. If something goes wrong, you're screwed. NEVER EVER let proprietary technology have control over your data. Follow these instructions to turn off fast startup.
By now you should have a working Windows VM with VGA passthrough. Please install and run Passmark 8 and post the results here: http://forums.linuxmint.com/viewtopic.p ... 5&t=153482
Part 9 - Troubleshooting
A common issue is the binding of a driver to the graphics card we want to pass through. As I was writing this how-to and made changes to my (previously working) system, I suddenly couldn't start the VM anymore. The first thing to check if you don't get a black Tianocore screen is whether or not the graphics card you try to pass through is bound to the vfio-pci driver:
Code: Select all
dmesg | grep -i vfio
[ 2.735931] VFIO - User Level meta-driver version: 0.3
[ 2.757208] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000
[ 2.773223] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000
[ 8.437128] vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)
Code: Select all
lspci -k | grep -i -A 3 vga
01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
Subsystem: NVIDIA Corporation GF106GL [Quadro 2000]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361
--
02:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_361
Graphics card 02:00.0 (GTX 970) also uses the Nvidia driver - that is NOT what I was hoping for. This card should be bound to the vfio-pci driver. So what do we do?
Click the menu button, click "Control Center", then click "Driver Manager" in the Administration section. Enter your password. You will then see the drivers associated with the graphics cards. Change the driver of the graphics card so it will use the opensource driver (in this example "Nouveau") and press "Apply Changes". After the change, it should look similar to the photo below:
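The driver-binding check above can also be scripted straight from sysfs. The sysfs layout (/sys/bus/pci/devices/ADDRESS/driver as a symlink into /sys/bus/pci/drivers/) is standard, but the helper name and the root-path parameter below are my own, added only so the logic can be exercised without real hardware:

```shell
#!/bin/bash
# Print the kernel driver currently bound to a PCI device by reading
# the "driver" symlink in sysfs; prints "none" if no driver is bound.
bound_driver() {
    local sysfs_root="$1" dev="$2"
    local link="$sysfs_root/bus/pci/devices/$dev/driver"
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo "none"
    fi
}

# On a real system, check the card you want to pass through:
# bound_driver /sys 0000:02:00.0    # should print: vfio-pci
```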
BSOD when installing AMD Crimson drivers under Windows
Several users on the Redhat VFIO mailing list have reported problems when installing AMD Crimson drivers under Windows. This seems to affect a number of AMD graphics cards, as well as several AMD Crimson driver releases. A workaround is described here: https://www.redhat.com/archives/vfio-us ... 00153.html and here: viewtopic.php?f=231&t=212692&p=1349943#p1349943
If you can't start the Windows ISO, it may be necessary to run a more recent version of Qemu to get features or work-arounds that solve problems. If you require a more updated version of Qemu (version 2.6.1 as of this writing), add the following PPA (warning: this is not an official repository - use at your own risk). At the terminal prompt, enter:
Code: Select all
sudo add-apt-repository ppa:jacob/virtualisation
If you need a newer OVMF (UEFI firmware) build than the one shipped with your distribution, you can get one here:
https://www.kraxel.org/repos/jenkins/edk2/
Download the latest edk2.git-ovmf-x64 file - in my case it was "edk2.git-ovmf-x64-0-20160914.b2137.gf8db652.noarch.rpm" for a 64-bit installation. Open the downloaded .rpm file with root privileges and unpack it to /.
Copy the following files:
Code: Select all
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd
Code: Select all
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd /usr/share/OVMF/OVMF_CODE.fd
Code: Select all
sudo cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /usr/share/OVMF/OVMF_VARS.fd
Code: Select all
sudo ln -s '/usr/share/ovmf/OVMF.fd' '/usr/share/qemu/OVMF.fd'
Some motherboard BIOSes have bugs and prevent passthrough. Use "dmesg" and look for entries like these:
Code: Select all
[ 0.297481] [Firmware Bug]: AMD-Vi: IOAPIC[7] not in IVRS table
[ 0.297485] [Firmware Bug]: AMD-Vi: IOAPIC[8] not in IVRS table
[ 0.297487] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found in IVRS table
[ 0.297490] AMD-Vi: Disabling interrupt remapping due to BIOS Bug(s)
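Checking for these entries by hand gets tedious; a small filter helps. This sketch reads from stdin so it can be fed from dmesg on a live system; the function name and the exact patterns matched are my own choices, based on the sample messages above:

```shell
#!/bin/bash
# Filter kernel messages for firmware bugs that disable interrupt
# remapping (and with it break safe passthrough). Reads from stdin.
check_firmware_bugs() {
    grep -E 'Firmware Bug|Disabling interrupt remapping'
}

# Usage on a live system:
# dmesg | check_firmware_bugs
```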
For users of Intel CPUs with IGD (Intel graphics device): The Intel i915 driver has a bug, which has necessitated a kernel patch named i915 vga arbiter patch. According to developer Alex Williamson, this patch is needed any time you have host Intel graphics and make use of the x-vga=on option. This tutorial, however, does NOT use the x-vga option; the tutorial is based on UEFI boot and doesn't use VGA. That means you do NOT need the i915 vga arbiter patch! See http://vfio.blogspot.com/2014/08/primar ... t-vga.html.
When checking the IOMMU groups, your graphics card's video and audio functions should be the only two entries in their IOMMU group. The same goes for any other PCI device you want to pass through: you must pass through all devices within an IOMMU group, or none at all. If there are other devices within the same IOMMU group that you do not wish to pass through, or devices that are used by the host, you have the following options:
a) If the only device beside the PCI device you intend to pass through is a host controller, you may be able to disregard this device. Do NOT try to pass through a host controller! Continue with the instructions in the tutorial and cross your fingers - it might work.
b) If two graphics cards or add-on PCI cards appear in the same IOMMU group, try moving one of them to another slot (which may be bound to a different IOMMU group). If that is not possible or doesn't produce the expected results, see the following options.
c) Upgrade your kernel to kernel 4.8. First, backup your entire Linux installation! Open Update Manager, then open View --> Linux kernels. Read the warning and choose Continue. Select a 4.8 kernel. Reboot after installation and check the IOMMU groups again.
d) If none of the above steps work, you will need to compile the kernel using the ACS override patch. Forum member odtech has posted detailed instructions on how to compile and use the ACS override patch here. Make sure you are running the correct kernel, the one specified with the ACS patch! For further information, see http://vfio.blogspot.co.il/2014/08/vfiovga-faq.html and http://vfio.blogspot.co.il/2014/08/iomm ... d-out.html.
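To spot the problem cases quickly, you can list just the IOMMU groups that contain more than one device - these are the groups where the all-or-nothing rule forces extra care. The layout of /sys/kernel/iommu_groups is standard; the group directory is a parameter here only so the sketch can be tested without IOMMU hardware:

```shell
#!/bin/bash
# List IOMMU groups containing more than one device. On a real
# system, call with /sys/kernel/iommu_groups as the root.
shared_groups() {
    local root="$1"
    local group count
    for group in "$root"/*/; do
        count=$(ls "$group/devices" | wc -l)
        if [ "$count" -gt 1 ]; then
            echo "group $(basename "$group"): $count devices"
        fi
    done
}

# shared_groups /sys/kernel/iommu_groups
```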
Another issue has come up with Intel Skylake CPUs. See https://lkml.org/lkml/2016/3/31/1112 for an available patch. For a possible solution, see https://teksyndicate.com/2015/09/13/wen ... -i7-6700k/. OVMF is the way to go if you want to avoid patching your kernel, ... if your GPU and guest OS support UEFI.
Problems with dual-graphics laptops:
A quote from Alex Williamson:
Dual-graphics laptops are tricky. There are no guarantees that any of this will work, but especially custom graphics cards on laptops. The discrete GPU may not be directly connected to any of the outputs, so "remoting" the graphics internally may be the only way to get to the guest desktop. It's possible that the GPU does not have a discrete ROM, instead hiding it in ACPI or elsewhere to be extracted. Some hybrid graphics laptops require custom drivers from the vendor. The more integration it has into the system, probably the less likely that it will behave like a discrete desktop GPU.
If you haven't found a solution to your problem, check the references in part 12. You are also welcome to post here in this thread so someone can jump in and help out.
Part 10 - Run Windows VM in user mode (non-root)
The following tutorial explains how to run the Windows VM in unprivileged mode: https://www.evonide.com/non-root-gpu-passthrough-setup/. I haven't tried it, though.
Part 11 - Passing more PCI devices to guest
If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. Moreover, DO NOT pass root devices to your guest. To check which PCI devices reside under the same group, use the following command:
Code: Select all
find /sys/kernel/iommu_groups/ -type l
Code: Select all
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.2
/sys/kernel/iommu_groups/4/devices/0000:00:05.4
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/7/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.1
/sys/kernel/iommu_groups/11/devices/0000:00:1c.2
/sys/kernel/iommu_groups/12/devices/0000:00:1c.3
/sys/kernel/iommu_groups/13/devices/0000:00:1c.4
/sys/kernel/iommu_groups/14/devices/0000:00:1c.7
/sys/kernel/iommu_groups/15/devices/0000:00:1d.0
/sys/kernel/iommu_groups/16/devices/0000:00:1e.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.2
/sys/kernel/iommu_groups/17/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:01:00.0
/sys/kernel/iommu_groups/18/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/19/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:05:00.0
/sys/kernel/iommu_groups/20/devices/0000:06:04.0
...
Code: Select all
lspci -nn | grep 00:1f.
00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)
- The ISA bridge is a standard device used by the host. You do not pass it through to a guest!
- All my drives are controlled by the host, so passing through a SATA controller would be a very bad idea!
- Do NOT pass through a host controller, such as the C600/X79 series chipset SMBus Host Controller!
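To apply the all-or-nothing rule to a specific device, you can resolve its IOMMU group from the standard iommu_group symlink in sysfs and list every sibling. Everything printed must either be passed through together or stay with the host. The sysfs root is a parameter only to make the sketch testable; on a real system use /sys:

```shell
#!/bin/bash
# Given one PCI address, print every device in the same IOMMU group.
group_siblings() {
    local sysfs_root="$1" dev="$2"
    local group
    group=$(basename "$(readlink -f "$sysfs_root/bus/pci/devices/$dev/iommu_group")")
    ls "$sysfs_root/kernel/iommu_groups/$group/devices"
}

# group_siblings /sys 0000:00:1f.0
```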
In order to pass through individual PCI devices, edit the VM startup script and insert the following code underneath the vmname=... line:
Code: Select all
configfile=/etc/vfio-pci.cfg
vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}
Then insert the following loop further down, before the qemu-system-x86_64 command (the reference script in part 12 shows its exact position):
Code: Select all
cat $configfile | while read line; do
    echo $line | grep ^# >/dev/null 2>&1 && continue
    vfiobind $line
done
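If you want to convince yourself what vfiobind does before pointing it at real hardware, the same logic can be exercised against a mock sysfs tree. The only change from the function above is that the /sys prefix has been turned into a parameter (my addition, purely for testing): the function reads the vendor/device IDs, unbinds the current driver if one is attached, then hands the ID pair to vfio-pci via new_id.

```shell
#!/bin/bash
# Same logic as the tutorial's vfiobind(), with the sysfs prefix as a
# parameter so it can be run against a mock tree without root.
vfiobind_at() {
    local sys="$1" dev="$2"
    local vendor device
    vendor=$(cat "$sys/bus/pci/devices/$dev/vendor")
    device=$(cat "$sys/bus/pci/devices/$dev/device")
    # Unbind whatever driver currently owns the device, if any.
    if [ -e "$sys/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "$sys/bus/pci/devices/$dev/driver/unbind"
    fi
    # Tell vfio-pci to claim this vendor:device ID pair.
    echo "$vendor $device" > "$sys/bus/pci/drivers/vfio-pci/new_id"
}
```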
Finally, create the configuration file /etc/vfio-pci.cfg and list the PCI bus IDs of the devices you want to pass through, one per line (lines starting with # are skipped by the loop above). Example:
Code: Select all
0000:00:1a.0
0000:08:00.0
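A malformed entry in this file makes vfiobind fail silently, so a quick sanity check is worthwhile. This helper is my own addition: it verifies that every non-comment, non-empty line is a full PCI address (domain:bus:slot.function, as in the example above):

```shell
#!/bin/bash
# Sanity-check a vfio-pci.cfg: every non-comment, non-empty line must
# be a full PCI address. Names offending lines and returns nonzero.
check_vfio_cfg() {
    local cfg="$1" ok=0
    local line
    while read -r line; do
        case "$line" in ''|\#*) continue ;; esac
        if ! echo "$line" | grep -Eq '^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-7]$'; then
            echo "bad entry: $line"
            ok=1
        fi
    done < "$cfg"
    return $ok
}

# check_vfio_cfg /etc/vfio-pci.cfg
```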
Part 12 - References
For documentation on qemu/kvm, see the following directory on your Linux machine:
Code: Select all
/usr/share/doc/qemu-system-common
https://bbs.archlinux.org/viewtopic.php?id=162768 - this gave me the inspiration - the best thread on kvm!
http://ubuntuforums.org/showthread.php?t=2266916 - Ubuntu tutorial.
https://wiki.archlinux.org/index.php/QEMU - Arch Linux documentation on QEMU - by far the best.
http://vfio.blogspot.com/2014/08/vfiovga-faq.html - VFIO developer Alex Williamson provides invaluable information and advice.
http://vfio.blogspot.com/2014/08/primar ... t-vga.html
http://www.linux-kvm.org/page/Tuning_KVM - Redhat is the key developer of kvm, their website has lots of information.
https://wiki.archlinux.org/index.php/KVM - Arch Linux KVM page.
https://wiki.archlinux.org/index.php/PC ... h_via_OVMF - PCI passthrough via OVMF tutorial for Arch Linux.
https://www.suse.com/documentation/sles ... k_kvm.html - Suse Linux documentation on KVM - good reference.
https://www.evonide.com/non-root-gpu-passthrough-setup/ - haven't tried it, but looks like a good tutorial.
https://forum.level1techs.com/t/gta-v-o ... ough/87440 - tutorial with Youtube video to go along, very useful and up-to-date, including how to apply ACS override patch.
Below is the VM startup script I use, for reference only.
Note: The script is specific for my hardware.
Code: Select all
#!/bin/bash

configfile=/etc/vfio-pci.cfg
vmname="win10vm"

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}

if ps -A | grep -q $vmname; then
    zenity --info --window-icon=info --timeout=15 --text="$vmname is already running." &
    exit 1
else
    cat $configfile | while read line; do
        echo $line | grep ^# >/dev/null 2>&1 && continue
        vfiobind $line
    done

    # use pulseaudio
    export QEMU_AUDIO_DRV=pa
    export QEMU_PA_SAMPLES=8192
    export QEMU_AUDIO_TIMER_PERIOD=99
    export QEMU_PA_SERVER=/run/user/1000/pulse/native

    # use ALSA
    #export QEMU_ALSA_DAC_BUFFER_SIZE=512
    #export QEMU_ALSA_DAC_PERIOD_SIZE=170
    #export QEMU_AUDIO_DRV=alsa

    cp /usr/share/OVMF/OVMF_VARS.fd /tmp/my_vars.fd
    chown kvm:kvm /tmp/my_vars.fd

    taskset -c 0-9 qemu-system-x86_64 \
      -serial none \
      -parallel none \
      -nodefaults \
      -nodefconfig \
      -name $vmname,process=$vmname \
      -machine q35,accel=kvm,kernel_irqchip=on,mem-merge=off \
      -cpu host,kvm=off,hv_vendor_id=1234567890ab,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
      -smp 10,sockets=1,cores=5,threads=2 \
      -m 20G \
      -mem-path /run/hugepages/kvm \
      -mem-prealloc \
      -balloon none \
      -rtc base=localtime,clock=host \
      -vga none \
      -nographic \
      -soundhw hda \
      -device vfio-pci,host=02:00.0,multifunction=on \
      -device vfio-pci,host=02:00.1 \
      -device vfio-pci,host=00:1a.0 \
      -device vfio-pci,host=08:00.0 \
      -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
      -boot order=c \
      -drive id=disk0,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/lm13-win10 \
      -drive id=disk1,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/photos-photo_stripe \
      -drive id=disk2,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/media-photo_raw \
      -netdev type=tap,id=net0,ifname=vmtap0,vhost=on \
      -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:0e:01
    exit 0
fi
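The script backs the guest's memory (-m 20G) with hugepages (-mem-path /run/hugepages/kvm). With the common 2 MB hugepage size, the number of pages you need to reserve for the host can be computed as follows - a small sketch with a hypothetical helper name; check Hugepagesize in /proc/meminfo for your system's actual page size:

```shell
#!/bin/bash
# Compute how many hugepages back a guest of a given size.
# guest_mb: VM memory in MB; page_kb: hugepage size in kB
# (2048 for the common 2 MB pages). Rounds up.
hugepages_needed() {
    local guest_mb="$1" page_kb="$2"
    echo $(( (guest_mb * 1024 + page_kb - 1) / page_kb ))
}

# 20 GB guest, 2 MB pages:
hugepages_needed 20480 2048    # prints 10240
```

The result is the minimum value for the vm.nr_hugepages sysctl (reserve a few extra pages for overhead).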