HOW-TO make dual-boot obsolete using kvm VGA passthrough

LittleJoey
Level 1
Posts: 4
Joined: Fri Aug 12, 2016 11:52 pm

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by LittleJoey » Fri Aug 12, 2016 11:59 pm

First let me thank everyone for doing such an awesome job on this guide, it's amazing! However, I'm completely stumped at this point and hope that someone will be able to provide a nudge in the right direction as I am new to KVM.

As in an older post, when I start the VM I see the TianoCore screen, but rather than a memory test, it goes to an iPXE network boot. I've posted my script below, and I think it looks right, but I cannot for the life of me get it to go anywhere other than the iPXE screen. After some time it does move on to a UEFI interactive shell, but messing around in there doesn't seem to accomplish anything either.

Code: Select all

qemu-system-x86_64 \
  -name $vmname,process=$vmname \
  -cpu host,kvm=off \
  -smp 6,sockets=1,cores=3,threads=2 \
  -enable-kvm \
  -m 8G \
  -mem-path /run/hugepages/kvm \
  -mem-prealloc \
  -balloon none \
  -rtc clock=host,base=localtime \
  -vga none \
  -nographic \
  -serial none \
  -parallel none \
  -soundhw hda \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1 \ 
  -device vfio-pci,host=00:1d.0 \ 
  -boot order=dc \ 
  -drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -device virtio-scsi-pci,id=scsi \
  -drive id=disk0,if=virtio,cache=none,format=raw,file=/mnt/data/win.img \
  -drive file=/mnt/data/win8.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
  -drive file=/mnt/data/virtio-win-0.1.118.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
  
Inside the UEFI mapping table (BLK#: Alias(s)), it shows 3 SCSI devices (2 CD-ROMs) and a PCI controller, so the UEFI shell sees all the correct drives but just isn't recognizing the Windows ISO as a bootable image.

Update: Tried the alternative OVMF files - still having the same result.

Update 2: My Windows install ISOs were corrupted. I downloaded a fresh ISO and it booted up just fine.
Last edited by LittleJoey on Mon Aug 15, 2016 6:16 pm, edited 1 time in total.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Sun Aug 14, 2016 5:29 pm

@LittleJoey:

EDIT: Remove the last "\" in the script!

Couldn't find anything except the missing machine type option:

Code: Select all

-machine type=q35,accel=kvm \
You should add that and try again, though qemu is supposed to use a default if not specified.

Make sure your path names are correct, and that you copied the OVMF file to the right place.

The Win 8 ISO image must be UEFI bootable. I never tried Win 8 so I can't help much on that. In theory it should support UEFI, unlike Windows 7.

If you can't get UEFI boot to work, you can always try the SeaBIOS way. See my links in the tutorial.

Ah, of course your graphics card must support UEFI and your CPU must support VT-d (Intel) or AMD-V / IOMMU (AMD). Without these features it won't work.
Asus Sabertooth X79, i7 3930K CPU, 8x4GB Kingston DDR3, Noctua NH-D14 CPU cooler, GTX 970 + Quadro 2000 GPU, Asus Xonar Essence STX, Sandisk 120GB + Samsung EVO 860 1TB SSD + 4 HDD, Corsair 500R, SeaSonic 660W Gold X PS https://heiko-sieger.info

LittleJoey
Level 1
Posts: 4
Joined: Fri Aug 12, 2016 11:52 pm

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by LittleJoey » Sun Aug 14, 2016 9:23 pm

Thanks for the advice, @powerhouse.

Big Edit:
Wow I feel like a dummy... I went to work, downloaded 8.1 on their network, brought it home, tried that iso and guess what: it works just fine... So, yeah, when in doubt, verify your boot media I suppose would be the lesson here today, although I did learn a LOT more than I did 2 days ago after going through your tutorial the first time.

Thanks again!!!

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Mon Aug 15, 2016 9:20 pm

LittleJoey wrote:Thanks for the advice, @powerhouse.

Big Edit:
Wow I feel like a dummy... I went to work, downloaded 8.1 on their network, brought it home, tried that iso and guess what: it works just fine... So, yeah, when in doubt, verify your boot media I suppose would be the lesson here today, although I did learn a LOT more than I did 2 days ago after going through your tutorial the first time.

Thanks again!!!
Great news!

Could you do me a favour and run the following benchmark: viewtopic.php?f=231&t=197754. Please publish it in that thread.

Thanks!

Ludo_Kressh
Level 1
Posts: 7
Joined: Sat Jul 30, 2016 3:07 am

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by Ludo_Kressh » Tue Aug 16, 2016 12:04 am

Thanks for sharing your solution. I think it would have worked with the script in my tutorial, once the syntax error is fixed. Could you test that?

I tried it today without the syntax error and with /etc/modules modified so that the vfio driver loads first, and it works perfectly.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Tue Aug 16, 2016 11:19 am

Ludo_Kressh wrote:
Thanks for sharing your solution. I think it would have worked with the script in my tutorial, once the syntax error is fixed. Could you test that?

I tried it today without the syntax error and with /etc/modules modified so that the vfio driver loads first, and it works perfectly.
How much damage a little "\" can do - unbelievable! Thanks for testing and reporting back.

Care to share some benchmarks? See two posts above for a link. I'm trying to establish a list of benchmarks for comparison; this might be helpful for identifying problems or performance issues.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Mon Aug 22, 2016 9:23 am

Please note there is a problem with Intel Skylake processors that may require a workaround. See http://www.serverphorums.com/read.php?12,1453397 and https://ubuntuforums.org/showthread.php?t=2329053 (here you find a patch that can be used with Ubuntu 16.04, which should work with Linux Mint 18).

daedra86
Level 1
Posts: 1
Joined: Thu Aug 25, 2016 6:14 am

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by daedra86 » Thu Aug 25, 2016 6:21 am

Greetings!
My system: Ubuntu 16.04.
I use a similar configuration with PCI passthrough. HOTS and WoW are working fine.
The problem is with USB passthrough.
Through virt-manager I add a USB mouse and a USB keyboard and run the virtual machine, but the VM does not see them.
Then I pull them out of the USB port, re-insert them, add the USB mouse and keyboard through virt-manager again, and now the VM sees them.
After turning off the VM, it all repeats over again.
I do not know to whom to address this problem.
3 months ago this was not a problem.
Thank you.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Thu Aug 25, 2016 12:21 pm

daedra86 wrote:Greetings!
My system: Ubuntu 16.04.
I use a similar configuration with PCI passthrough. HOTS and WoW are working fine.
The problem is with USB passthrough.
Through virt-manager I add a USB mouse and a USB keyboard and run the virtual machine, but the VM does not see them.
Then I pull them out of the USB port, re-insert them, add the USB mouse and keyboard through virt-manager again, and now the VM sees them.
After turning off the VM, it all repeats over again.
I do not know to whom to address this problem.
3 months ago this was not a problem.
Thank you.
I'm not using virt-manager and the few times I tried virt-manager it created more problems than it actually solved. I know that some other tutorials use virt-manager, then hack the xml file to get things working.

In my PC I pass through a USB controller (I have several USB controllers) using PCI passthrough, much the same as VGA passthrough. After that all the USB ports on that controller/hub are directly accessible by Windows (I need multiple USB ports in Windows, so having a dedicated controller is worth it).
My mouse and keyboard are connected to a KVM switch (Keyboard, Video, Mouse switch) which is connected to the passed through USB port of the Windows VM as well as to a USB port controlled by Linux (there are two USB cables from the switch to the PC). KVM switches are made to control two or more PCs with one set of keyboard/mouse/screen - exactly what I need.

If you do not want to invest in additional hardware, you still have several options:

1. Use the -usb option in the qemu command such as:

Code: Select all

-usb -usbdevice host:045e:076c -usbdevice host:045e:0750
See part 3 and part 7 of my tutorial.

2. Use Synergy (see part 3 of my tutorial).

I have no idea how virt-manager accomplishes things - the documentation I found was lacking, and as a result I gave up on virt-manager altogether. I take it from your report that things haven't improved. Sorry, I can't help with virt-manager. But option 2 above may provide a good solution even if you use virt-manager: install Synergy in both Linux and Windows, configure it, then disable USB passthrough in virt-manager. Synergy uses the network (IP) connection between the VM and the Linux host to talk to the Synergy server, which controls the mouse movement. For more info, see http://symless.com/synergy/.

P.S.: I just noticed that Synergy has become a commercial project. If it still works as it did, I think the $10 spent is a good investment.

EDIT: I assume you followed this tutorial when using virt-manager to assign your host device to the guest: http://www.linux-kvm.org/page/USB_Host_ ... d_to_Guest

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Sun Aug 28, 2016 4:13 am

Nvidia card owners, please pay attention: Nvidia drivers have a "bug" that "accidentally" tests for the presence of a hypervisor. If a hypervisor is present, the driver will quit with Error 43 or blue screen.

This doesn't happen with the Nvidia Quadro 2000 and higher professional graphics cards (which cost a small fortune). A workaround is to disable the hypervisor extensions, which is what I have done in my qemu startup script:

Code: Select all

-cpu host,kvm=off
The kvm=off option comes at a certain performance cost. (See https://patchwork.ozlabs.org/patch/355005/.)

So far I couldn't really quantify this performance cost. When I ran a Xen hypervisor with an Nvidia Quadro 2000 GPU (one that is supported in a VM) and Windows 7, the system felt snappier. I ran a number of benchmarks, and while some indeed showed better performance with the Xen setup, others favored my current KVM setup.

My current hardware, however, differs in a few important components such as the graphics card (Quadro 2000/Xen versus GTX 970/KVM) and the SSD (Sandisk Extreme 120GB/Xen versus Samsung EVO 850 250GB/KVM) as well as the OS (Windows 7 Pro 64/Xen versus Windows 10 Pro 64/KVM) which prevent me from directly comparing benchmarks.

The good news is that Alex (I believe) has written a Windows-side patch to fix the Nvidia bug :-).

First of all, the following Arch Linux page is an excellent source of information on KVM VGA passthrough: https://wiki.archlinux.org/index.php/PC ... h_via_OVMF. See https://wiki.archlinux.org/index.php/PC ... indows_VMs for specifics on the Nvidia Error 43 issue.

The Windows side patch can be downloaded here: https://github.com/sk1080/nvidia-kvm-patcher.

The author provides a warning about test-signing drivers, so please pay special attention as this can open a security hole in your Windows 10 VM.

I haven't tried this patch yet. First I need to backup my Windows VM !!! Once I get to try it, I will report here.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Mon Aug 29, 2016 1:12 pm

This is the continuation of my previous post above.

After Microsoft fixed the hash mismatch issue, I'm downloading the WDK 10 package. Cross my fingers...

Update: It works!

However, I was somewhat hoping that the performance difference would be quantifiable, which I cannot confirm. It does seem to have some influence on performance, but other factors are more dramatic.

Here is my new qemu command script:

Code: Select all

qemu-system-x86_64 \
  -serial none \
  -parallel none \
  -nodefaults \
  -nodefconfig \
  -enable-kvm \
  -name $vmname,process=$vmname \
  -machine q35,accel=kvm,kernel_irqchip=on,mem-merge=off \
  -cpu host,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff \
  -smp 10,sockets=1,cores=5,threads=2 \
  -m 20G \
  -mem-path /run/hugepages/kvm \
  -mem-prealloc \
  -balloon none \
  -rtc base=localtime,clock=host \
  -vga none \
  -nographic \
  -soundhw hda \
  -device vfio-pci,host=02:00.0,multifunction=on \
  -device vfio-pci,host=02:00.1 \
  -device vfio-pci,host=00:1a.0 \
  -device vfio-pci,host=08:00.0 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -boot order=c \
  -drive id=disk0,if=virtio,cache=none,format=raw,aio=native,file=/dev/mapper/lm13-win10 \
  -netdev type=tap,id=net0,ifname=tap0,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
The above script is NOT for installing Windows in the VM, but rather to boot an already installed Windows 10 VM. I will write the exact steps to modify the Nvidia driver in Windows in a separate post.

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Sun Sep 04, 2016 9:26 am

I've written a little how-to on how to patch the Nvidia driver in Windows 10. See viewtopic.php?f=231&t=229122.

All credit should go to sk1080 who wrote the script and provided the instructions. Great work !!!

powerhouse
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Thu Sep 15, 2016 5:25 am

For reference: This is the first version of my how-to, written for Linux Mint 17.3.


This tutorial is about running Windows inside a virtual machine (VM) with near native performance. If you want to switch from Windows to Linux, but like to play an occasional game on Windows, run Adobe Photoshop or Lightroom, or some other Windows applications, read on.


Part 1 - Hardware Requirements

Your PC hardware must support IOMMU; in Intel jargon it's called "VT-d". AMD calls it variously "AMD Virtualization", "AMD-V", or "Secure Virtual Machine" (SVM); the generic term "IOMMU" is also used. If you plan to purchase a new PC/CPU, check the following websites for more information:
  1. Wikipedia - https://en.wikipedia.org/wiki/List_of_I ... g_hardware
If you want to check whether your current CPU / motherboard supports IOMMU, do the following:

1. Reboot your PC and enter the BIOS setup menu (usually you press F2, DEL, or similar during boot to enter the BIOS setup).

2. Search for IOMMU, VT-d, or "virtualization technology for directed IO" or whatever it may be called on your system. Turn on IOMMU.

3. Save and Exit BIOS and boot into Linux.

4. Once booted into Linux, open a terminal window and check the following:

On AMD machines use:

Code: Select all

dmesg | grep AMD-Vi
The output should be similar to this:
...
AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi: Initialized for Passthrough Mode
...

On Intel machines use:

Code: Select all

dmesg | grep -e DMAR -e IOMMU
The output should be similar to this:
  • ...
    [ 0.000000] Intel-IOMMU: enabled
    [ 0.064548] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0462 ecap f020fe
    [ 0.064642] IOAPIC id 0 under DRHD base 0xfbffc000 IOMMU 0
    ...
If you do not get this output, then VT-d or AMD-V is not working - you need to fix that before you continue! Most likely it means that your hardware (CPU) doesn't support IOMMU, in which case there is no point continuing this tutorial.

In addition to a CPU and motherboard that supports IOMMU, you need two graphics processors (GPU):

1. One GPU for your Linux host (the OS you are currently running, I hope);

2. One GPU (graphics card) for your Windows guest.

Yes, performance comes at a cost, literally! We are building a system that runs two operating systems at the same time. Many resources like disk space, memory, etc. can be switched back and forth between the host and the guest as needed. Unfortunately the GPU cannot be switched or shared between the two OSes (work is being done to overcome this shortcoming).

If, like me, you use Linux for the everyday stuff such as emails, web browsing, documents, etc., and Windows for gaming, photo or video editing, then you'll have to give Windows a more powerful GPU, while Linux will run happily with an inexpensive GPU, or the integrated graphics processor (IGP). However please note that some newer Intel IGP may pose problems. If you have one of those and want to use it, see http://vfio.blogspot.co.il/2014/08/vfiovga-faq.html, question 3 about the i915 VGA arbiter patch (i915 is the Intel Linux driver for their IGPs).

In this tutorial I use UEFI to boot the Windows VM. That means that the graphics card you are going to use for the Windows guest must support UEFI (most newer cards do). To check whether it does, go to https://www.kraxel.org/repos/jenkins/edk2/ and download the latest edk2.git-tools... package. Open it, navigate to ./usr/bin/ inside the archive, and extract the EfiRom binary.

Open a root terminal (or use sudo -i), then:

Code: Select all

lspci | grep VGA
to get the PCI bus of the graphics card you want to pass through (see Part 3 for more information). Then enter:

Code: Select all

cd /sys/bus/pci/devices/0000:02:00.0/
Replace "02:00.0" with the PCI bus address you found above.

Now enter:

Code: Select all

echo 1 > rom

Code: Select all

cat rom > /tmp/image.rom

Code: Select all

echo 0 > rom
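For convenience, the three steps above can be wrapped into one small function. This is only a sketch: the sysfs path and output file are examples, and it must be run as root.

```shell
# Sketch: dump a graphics card's ROM via sysfs. $1 is the device's sysfs
# directory (e.g. /sys/bus/pci/devices/0000:02:00.0 - use your own bus
# address), $2 is the output file.
dump_rom() {
  echo 1 > "$1/rom"       # make the ROM readable
  cat "$1/rom" > "$2"     # copy the ROM image out
  echo 0 > "$1/rom"       # lock it again
}
# usage (as root): dump_rom /sys/bus/pci/devices/0000:02:00.0 /tmp/image.rom
```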
Go to the directory where you extracted the EfiRom binary, then run:

Code: Select all

./EfiRom -d /tmp/image.rom
  • Image 1 -- Offset 0x0
    ROM header contents
    Signature 0xAA55
    PCIR offset 0x01A0
    Signature PCIR
    Vendor ID 0x10DE
    Device ID 0x13C2
    ...
    Code type 0x03 (EFI image)
    EFI ROM header contents
    EFI Signature 0x0EF1

    Compression Type 0x0001 (compressed)
    Machine type 0x8664 (unknown)
    Subsystem 0x000B (EFI boot service driver)
    EFI image offset 0x0050 (@0xF450)
Above is the output for my Nvidia GTX 970. Yes, it has a UEFI BIOS, so I'm good to go.
Thanks to Alex Williamson! For (much) more information, see his blog here: http://vfio.blogspot.com.au/2014/08/doe ... t-efi.html.


Part 2 - Installing Qemu / KVM

If the graphics card you want to use with Windows is an Nvidia card (with the exception of some high-end Quadro, Tesla, etc. "professional" cards), you need a newer version of Qemu than the one currently found in the Linux Mint 17.3 / Ubuntu 14.04 repositories. To install Qemu 2.1.2 (as of this writing) you can add the following PPA (warning: this is not an official repository - use at your own risk). At the terminal prompt, enter:

Code: Select all

sudo add-apt-repository ppa:jacob/virtualisation
Now we install kvm, qemu, and some other stuff we need or want:

Code: Select all

sudo apt-get update
sudo apt-get install qemu-kvm seabios qemu-utils hugepages
(For those who wonder: "kvm" stands for "kernel virtual machine", "qemu" stands for "quick emulator".)


Part 3 - Determining the Devices to Pass Through to Windows

Now we need to find the PCI ID(s) of the graphics card and perhaps other devices we want to pass through to the Windows VM. Normally the IGP (the GPU inside the processor) will be used for Linux, and the discrete graphics card for the Windows guest. My CPU does not have an integrated GPU, so I use two graphics cards. Here is my hardware setup:

* GPU for Linux: Nvidia Quadro 2000 residing in the first PCIe graphics card slot.
* GPU for Windows: Nvidia GTX 970 residing in the second PCIe graphics card slot.

To determine the PCI IDs, enter:

Code: Select all

lspci | grep VGA
Here is my system output:
  • 01:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
    02:00.0 VGA compatible controller: NVIDIA Corporation Device 13c2 (rev a1)
The first card under 01:00.0 is the Quadro 2000 I want to use for the Linux host. The other card under 02:00.0 I want to pass to Windows.

Modern graphics cards usually come with an on-board audio controller, which we need to pass through as well. To find its ID, enter:

Code: Select all

lspci -nn | grep 02:00.
Substitute "02:00." with the bus number of the graphics card you wish to pass to Windows, without the trailing "0". Here is my system output:
  • 02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:13c2] (rev a1)
    02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbb] (rev a1)
Write down the bus numbers (02:00.0 and 02:00.1 above), as well as the PCI IDs (10de:13c2 and 10de:0fbb in the example above).
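One more check worth doing here: devices can only be passed through at the granularity of IOMMU groups, so every device sharing a group with your graphics card has to go to the guest as well. Here is a sketch for listing the groups from sysfs (the base directory is a parameter only so the function can be exercised anywhere; on a real system call it without arguments):

```shell
# List every PCI device per IOMMU group (standard sysfs layout).
list_iommu_groups() {
  base=${1:-/sys/kernel/iommu_groups}
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue      # skip if no groups exist
    g=${dev#"$base"/}
    g=${g%%/*}
    echo "IOMMU group $g: $(basename "$dev")"
  done
}
list_iommu_groups
```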

Next step is to find the mouse and keyboard (USB devices) that we want to assign to the Windows guest. Remember, we are going to run 2 independent operating systems side by side, and we control them via mouse and keyboard.

*********************************************************************************************************************************************************************************
Side note about keyboard and mouse:
Depending whether and how much control you want to have over each system, there are different approaches:
1. Get a USB-KVM (Keyboard/VGA/Mouse) switch. This is a small hardware device with usually 2 USB ports for keyboard and mouse, as well as VGA or (the more expensive) DVI or HDMI graphics outputs. In addition, the USB-KVM switch has two USB cables and 2 VGA/DVI/HDMI cables to connect to two different PCs. Since we run 2 virtual PCs on one single system, this is a viable solution.
Advantages:
- Works without special software in the OS, just the usual mouse and keyboard drivers;
- Best in performance - no software overhead.
Disadvantages:
- Requires extra (though inexpensive) hardware;
- More cable clutter and another box with cables on your desk;
- Requires you to press a button to switch between host and guest and vice versa;
- Need to pass through a USB port or controller - see below on IOMMU groups.

2. Without spending a nickel you can simply pass through your mouse and keyboard when the VM starts.
Advantages:
- Easy to implement;
- No money to invest;
- Good solution for setting up Windows and preparing it for solution no. 3.
Disadvantages:
- Once the guest starts, your mouse and keyboard only control that guest, not the host. You will have to plug them into another port to gain access to the host.

3. Synergy (http://symless.com/synergy/) is a commercial software solution that, once installed and configured, allows you to interact with two PCs or virtual machines.
Advantages:
- Most versatile solution, especially with dual screens;
- Software only, easy to configure;
- No hardware purchase required.
Disadvantages:
- Requires the installation of software on both the host and the guest;
- Doesn't work during Windows installation (see option 2);
- Costs $10 for a Basic, lifetime license;
- May produce lag, although I doubt you'll notice unless there is something wrong with the bridge configuration.

4. "Multi-device" bluetooth keyboard/mouse that can connect to two different devices and switch between them at the press of a button (see for example here):
Advantages:
- Most convenient solution;
- Theoretically same performance as option 1.
Disadvantages:
- Price;
- I haven't tested it, though other users report success with some such devices. Make sure the device supports Linux, or that you can return it if it doesn't!

I chose option 1 for robustness and universality, but both Synergy and option 4 are very, very tempting!

*********************************************************************************************************************************************************************************

For the VM installation we choose option 2 (see side note), that is we pass our keyboard and mouse through to the Windows VM. To do so, we need to identify their USB ID:

Code: Select all

lsusb
Here is my system output (truncated):
  • ...
    Bus 010 Device 006: ID 045e:076c Microsoft Corp. Comfort Mouse 4500
    Bus 010 Device 005: ID 045e:0750 Microsoft Corp. Wired Keyboard 600
    ...
Note down the IDs: 045e:076c and 045e:0750 in my case.
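These IDs are what you will later hand to qemu. With the example IDs above, the relevant options would look like this (substitute your own IDs):

```
-usb -usbdevice host:045e:076c -usbdevice host:045e:0750
```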


Part 4 - Prepare for Passthrough

If you have two identical graphics cards, see the following post: viewtopic.php?f=231&t=212692&start=40#p1173262

With the PCI IDs in hand (or on paper), edit the /etc/default/grub file (you need root permission to do so). Here is mine before the edit:
  • GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=10
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT_STYLE=countdown
    GRUB_TIMEOUT=0
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
    GRUB_CMDLINE_LINUX=""
The entry we are looking at is "GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"". Depending on your hardware, add the following to this line:

Intel CPU:

Code: Select all

intel_iommu=on
AMD CPU:

Code: Select all

amd_iommu=on
Nvidia graphics card for Windows VM:

Code: Select all

nouveau.modeset=0
Note: This stops the Nouveau driver grabbing the card before the pci-stub driver is assigned (see below). If you run two Nvidia cards and use the open Nouveau driver for your Linux host, DON'T blacklist the driver!!! Chances are the tutorial will work since the pci-stub driver should grab the graphics card before nouveau takes control of it.

AMD graphics card for Windows VM:

Code: Select all

radeon.modeset=0
See note above under Nvidia graphics card... the same goes for the radeon driver.

In order to make the graphics card available to the Windows VM, we will assign a "dummy" driver as a placeholder: pci-stub. To do so, we add the following to the GRUB_CMDLINE_LINUX_DEFAULT= line:

Code: Select all

pci-stub.ids=10de:13c2,10de:0fbb
Replace the PCI IDs above with the ones you wrote down!

My /etc/default/grub file now looks like this:
  • GRUB_DEFAULT=0
    #GRUB_HIDDEN_TIMEOUT=10
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT_STYLE=countdown
    GRUB_TIMEOUT=0
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="nouveau.modeset=0 quiet nomodeset intel_iommu=on pci-stub.ids=10de:13c2,10de:0fbb"
    GRUB_CMDLINE_LINUX=""
We need to run update-grub:

Code: Select all

sudo update-grub
To load the pci-stub module at boot, edit the /etc/initramfs-tools/modules file and add:

Code: Select all

pci-stub
To update the initramfs, enter at the command line:

Code: Select all

sudo update-initramfs -u
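After the next reboot you can verify that pci-stub, and not nouveau or radeon, actually claimed the guest card, either with lspci -nnk -s 02:00.0 or by reading sysfs directly. A sketch of the sysfs route (the second parameter exists only so the function can be tested anywhere; normally omit it):

```shell
# Print the kernel driver bound to a PCI device, or "none".
bound_driver() {
  dev=${2:-/sys/bus/pci/devices}/$1
  if [ -L "$dev/driver" ]; then
    basename "$(readlink "$dev/driver")"
  else
    echo none
  fi
}
# usage: bound_driver 0000:02:00.0   # should print pci-stub after setup
```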
Next we add the following drivers to our /etc/modules file:

Code: Select all

vfio
vfio_iommu_type1
vfio_pci
vhost-net
Some applications like Passmark require the following option. Note that "sudo echo ... >> file" would fail, because the redirection is performed by your non-root shell - use tee instead:

Code: Select all

echo "options kvm ignore_msrs=1" | sudo tee -a /etc/modprobe.d/kvm.conf

Part 5 - Set up hugepages

This step is not required to run the Windows VM, but helps improve performance. The first thing we need to do is decide how much memory to give to Windows. Here are my suggestions:

1. No less than 4GB.
2. If you got 16GB total and aren't running multiple VMs, give Windows 8GB-12GB, depending on what you plan to do with Windows.

For this tutorial I use 8GB. Let's see what we got:

Code: Select all

hugeadm --explain
If you get
  • hugeadm:ERROR: No hugetlbfs mount points found
then you need to enable hugepages. To do so, edit /etc/default/qemu-kvm as root and add or uncomment:

Code: Select all

KVM_HUGEPAGES=1
Reboot!

After the reboot, in a terminal window, enter again:

Code: Select all

hugeadm --explain
  • Total System Memory: 32180 MB

    Mount Point Options
    /run/hugepages/kvm rw,relatime,mode=775,gid=126

    Huge page pools:
    Size Minimum Current Maximum Default
    2097152 0 0 0 *
    ...
As you can see, hugepages are now mounted to /run/hugepages/kvm, and the hugepage size is 2097152 Bytes/(1024*1024)=2MB.

Another way to determine the hugepage size is:

Code: Select all

grep "Hugepagesize:" /proc/meminfo
  • Hugepagesize: 2048 kB
Here is some math:
We want to reserve 8GB for Windows:
8GB = 8x1024MB = 8192MB

Our hugepage size is 2MB, so we need to reserve:
8192MB/2MB = 4096 hugepages

We need to add some extra space for overhead (some say 2%, some say 10%); to be on the safe side I use 4500 (if memory is scarce, you should be OK with 4300, perhaps less - see the comments under http://www.linux-kvm.com/content/get-pe ... -hugetlbfs).
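The arithmetic above fits in a couple of shell lines (a sketch, assuming the 8GB guest and 2MB hugepages from this example, with a ~10% margin):

```shell
VM_MB=8192    # RAM reserved for the Windows guest, in MB
HP_MB=2       # hugepage size in MB (from /proc/meminfo)
PAGES=$((VM_MB / HP_MB))        # 4096 pages for 8GB
POOL=$((PAGES + PAGES / 10))    # add ~10% overhead
echo "vm.nr_hugepages = $POOL"
```

With a 10% margin this yields 4505, in line with the 4500 chosen above.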

To configure the hugepage pool, open /etc/sysctl.conf as root and add:

Code: Select all

# Set hugetables / hugepages for KVM single guest using 8GB RAM
vm.nr_hugepages = 4500
Now reboot.

After the reboot, run in a terminal:

Code: Select all

hugeadm --explain
  • Total System Memory: 32180 MB

    Mount Point Options
    /run/hugepages/kvm rw,relatime,mode=775,gid=126

    Huge page pools:
    Size Minimum Current Maximum Default
    2097152 4500 4500 4500 *

    Huge page sizes with configured pools:
    2097152

    A /proc/sys/kernel/shmmax value of 9223372036854775807 bytes may be sub-optimal. To maximise
    shared memory usage, this should be set to the size of the largest shared memory
    segment size you want to be able to use. Alternatively, set it to a size matching
    the maximum possible allocation size of all huge pages. This can be done
    automatically, using the --set-recommended-shmmax option.

    The recommended shmmax for your currently allocated huge pages is 9437184000 bytes.
    To make shmmax settings persistent, add the following line to /etc/sysctl.conf:
    kernel.shmmax = 9437184000
Note the sub-optimal shmmax value. We fix it temporarily with:

Code: Select all

sudo hugeadm --set-recommended-shmmax
And permanently by adding the following line to /etc/sysctl.conf:

Code: Select all

kernel.shmmax = 9437184000
Note: If you reserved a different number of hugepages, use the shmmax value recommended by hugeadm --explain.
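If you're wondering where hugeadm's recommended value comes from, it is simply the hugepage pool size in bytes, which you can reproduce yourself:

```shell
# recommended shmmax = (number of hugepages) x (hugepage size in bytes)
pages=4500             # vm.nr_hugepages from /etc/sysctl.conf
page_bytes=2097152     # 2MB hugepages
shmmax=$(( pages * page_bytes ))
echo $shmmax
```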


Part 6 - Download OVMF BIOS and VFIO drivers

I chose to use UEFI to boot the Windows VM. It has some advantages: it starts faster and overcomes some limitations associated with legacy boot (SeaBIOS).

First install OVMF:

Code: Select all

sudo apt-get install ovmf
This will download the current Ubuntu version and create the necessary links and directories. But that OVMF version is pretty outdated, so we get the latest and greatest from here:
https://www.kraxel.org/repos/jenkins/edk2/

Download the latest edk2.git-ovmf-x64 file; in my case it was "edk2.git-ovmf-x64-0-20151231.b1410.gb331b99.noarch.rpm" (I hope you are NOT running a 32-bit system). (NOTE: In case these OVMF files do not allow you to boot Windows (when you reach Part 8 below), you can download alternative OVMF files from here: http://www.ubuntuupdates.org/package/co ... /base/ovmf)

Open the downloaded .rpm file with root privileges and unpack it to /. Let's check that all the files are in place:

Code: Select all

ls /usr/share/edk2.git/ovmf-x64/
  • OVMF_CODE-pure-efi.fd OVMF_VARS-pure-efi.fd UefiShell.iso
    OVMF_CODE-with-csm.fd OVMF_VARS-with-csm.fd
    OVMF-pure-efi.fd OVMF-with-csm.fd
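If you prefer the command line over an archive manager for the "unpack to /" step, one way is rpm2cpio (available via apt on Ubuntu/Mint). This is a sketch; the filename is the example version mentioned above, so substitute the one you actually downloaded:

```shell
# Hypothetical path/filename -- adjust to your download.
rpm=~/Downloads/edk2.git-ovmf-x64-0-20151231.b1410.gb331b99.noarch.rpm
# rpm2cpio converts the package to a cpio archive; cpio unpacks it
# relative to the current directory, so extract from / to land the
# files under /usr/share/edk2.git/ovmf-x64/.
cd / && rpm2cpio "$rpm" | sudo cpio -idmv
```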
Copy OVMF-pure-efi.fd to /usr/share/ovmf:

Code: Select all

sudo cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/OVMF.fd
Now download the VFIO driver ISO to be used with the Windows installation from https://fedoraproject.org/wiki/Windows_Virtio_Drivers. Below are the direct links to the ISO images:

Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/vi ... io-win.iso

Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/vi ... io-win.iso

I chose the latest drivers.


Part 7 - Prepare disk and create startup script

We need some disk space on which to install the Windows VM. There are several choices:

1. Create a raw image file.
Advantages:
- Easy to implement;
- Flexible - the file can grow with your requirements;
- Snapshots;
- Easy migration;
- Good performance.
Disadvantages:
- Takes up the entire space you specify;
- Not sure how to access the file from the Linux host. (Note: I haven't tried this yet!)

2. Create a dedicated LVM volume for the Windows VM.
Advantages:
- Familiar technology (at least to me);
- Bare-metal performance (well, as close as it gets);
- Flexible - you can add physical drives to increase the volume size;
- Snapshots;
- Mountable within Linux host using kpartx.
Disadvantages:
- Takes up the entire space specified;
- Not sure about migration.

3. Pass through a PCI SATA controller / disk.
Advantages:
- Best performance, using original Windows drivers;
- Allows the use of Windows virtual drive features;
- Possibility to boot Windows directly, i.e. not as VM (haven't tried it).
Disadvantages:
- Host has no access to disk while VM is running;
- Requires dedicated SATA controller and drive(s) (all members of the IOMMU group need to be passed through);
- Offers less flexibility.

For further information on these and other image options, see here: https://en.wikibooks.org/wiki/QEMU/Images

Although I'm using an LVM volume, I suggest you start with the raw image. Let's create a raw disk image:

Code: Select all

fallocate -l 50G /media/user/win.img
Note: Adjust size and path to match your needs and actual resources.
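If fallocate fails with an "Operation not supported" error (not every filesystem implements it), a sparse file created with truncate is a workable substitute. A sketch, with the path shortened for illustration:

```shell
# truncate creates a file with an apparent size of 50G that consumes
# no disk blocks until the guest actually writes to it.
img=win.img                  # e.g. /media/user/win.img on the real system
truncate -s 50G "$img"
stat -c 'apparent size: %s bytes, allocated blocks: %b' "$img"
```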


Before we start the VM, it's best to check that we got everything:

KVM:

Code: Select all

kvm-ok
  • INFO: /dev/kvm exists
    KVM acceleration can be used
KVM module:

Code: Select all

lsmod | grep kvm
  • kvm_intel 151552 0
    kvm 479232 1 kvm_intel
Above is the output for the Intel module.

VFIO:

Code: Select all

lsmod | grep vfio
  • vfio_pci 36864 0
    vfio_iommu_type1 20480 0
    vfio 28672 2 vfio_iommu_type1,vfio_pci
QEMU:

Code: Select all

qemu-system-x86_64 --version
If you use an Nvidia graphics card for your Windows VM, you need QEMU emulator version 2.1 or newer (see above).

Did pci-stub load and bind the graphics card?

Code: Select all

dmesg | grep pci-stub
  • [ 3.576866] pci-stub: add 10DE:13C2 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 3.576876] pci-stub 0000:02:00.0: claimed by stub
    [ 3.576883] pci-stub: add 10DE:0FBB sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
    [ 3.576888] pci-stub 0000:02:00.1: claimed by stub
It worked!

Interrupt remapping:

Code: Select all

dmesg | grep VFIO
  • [ 5.204769] VFIO - User Level meta-driver version: 0.3
All good!

If you get this message:
  • vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
enter the following command in a root terminal (or use sudo -i):

Code: Select all

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf
In this case you need to reboot once more.

Create the file /etc/vfio-pci.cfg and enter the PCI bus numbers of the devices you wish to pass through to Windows, one per line, with no blank lines! In my example the content is:

Code: Select all

0000:02:00.0
0000:02:00.1
Don't forget the leading "0000:".
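One way to create the file without accidentally introducing blank lines is a here-document. A sketch, using my example bus IDs; substitute your own from lspci:

```shell
# Write the config locally, then move it into place as root.
cat > vfio-pci.cfg <<'EOF'
0000:02:00.0
0000:02:00.1
EOF
# sudo mv vfio-pci.cfg /etc/vfio-pci.cfg
```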

We are almost done.

I've modified/created a script that will bind the graphics card to the vfio-pci driver and start the Windows VM. Copy the script below and save it as windows10vm.sh (or whatever name you like, just keep the .sh extension):

Code: Select all

#!/bin/bash

configfile=/etc/vfio-pci.cfg
vmname="windows10vm"

vfiobind() {
	dev="$1"
	vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
	device=$(cat /sys/bus/pci/devices/$dev/device)
	if [ -e /sys/bus/pci/devices/$dev/driver ]; then
		echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
	fi
	echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
}


if ps -A | grep -q $vmname; then
	echo "$vmname is already running." &
	exit 1

else

	while read -r line; do
		echo "$line" | grep -q '^#' && continue
		vfiobind "$line"
	done < "$configfile"

# use pulseaudio
export QEMU_AUDIO_DRV=pa

cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd

qemu-system-x86_64 \
  -name $vmname,process=$vmname \
  -machine type=q35,accel=kvm \
  -cpu host,kvm=off \
  -smp 4,sockets=1,cores=2,threads=2 \
  -enable-kvm \
  -m 4G \
  -mem-path /run/hugepages/kvm \
  -mem-prealloc \
  -balloon none \
  -rtc clock=host,base=localtime \
  -vga none \
  -nographic \
  -serial none \
  -parallel none \
  -soundhw hda \
  -usb -usbdevice host:045e:076c -usbdevice host:045e:0750 \
  -device vfio-pci,host=02:00.0,multifunction=on \
  -device vfio-pci,host=02:00.1 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -boot order=dc \
  -device virtio-scsi-pci,id=scsi \
  -drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img \
  -drive file=/home/user/ISOs/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
  -drive file=/home/user/Downloads/virtio-win-0.1.112.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
  -netdev type=tap,id=net0,ifname=tap0,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01

	exit 0
fi
Make the file executable:

Code: Select all

sudo chmod +x windows10vm.sh
You need to edit the file and change the settings and paths to match your CPU and configuration. See below for explanations of the qemu-system-x86_64 options:

-name $vmname,process=$vmname
Name and process name of the VM. The process name is displayed when using ps -A to show all processes, and used in the script to determine if the VM is already running. Don't use win10 as process name, for some inexplicable reason it doesn't work!

-machine type=q35,accel=kvm
This specifies a machine to emulate. The command option is not required to install or run Windows, but in my case it improved SSD read and write speed. See https://wiki.archlinux.org/index.php/QE ... too_slowly

-cpu host,kvm=off
This tells qemu to emulate the host's exact CPU. There are more options, but it's best to stay with "host".
The kvm=off option is needed for Nvidia graphics cards - if you have an AMD/Radeon card for your Windows guest, you can remove that option and specify "-cpu host". Another option is to patch the Nvidia driver :D - see viewtopic.php?f=231&t=229122.

-smp 4,sockets=1,cores=2,threads=2
This specifies multiprocessing. "-smp 4" tells the system to use 4 (virtual) processors. My CPU has 6 cores, each supporting 2 threads, for a total of 12 threads. You cannot assign all CPU resources to the Windows VM - the host also needs some resources (remember that some of the processing and I/O coming from the guest takes up CPU resources on the host). In the above example I gave Windows 4 virtual processors. "sockets=1" specifies the number of CPU sockets qemu should assign, "cores=2" tells qemu to assign 2 processor cores to the VM, and "threads=2" specifies 2 threads per core. It may be enough to simply specify "-smp 4", but I'm not sure about the performance consequences (if any).
If you have a 4-core Intel CPU, you can specify "-smp 6,sockets=1,cores=3,threads=2" to assign 75% of your CPU resources to the Windows VM. This should usually be enough even for demanding games.

-enable-kvm
Important: This enables KVM. If you do not set this option, the Windows guest will run in qemu emulation mode and run slowly. Whatever happens, don't remove this line.

-m 4G
The -m option assigns memory (RAM) to the VM, in this case 4GByte. Same as "-m 4096". You can increase or decrease it, depending on your resources and needs. With modern Windows releases it doesn't make sense to give it less than 4G, unless you are really stretched with RAM.

-mem-path /run/hugepages/kvm
This tells qemu where to find the hugepages we reserved. If you haven't configured hugepages, you need to remove this option.

-mem-prealloc
Preallocates the memory we assigned to the VM.

-balloon none
We don't want memory ballooning (as far as I know Windows won't support it anyway).

-rtc clock=host,base=localtime
"-rtc clock=host" tells qemu to use the host clock for synchronization. "base=localtime" allows the Windows guest to use the local time from the host system. Another option is "utc".

-vga none
Disables the built-in graphics card emulation. You can remove this option for debugging.

-nographic
Totally disables SDL graphical output. For debugging purposes, remove this option if you don't get to the Tiano Core screen.

-serial none
-parallel none

Disable serial and parallel interfaces.

-soundhw hda
Together with the "export QEMU_AUDIO_DRV=pa" shell command, this option enables sound through PulseAudio.

-usb -usbdevice host:045e:076c -usbdevice host:045e:0750
"-usb" enables USB support and "-usbdevice host:..." assigns the USB host devices mouse (045e:076c) and keyboard (045e:0750) to the guest.
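To find the vendor:product IDs for your own mouse and keyboard, list the USB devices on the host (lsusb comes with the usbutils package); the IDs are the xxxx:yyyy pairs after "ID":

```shell
# Each line shows "Bus ... Device ...: ID vendor:product description".
# Pick the IDs of your keyboard and mouse for the -usbdevice options.
lsusb
```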

-device vfio-pci,host=02:00.0,multifunction=on
-device vfio-pci,host=02:00.1
We pass the second graphics card 02:00.0 to the guest, using vfio-pci. It is a multifunction device (graphics and sound). Make sure to pass through both the video and the sound part (02:00.1 in my case).

-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd
Specifies the location and format of the bootable OVMF UEFI file. This file doesn't contain the variables, which are loaded separately (see right below).

-drive if=pflash,format=raw,file=/tmp/my_vars.fd
These are the variables for the UEFI boot file, which were copied by the script to /tmp/my_vars.fd.

-boot order=dc
Start boot from CD (d), then first hard disk (c). After installation of Windows you can remove the "d" to boot straight from HD.

-device virtio-scsi-pci,id=scsi
Load driver virtio-scsi-pci. This paravirtualized driver substantially improves disk I/O.

-drive id=disk0,if=virtio,cache=none,format=raw,file=/media/user/win.img
Defines the first hard disk. It will be accessed as a paravirtualized (if=virtio) drive in raw format.
Important: file=/... enter the path to your previously created win.img file.
Other options are file=/dev/mapper/group-vol for LVM volumes, or file=/dev/sdx1 for disks or partitions.

-drive file=/home/user/ISOs/win10.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd
This attaches the Windows win10.iso as CD or DVD. The driver used is the scsi-cd driver.
Important: file=/... enter the path to your Windows ISO image.
Note: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-drive file=/home/user/Downloads/virtio-win-0.1.112.iso,id=virtiocd,format=raw,if=none -device ide-cd,bus=ide.1,drive=virtiocd
This attaches the virtio ISO image as CD onto the ide.1 bus. The driver used is the ide-cd driver.
Important: file=/... enter the path to your virtio ISO image. If you downloaded it to the default location, it should be in your Downloads directory.
Note: There are many ways to attach ISO images or drives and invoke drivers. My system didn't want to take a second scsi-cd device, so this option did the job. Unless this doesn't work for you, don't change it.
Note 2: This option is only needed during installation. Afterwards, copy the line to the end of the file and comment it out with #.

-netdev type=tap,id=net0,ifname=tap0,vhost=on
-device virtio-net-pci,netdev=net0,mac=00:16:3e:00:01:01
Defines the network and network driver.
See here for some information: http://www.linux-kvm.com/content/how-ma ... -vhost-net
For performance tests, see http://www.linux-kvm.org/page/10G_NIC_p ... _vs_virtio

For a good explanation of the qemu-system-x86_64 options, see https://www.suse.com/documentation/sles ... vices.html and https://wiki.archlinux.org/index.php/QEMU.


Part 8 - Install Windows

Start the VM by running the script as root:

Code: Select all

sudo ./windows10vm.sh
(Make sure you specify the correct path.)

You should get a Tiano Core splash screen with the memory test result.

Then the Windows ISO starts to boot and asks you to:
  • Press any key to start the CD / DVD...
Press a key!

Windows will then ask you to:
  • Select the driver to install
Click "Browse", then select your VFIO ISO image and go to "viostor", open and select your Windows version, then select the "AMD64" version, click OK.

You will be prompted again to select a driver, now go to "vioscsi" and select again the AMD64 version for your Windows release.

Windows will ask for the license key, and you need to specify how to install - choose "Custom". Then select your drive (there should be only disk0) and install.

Windows will reboot several times. When it's done rebooting, open Device Manager and select the network interface. Right-click it, select update, then browse to the VFIO CD and install the NetKVM driver.

Windows should be looking for a display driver by itself. If not, install it manually.

Note: In my case, Windows did not correctly detect my drives being SSD drives. Not only will Windows 10 perform unnecessary disk optimization tasks, but these "optimizations" can actually lead to reduced SSD life and performance issues. To make Windows 10 determine the correct disk drive type, do the following:

1. Inside Windows 10, right-click the Start menu.
2. Select "Command prompt (admin)".
3. At the command prompt, run:

Code: Select all

winsat formal
4. It will run a while and then print the Windows Experience Index (WEI).
5. To display the WEI, press WIN+R and enter:

Code: Select all

shell:Games
You get the following screen:
shell_Games.png
Please publish your WEI here.

To check that Windows correctly identified your SSD:
1. Open Explorer
2. Click "This PC" in the left tab.
3. Right-click your drive (e.g. C:) and select "Properties".
4. Select the "Tools" tab.
5. Click "Optimize"
You should see something similar to this:
optimize_drives.png
In my case, I have drive C: (my Windows 10 system partition) and a "Recovery" partition located on an SSD, the other two partitions ("photos" and "raw_photos") are using regular hard drives (HDD). Notice the "Optimization not available" :D .


By now you should have a working Windows VM with VGA passthrough. Please install and run Passmark 8 and post the results here: http://forums.linuxmint.com/viewtopic.p ... 5&t=153482


Part 9 - To-do

Run VM in user mode !!! (non-root)


Part 10 - Passing more PCI devices to guest

If you wish to pass additional PCI devices through to your Windows guest, you must make sure that you pass through all PCI devices residing under the same IOMMU group. To check this, use the following command:

Code: Select all

find /sys/kernel/iommu_groups/ -type l
The output on my system is:

Code: Select all

/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.0
/sys/kernel/iommu_groups/4/devices/0000:00:05.2
/sys/kernel/iommu_groups/4/devices/0000:00:05.4
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:16.0
/sys/kernel/iommu_groups/7/devices/0000:00:19.0
/sys/kernel/iommu_groups/8/devices/0000:00:1a.0
/sys/kernel/iommu_groups/9/devices/0000:00:1c.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.1
/sys/kernel/iommu_groups/11/devices/0000:00:1c.2
/sys/kernel/iommu_groups/12/devices/0000:00:1c.3
/sys/kernel/iommu_groups/13/devices/0000:00:1c.4
/sys/kernel/iommu_groups/14/devices/0000:00:1c.7
/sys/kernel/iommu_groups/15/devices/0000:00:1d.0
/sys/kernel/iommu_groups/16/devices/0000:00:1e.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.0
/sys/kernel/iommu_groups/17/devices/0000:00:1f.2
/sys/kernel/iommu_groups/17/devices/0000:00:1f.3
/sys/kernel/iommu_groups/18/devices/0000:01:00.0
/sys/kernel/iommu_groups/18/devices/0000:01:00.1
/sys/kernel/iommu_groups/19/devices/0000:02:00.0
/sys/kernel/iommu_groups/19/devices/0000:02:00.1
/sys/kernel/iommu_groups/20/devices/0000:05:00.0
/sys/kernel/iommu_groups/20/devices/0000:06:04.0
...
As you can see in the above list, some IOMMU groups contain multiple devices on the PCI bus. I wanted to see which devices are in IOMMU group 17 and used the PCI bus ID:

Code: Select all

lspci -nn | grep 00:1f.
Here is what I got:
  • 00:1f.0 ISA bridge [0601]: Intel Corporation C600/X79 series chipset LPC Controller [8086:1d41] (rev 05)
    00:1f.2 SATA controller [0106]: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller [8086:1d02] (rev 05)
    00:1f.3 SMBus [0c05]: Intel Corporation C600/X79 series chipset SMBus Host Controller [8086:1d22] (rev 05)
As you can see, it would have been a bad idea to pass any of them through to the guest, as they are used by my Linux host.
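Instead of checking groups one by one with lspci, a small loop can print every IOMMU group together with human-readable device descriptions in one go (a sketch; needs lspci from the pciutils package, and an empty listing means the IOMMU is disabled or unsupported):

```shell
#!/bin/bash
# List each IOMMU group and describe every device in it via lspci.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        echo -n "  "
        lspci -nns "${dev##*/}"
    done
done
```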


Part 11 - References

https://bbs.archlinux.org/viewtopic.php?id=162768 - this gave me the inspiration - the best thread on kvm!
http://ubuntuforums.org/showthread.php?t=2266916 - Ubuntu tutorial.
https://wiki.archlinux.org/index.php/QEMU - Arch Linux documentation on QEMU - by far the best.
http://vfio.blogspot.co.il/2014/08/vfiovga-faq.html - one of the developers, Alex provides invaluable information and advice.
http://www.linux-kvm.org/page/Tuning_KVM - Redhat is the key developer of kvm, their website has lots of information.
https://wiki.archlinux.org/index.php/KVM - Arch Linux KVM page.
https://wiki.archlinux.org/index.php/PC ... h_via_OVMF - PCI passthrough via OVMF tutorial for Arch Linux.
https://www.suse.com/documentation/sles ... k_kvm.html - Suse Linux documentation on KVM - good reference.
Asus Sabertooth X79, i7 3930K CPU, 8x4GB Kingston DDR3, Noctua NH-D14 CPU cooler, GTX 970 + Quadro 2000 GPU, Asus Xonar Essence STX, Sandisk 120GB + Samsung EVO 860 1TB SSD + 4 HDD, Corsair 500R, SeaSonic 660W Gold X PS https://heiko-sieger.info

nathantreid
Level 1
Level 1
Posts: 3
Joined: Sat Sep 17, 2016 4:32 pm

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by nathantreid » Sat Sep 17, 2016 4:56 pm

I've been working through this tutorial using Ubuntu 16.04, and I can't get vfio-pci to load.

If anyone has ideas, please let me know - I'm just bashing my head against the wall at this point. :)
Thanks!

Updated line in grub file:

Code: Select all

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on nomodeset nouveau.modeset=0"
Here's what I get when I check vfio:

Code: Select all

root@mr-host:~# lsmod | grep vfio
vfio_virqfd            16384  0
vfio_iommu_type1       20480  0
vfio                   28672  1 vfio_iommu_type1

root@mr-host:~# dmesg | grep -i vfio
[    2.413765] VFIO - User Level meta-driver version: 0.3
Contents of /etc/initramfs-tools/modules:

Code: Select all

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net
Contents of /etc/modprobe.d/local.conf:

Code: Select all

install vfio-pci /sbin/vfio-pci-override-vga.sh
Contents of /sbin/vfio-pci-override-vga.sh:

Code: Select all

#!/bin/sh

DEVS="0000:09:00.0 0000:09:00.1"

for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci
Output from lspci -nnk:

Code: Select all

09:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b81] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3302]
        Kernel modules: nvidiafb, nouveau
09:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f0] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3302]
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
Asrock X99 WS-E, i7 6950X, 4x16GB Crucial DDR4, MSI GTX1070, Corsair RM650 PSU

powerhouse
Level 6
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Sat Sep 17, 2016 10:16 pm

Have you read my revised instructions (made some changes /additions yesterday, particularly also the startup script)?

It seems that the vfio-pci driver doesn't bind to your graphics card when you boot. Check my new troubleshooting section - I had a similar issue and described how to solve it.

Please also post your CPU model. Have you checked that it supports IOMMU?

nathantreid
Level 1
Level 1
Posts: 3
Joined: Sat Sep 17, 2016 4:32 pm

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by nathantreid » Sat Sep 17, 2016 10:25 pm

Thanks - I got it working; unfortunately I'm not quite sure what fixed it. The 3 changes I made were to recreate local.conf and vfio-pci-override-vga.sh, and to run

Code: Select all

chmod 755 /sbin/vfio-pci-override-vga.sh
. Previously I had used

Code: Select all

chmod +x /sbin/vfio-pci-override-vga.sh
This time after updating the initramfs and rebooting, vfio-pci loaded and I can now start the vm!

powerhouse
Level 6
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Sun Sep 18, 2016 11:50 pm

Good news!

nathantreid
Level 1
Level 1
Posts: 3
Joined: Sat Sep 17, 2016 4:32 pm

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by nathantreid » Tue Sep 20, 2016 4:14 pm

Thank you so much for this guide - it's the number one thing responsible for me getting a working setup.
My VM is working really well now, however I went through a bunch of testing while trying to figure out the performance issues I was having, and I'm now using virsh.
My original VM works great now too, but I've decided I prefer virsh given that:
  1. A lot of the tutorials and help I came across were virsh specific
  2. Using virsh I can still use qemu command line options
  3. Virsh makes it easy to exercise additional control over your VMs (restart, shutdown, send keystrokes, etc.)
Some things I found out while trying to fix horrible performance (wildly varying CPU benchmarks, 38%-70% of bare metal):
  • Be sure that your CPU cores are set to performance mode on the host! This was the number one cause of poor performance for me.
    The single thread benchmark is around 2100 (baremetal) for my cpu. In the VM, I was getting scores between 800 and 1500. After switching from powersave to performance mode, I got a stable score around 1680.
  • Enable Hyper-V extensions! Adding these boosted me to a stable score of 1950. That's less than a 10% drop from bare metal, and I couldn't find anything else to tune, so I decided to be happy with that.
I have an Nvidia graphics card, so to make them work with the Hyper-V extensions, I had to patch the drivers and disable driver signing. This comes with the annoyance of having to press F8 then 7 every time I restart in order to disable the driver signature enforcement. It's even more annoying because my main keyboard doesn't work until Windows boots, so I had to plug in a second keyboard just to press 2 keys. :x
Thankfully, virsh makes it easy to automate this (I'm sure there's a way if you aren't using virsh, I just haven't looked into it). I'm going to put my process here, in case someone finds it useful:
  1. Download latest Nvidia drivers and patch them: https://github.com/sk1080/nvidia-kvm-patcher
  2. Enable the F8 boot menu: bcdedit /set {current} bootmenupolicy Legacy
  3. Update your VM startup script to press the F8 and 7 keys for you
    Add the following after the command that starts your VM.

    Code: Select all

      # disable windows driver signature enforcement
      sleep 10
      virsh send-key windows10 KEY_F8
      sleep 1
      virsh send-key windows10 KEY_7
    

powerhouse
Level 6
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Fri Sep 23, 2016 6:28 am

@nathantreid: Thanks for your update and the input about virsh. I honestly admit that I haven't tried virsh, at least not for performance tuning.

About the Nvidia card: I've written a how-to to show how Nvidia can fix their bug - see viewtopic.php?f=231&t=229122.

Once you enable test mode, you do not need the keystrokes anymore, at least I don't.

Instead of patching the Nvidia driver, there is a command line option for the qemu command:

-cpu ... ,hv_vendor_id=12345678

If I'm not mistaken, this should work with the Linux Mint 18 / Ubuntu 16.04 Kernel, but I haven't had the time to try.

powerhouse
Level 6
Level 6
Posts: 1084
Joined: Thu May 03, 2012 3:54 am
Location: Israel
Contact:

Re: HOW-TO make dual-boot obsolete using kvm VGA passthrough

Post by powerhouse » Fri Sep 23, 2016 2:41 pm

OK, I've replaced the patched Nvidia 372.54 driver with an unpatched 372.90 driver. I had to change the -cpu option in my qemu startup command to this:

Code: Select all

-cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=whatever123 
The UserBenchmark results can be seen here: viewtopic.php?f=231&t=197754#p1218867

I still need to check if the "hv_vendor_id=whatever123" makes any difference, and whether or not the hypervisor extensions are actually enabled when using the kvm=off switch. Others have posted the above command line options, but I've also read that "kvm=off" turns off the hypervisor extensions. Unfortunately I haven't found a way to test for the hypervisor extensions being used.

Using

Code: Select all

coreinfo -v
in my Windows 10 VM, I get:
  • Coreinfo v3.31 - Dump information on system CPU and memory topology
    Copyright (C) 2008-2014 Mark Russinovich
    Sysinternals - www.sysinternals.com

    Note: Coreinfo must be executed on a system without a hypervisor running for
    accurate results.

    Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
    Intel64 Family 6 Model 45 Stepping 7, GenuineIntel
    Microcode signature: 00000001
    HYPERVISOR * Hypervisor is present
    VMX * Supports Intel hardware-assisted virtualization
    EPT * Supports Intel extended page tables (SLAT)
Not sure what to make of that, but it seems a hypervisor is detected.

@nathantreid: Would you mind posting benchmarks? See my link above to the UserBenchmark thread.

I found that using the "performance" governor instead of the "powersave" governor adds some 500 points in the Passmark benchmark, that is from ~4,000 to 4,500. I still use the powersave governor as this is good enough for my purposes.

Do you mind sharing your virsh configuration?
