Just some more comments on hardware and software issues I encountered while finalising my hardware setup.
HDMI sound stops working for no apparent reason when the Intel IOMMU is turned on. There is a workaround, but it breaks PCI passthrough for the GPU.
This only affects setups that use Intel Integrated Graphics with audio via HDMI on the host.
https://bugs.launchpad.net/ubuntu/+sour ... ug/1428121
Passing through Killer network adapters causes instability.
- 05:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 13)
This regularly causes the host and guest to freeze. The only way to recover is to reboot the system.
- 04:00.0 Network controller: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter (rev 20)
This causes a 0x000001d4 BSOD in the Windows guest. It's a driver issue which is temporarily fixed by installing an updated driver, but Windows 10 seems to overwrite it after an update and the issue returns.
The solution was to just remove it from my VM startup script.
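For anyone wondering, "removing it" simply means dropping (or commenting out) the corresponding vfio-pci line in the qemu startup script; in my case it was a line along these lines (the PCI address is the one from the lspci output above):

    # disabled - passing this wireless adapter through caused the BSOD
    # -device vfio-pci,host=04:00.0 \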
I've read about problems with the Intel Integrated Graphics Device (IGD) and the IOMMU, but using the OVMF (UEFI) method was supposed to take care of that. The reason I stuck with the OVMF method was the reports about needing to patch the kernel with the i915 VGA arbiter patch
together with the i915.enable_hd_vgaarb=1 option in GRUB. In the words of Alex Williamson (I believe he wrote the patch):
the i915 vga arbiter patch is necessary any time you have host Intel graphics and make use of the x-vga=on option
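For anyone who does go the non-OVMF route with the patched kernel: the option is just a kernel parameter, so it would go into /etc/default/grub roughly like this (illustrative only, other options omitted), followed by update-grub and a reboot:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on i915.enable_hd_vgaarb=1"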
OK, so I take it that HDMI sound doesn't work for the Intel Integrated Graphics Device (IGD) running on the host. BUT, this might be related to your running a patched 4.4 kernel with the ACS patch. I also wonder if this bug has been addressed by a newer kernel?
About passing through network adapters: Back in the old days when I ran a Xen hypervisor I did some benchmarks. Eventually I got to a point where network speed was no issue anymore (I also optimized the SAMBA config file to improve the file transfer speed, see here). I haven't done any network speed benchmarks using KVM, something to look into.
From practical experience, the way I'm running my Windows VM now I do not see a need to pass through a network card. The bridge and the tap interface that I created seem to work well and I haven't noticed any performance issues yet. There are some configuration and performance tuning tips on the Internet, so if you encounter performance issues with a bridged network, you may want to try them first. See also http://www.linux-kvm.org/page/10G_NIC_p ... _vs_virtio, https://www.ibm.com/support/knowledgece ... ostnet.htm and http://blog.vmsplice.net/2011/09/qemu-i ... cture.html
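For completeness, the tap-based network in my qemu startup script boils down to something like the following two options (interface name and MAC address are examples):

    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \

The virtio-net-pci device requires the virtio network driver inside the Windows guest, but it is considerably faster than the emulated e1000 NIC.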
In an earlier post you explained how you required UEFI boot to pass through your SATA controller. I hadn't commented on that since I can see some scenarios where that would make sense, if only for some improved disk performance. Besides, sharing your experiments here gives us valuable information, things I or many others wouldn't have discovered. So what I am writing below should NOT discourage you in your quests. It is simply my own perspective and/or experience:
1. Passing through a SATA controller: When passing through the SATA controller, the best performance you can hope for is "bare metal" performance. While some virtio VM configurations will give you a performance penalty, it is often so small that it won't matter in practice, as long as you don't use the default caching method "writethrough", which really cripples write performance. For an in-depth benchmark, see http://jrs-s.net/2013/05/17/kvm-io-benchmarking/. The conclusion of this benchmark is:
... a ZFS zvol with ext4, qcow2 files, and writeback caching offers you the absolute best performance.
It's not the only benchmark around. I've done my own benchmarks using Passmark, which are documented here.
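To illustrate the caching option: it is set per drive in the qemu command line, so a qcow2 image with writeback caching would look roughly like this (the file path is an example):

    -drive file=/var/lib/libvirt/images/win10.qcow2,if=virtio,format=qcow2,cache=writeback \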
Not discussed above is the iothread option (formerly x-data-plane), see https://www.reddit.com/r/VFIO/comments/ ... _settings/. See also http://www.linux-kvm.org/page/Virtio/Block/Latency, http://blog.vmsplice.net/2013/03/new-in ... irtio.html and http://www.slideshare.net/pradeepkumars ... or-threads
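If you want to experiment with it, the iothread is created as a qemu object and attached to a virtio-blk device, something like this (identifiers and path are examples):

    -object iothread,id=iothread0 \
    -drive file=/var/lib/libvirt/images/win10.qcow2,if=none,id=drive0,format=qcow2,cache=writeback \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0 \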
There are many KVM options to tweak, let alone decisions about the underlying partitions/file systems (raw, zfs, LVM, xfs, ...) and whether or not to use qcow2 (preallocated or not?), whereas PCI passthrough seems straightforward. But PCI passthrough has its drawbacks, the main one being that you give up the flexibility of letting Linux manage the storage. qcow2, for instance, is easy to handle and to back up. I use LVM and a backup script to create a snapshot of the Windows VM, compress it and save it in a tgz file. There are a number of advantages to letting Linux handle the storage. But I'm sure you have your reasons why you chose to let Windows handle its storage media.
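In a nutshell, the backup idea looks like this (volume and path names are placeholders, and my actual script does a bit more housekeeping before packing everything into the tgz):

    # take a snapshot of the logical volume holding the Windows VM
    lvcreate --snapshot --size 5G --name win-snap /dev/vg0/win10
    # copy the frozen snapshot into a compressed image
    dd if=/dev/vg0/win-snap bs=4M | gzip -c > /backup/win10-$(date +%F).img.gz
    # drop the snapshot once the copy is done
    lvremove -f /dev/vg0/win-snap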
2. Passing through a network controller: This requires you to physically connect the NIC to an Ethernet switch or a router. Unless you've got a real beast of a switch/router, the virtual bridge created with brctl and configured in the qemu startup script (see the sketch below) will most likely be faster.
I do not see Windows suitable for running network or server tasks that would require a separate NIC. However, passing NICs to a Linux guest is common practice. Possible applications are nearly endless: NAS, media streamer, firewall, demilitarized zone (DMZ), honeypots, servers of all kinds, you name it. But unless you are running your machine in an enterprise environment, I can't really see the need for NIC passthrough. I'm open to learn.
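For reference, the host side of the bridge is only a handful of commands, roughly as follows (interface names are examples; note that the host's IP configuration moves from the physical NIC to the bridge):

    brctl addbr br0          # create the bridge
    brctl addif br0 eth0     # attach the physical NIC to it
    ip tuntap add dev tap0 mode tap user myuser   # tap device for the VM
    brctl addif br0 tap0     # attach the tap to the bridge
    ip link set br0 up
    ip link set tap0 up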
3. Passing through a USB controller: I actually use PCI passthrough to pass through 2 USB host controllers, a USB 2 and a USB 3 controller. My board has several USB controllers and I have a separate SATA/USB3 PCI card so I got plenty of USB ports. When running Windows I use a USB2 port for mouse and keyboard, a USB3 port to hook up an external HDD, another USB2 port for a dedicated photo printer, and often a USB3 card reader to read the memory cards from my cameras. Here I need speed, as these cards often hold several GByte worth of photos, and who wants to wait?
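Finding a controller to pass through is straightforward; the addresses below are examples from a hypothetical lspci output, not my actual ones:

    lspci | grep USB         # identify the USB host controllers

and then in the qemu startup script:

    -device vfio-pci,host=00:14.0 \
    -device vfio-pci,host=03:00.0 \

Just make sure the controller sits in its own IOMMU group, and that the host keeps at least one controller for its own mouse and keyboard.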
I wanted to list these points as I am sure that many users will not need to pass through any other devices aside from the VGA card.