powerhouse wrote:I never tried suspending dom0 and made sure it won't suspend. To me it doesn't make sense to suspend dom0. How would dom0 know what's going on in a domU, even if it was possible to suspend? To me it's good enough that Xen applies the frequency scaling for the CPU and power states for PCI devices to keep power consumption at a minimum.
Hmm... for some reason I thought the config file had something like "on_suspend", but I was mistaken (you can suspend or save DomUs after all). I was in the habit of suspending my machine so it would be cooler/quieter at night (it's in my bedroom) and didn't want to close the running apps. Just being lazy again. Like I said, I can live with shutting down my machine; I just need to get out of the habit. I'm honestly not as worried about the power consumption as I should be, but heat is not your friend in the desert. So forgive me for not adhering to what you think makes sense.
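For what it's worth, saving and restoring a DomU's state can be done from dom0 with the xl toolstack. A sketch (the domain name and checkpoint path here are placeholders, not anything from my actual setup):

```
# Save the guest's memory state to a checkpoint file and stop the domain.
# "mydomu" and the path are placeholders - substitute your own.
xl save mydomu /var/lib/xen/save/mydomu.chk

# Later, bring the guest back up where it left off.
xl restore /var/lib/xen/save/mydomu.chk
```

So the running apps survive even though the host itself isn't suspended; whether that plays nicely with a given guest (especially Windows HVM guests) probably depends on the PV drivers.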
The article you mentioned has been criticized by some - including me. You are probably right to question the bias of the person who did the testing and wrote the article. In another Phoronix test they were able to show some problem with Xen, but when a Xen developer wrote to Phoronix and suggested applying the available patches to fix the problems, the Phoronix people (Michael?) didn't even bother to reply.
That doesn't strike me as proper journalism, but that's assuming Phoronix is run by proper journalists rather than enthusiasts/hackers. I actually don't know the answer to that. That said, I have noticed that the Xen developers are awfully quick to suggest patch X.Y.Z. In addition, it seems like they have a hard time getting their patches applied to the kernel. I'm not sure if there's politics involved here or what, but I get the feeling that there's friction for reasons unbeknownst to me. As an end user I'm not too thrilled about having to patch things to get them working, but sometimes you have to get your hands dirty to get things to work.
I disagree with comparing HVM with HVM (Xen versus KVM) when running Linux guests. When benchmarking different solutions, one needs to make sure to choose the best possible option. Under Xen this would probably be a PV guest. In any case, the article doesn't go into details as to how they tested Xen. They may have used PV-HVM drivers, but it sure doesn't look like they used a PV guest. So I really don't know what that test is good for, except for confusing people or promoting RedHat (KVM is largely sponsored by RedHat, and their commercial virtualization product for enterprise customers has KVM at its heart).
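To make the PV-versus-HVM distinction concrete: a PV guest under Xen is defined by a config roughly like the following. This is a minimal sketch with placeholder names and paths, not a tested configuration:

```
# Minimal Xen PV guest config (all values are placeholders)
name    = "pv-linux"
kernel  = "/boot/vmlinuz-xen"        # PV-capable kernel, supplied by dom0
ramdisk = "/boot/initrd-xen"
memory  = 1024
vcpus   = 2
disk    = [ "phy:/dev/vg0/pvguest,xvda,w" ]
vif     = [ "bridge=xenbr0" ]
root    = "/dev/xvda ro"
```

The point is that the guest kernel is handed to it by dom0 and is paravirtualization-aware from boot, whereas an HVM guest boots its own unmodified kernel inside emulated hardware - which is also why Windows can only run as HVM.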
lol. Well, KVM does seem to be the youngest child and Xen the red-headed stepchild. In any case, perhaps they should have tested Xen in both modes then? Again, it's not comparing apples to apples, but at least you'd be contrasting the performance differences of the modes. What I detest is when the benchmarks themselves favor a vendor/solution - for example, when binaries are compiled to take advantage of vendor-specific features. Whenever I see a site comparing media encoding times, I have to wonder whether they are only taking advantage of Intel multimedia CPU features, etc. Well, putting aside its accuracy, the test is meaningful to the topic of this discussion, as Windows guests can't run in PV mode.
Now, if I remember correctly, Xen is used by Amazon EC2, Rackspace, and a host of other cloud service providers. I am quite sure they keep an eye on performance issues, as this means big money for them. KVM is the newcomer and still has to prove itself, which may prevent the big guys from switching camps at this stage. What I am trying to say is that I don't know whether Xen or KVM performs better, and under which circumstances, but the two or three Phoronix benchmark comparisons I've seen seem to contribute more to the confusion.
Yeah, openSUSE actually includes kernel versions for Amazon EC2. lol. I don't disagree with you. These guys keep an eye on the performance prize. I was suggesting that they wouldn't be the ones swayed by two or three Phoronix benchmarks. If they are... well... that certainly doesn't bode well for the future of the "cloud". God, I hate that marketing term. "Cloud" > /dev/null.
On the practical side, I've read about significant performance differences between Xen and KVM from someone who tried both and found KVM to be roughly 10x as fast as Xen for kernel compilations, yet others had performance issues with KVM (performance variations). To me that only shows that one needs to pay close attention to the Xen or KVM configuration - perhaps that Xen is a bit more finicky about its setup, given its many options. I believe my Passmark results and the WEI I posted earlier in the thread show clearly what Xen is capable of. If you look at the Passmark I/O results for the different storage types (SSD, striped LVM on 2 HDDs, regular LVM spanning 2 HDDs), you will see that the results are very close to bare-metal performance.
Yup, stick the "your experience may vary" sticker on the whole thing and call it good. I guess this is a landmine I shouldn't have brought up.
At any rate, I do plan on confirming just how close to bare metal I've been able to get.