Don't mention "performance" in a conversation about virtualization. It is not trendy, it is archaic, it is part of the folklore. At least if you believe the gullible press and analysts. Some of them seem to be in an awful hurry to repeat the vendors' claims.
 
We give you a small anthology of these "inspired" articles and discuss why they are naive.
 
HyperLove by ZDNet
 "We loved the near-native performance of Windows guests in Hyper-V", Jason Perlow ZDNet, Juli 1st 2008. Benchmark numbers to back this up? Not necessary.
 
Juicy detail: the same article claims:
"Back in February, we had a look at a late beta release, and we were quite impressed with the performance of the system".
 
Performance-wise, Hyper-V was pretty bad at that time. When we tested back in April 2008 with RC0, performance was in many cases less than half that of ESX 3.5. But we could not publish this: it was only RC0, after all, and Microsoft admitted it had to improve performance in the final version. Don't take our word for it; here is PCWorld:

"The most notable -- and the most significant -- change between the initial release candidate version (RC0) of Hyper-V and the RTM edition is better performance. Most of the performance work was done between RC0 and RC1, but not many people knew about it due to (a) not-so-wide a release and (b) a ban on performance testing by MS."
 
So although Microsoft banned all performance testing of RC0 and made large performance improvements afterwards, some people still managed to be impressed by "the performance of Hyper-V" in a version even older than RC0?
 
Virtualization Tax myths and folklore
The Kusnetzky Group LLC feels that VMware's benchmarking is proof that hypervisor performance overhead is a myth (July 22nd, 2008):
 
"Virtual machine overhead has been part of the folklore for quite some time. I’ve heard it described as the "virtualization tax” by those opposed to the use of this technology"
 
"In my view, successfully put it [Performance overhead] to bed".
 
The good old days when the press was supposed to make critical comments are over. Apparently, if you believe in virtualization overhead, you are either opposed to virtualization or a figure of folklore.
 
The Devil is in the details 
The performance figures of ESX 3.5 are without a doubt pretty impressive. We'll show you more in a few weeks. But simply assuming that (for example) "applications with high I/O aren't good virtualization candidates" is a performance myth, just because VMware says so and shows some benchmark numbers, is gullible.
 
The reality is that getting good performance out of virtualized high-I/O applications is possible, but it is pretty hard. If the "near native performance" claim were right and virtualization overhead were just a myth, why:
  • Would AMD be touting their NPT/RVI technology? 
  • Would Intel be talking about a second generation of hardware accelerated virtualization (EPT in Nehalem)?
  • Would PCI SIG be hard at work on an I/O virtualization architecture?
Why do you think we measured a performance increase ranging from 7 to 30% with Nested Paging? The current virtualization solutions run many applications with little performance overhead, but I/O-intensive applications are still not among them.
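If you are wondering whether your own CPUs offer these features, you can check the feature flags the kernel reports. Below is a minimal Python sketch, assuming a Linux machine with a kernel recent enough to expose the "ept" and "npt" flags in /proc/cpuinfo; the exact flag names are a kernel convention, not something we verified on every kernel version:

    # Sketch: detect hardware virtualization and nested paging support on Linux.
    # Assumes /proc/cpuinfo lists feature flags on a line starting with "flags".
    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    has_virt = bool({"vmx", "svm"} & flags)  # Intel VT-x or AMD-V
    has_np = bool({"ept", "npt"} & flags)    # Intel EPT or AMD NPT/RVI
    print("Hardware virtualization:", "yes" if has_virt else "no")
    print("Nested paging (EPT/NPT):", "yes" if has_np else "no")

Without the second set of flags, every guest page-table update has to be mirrored in the hypervisor's shadow page tables, which is where memory-intensive workloads pay the overhead that nested paging removes.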
 


[Chart: enter and exit VMM transition times (in ns) for the different Intel CPU families.]

Look at how much work Intel has put into lowering the hypervisor-to-guest-OS transition time with each CPU generation, and you'll understand that low virtualization performance overhead is only possible with the very latest CPUs. The moment your application needs large memory buffers that shrink and grow over time (like many big OLTP databases), generates quite a bit of network traffic, or writes a lot to your block devices, you will probably need:
  • an Intel Nehalem (not available yet) or AMD quad-core Opteron CPU
  • a virtual-I/O-optimized NIC (such as Neterion's)
  • other examples of the very latest hardware
to get near-native performance. And it doesn't end with having the latest hardware available. Is your database application 64-bit? Do your application and OS support async I/O? Is your database satisfied with four CPUs? Did you enable large pages? And so on. It is possible to get very good virtualization performance, but you will probably sweat a bit, and sometimes bleed, for it. That is the reason why many organisations still keep their OLTP databases on a native machine. It takes the right combination of hardware, software and configuration to get the published VMware numbers. And while in some cases very skilled people can make this happen, in many more cases you cannot attain these ideal conditions.
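To make the software side of that checklist concrete, here is a small Python sketch, assuming a Linux guest; it answers two of the questions above (is the OS 64-bit, and are large pages configured at all) using the standard /proc/meminfo hugepage counters:

    # Sketch: check two items from the tuning checklist on a Linux guest.
    import platform

    # 1. Is the OS 64-bit?
    print("Architecture:", platform.machine())  # e.g. "x86_64"

    # 2. Are large pages (hugepages) configured?
    total = free = 0
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("HugePages_Total:"):
                total = int(line.split()[1])
            elif line.startswith("HugePages_Free:"):
                free = int(line.split()[1])
    if total == 0:
        print("No hugepages configured; large-page support is effectively off.")
    else:
        print("Hugepages free/total:", free, "/", total)

Async I/O support and the CPU count limits depend on the specific database and hypervisor, so those items still require reading the fine print of both vendors.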
 
The people at VMware have the right to claim that I/O-intensive applications can be virtualized at a relatively low performance cost (it is possible), but it is the task of analysts and the press to find the snakes in the grass and to talk about the "ifs" and "buts". In my next post, tomorrow, I will show how some of the "near native performance" claims are very clever but completely useless and wrong in the real world.
 
We love virtualization, as it is lowering the cost of IT. But despite the fact that we are "not opposed to it", we'll do some independent testing before we believe VMware, Xen and Microsoft. Part of the folklore here at AnandTech :-).
 
Comments

  • gtrider - Tuesday, August 26, 2008

    Virtualization of I/O-intensive servers is *still* not a very good idea, but the technology is almost there...
    I'd say the caveat is that it's not a good idea on toy servers;
    System z is quite virtualized and, by the nature of the system, tends to be I/O-intensive. It's a shame techies overlook it.
  • zdzichu - Monday, August 18, 2008

    Xen, ESX, Hyper-V... don't forget about Linux's native KVM! With the virtio host drivers from Qumranet it should show its potential. It is also available as a ready-to-use appliance called oVirt.
  • yuhong - Sunday, August 17, 2008

    The point that low virtualization performance overhead is only possible on the newest hardware can be turned around and used to help sell that hardware.
  • Darkk - Saturday, August 16, 2008

    I built a couple of Linux host-based machines running VMware, and they have been running solid for months. It's cool being able to run the VMware console in WinXP to manage and build guest OSes. I used to reformat the test PCs all the time, and it got old. Now, with the Linux VMware servers running, I can just create a virtual guest OS and go from there. Pretty slick. My Linux boxes are pure text mode, no desktop or GUI.

    For testing purposes I have an Exchange 2007 server running in VMware, and it's running just fine. Probably due to the fact that the box has 8 gigs of RAM, although currently GSX 1.5 will only let me use almost 4 gigs for each guest OS. The 2.0 beta lets me allocate the full 8 gigs to a guest OS, but it's a little buggy at the moment. Still promising, though.


  • phinsn98 - Saturday, August 16, 2008

    Physical vs. virtual performance is a lost argument. When I can run 70 servers (SQL, EXC, CTX) and 25 VDIs on four hosts with HA and DRS in a single rack, with no major failures in three years and no performance issues that the end users can speak of, then the physical-box performance argument doesn't stand a chance in my opinion.

    I BELIEVE

    I don't argue that a physical box running an I/O-intensive app might perform better, but at what expense (server, licensing, power consumption and maintenance)? In the virtual world, build the server from a template (20 minutes), give it a name and an IP, and load your app. Have a nice day. No more maintenance, no more power consumption. Try to run a physical box at 80 to 90% of its physical resources for one year and watch it fall on its face. BAD DAY

    I was not the biggest proponent of virtualization, but I've seen it, and when you work in an environment where all of your servers and VDIs are manageable from one interface and don't fail, it's a beautiful thing.

    If you want to compare something, then do a comparison of the competitors of VMware.

    Windows IT Pro is currently doing a comparison, and I have to say they are pretty biased (SAD): they do a good job of overlooking MS shortfalls in their latest virtualization product, like how difficult it is to get up and running compared to ESX, and the lack of any real VMotion capability.

    Just my 2 cents

    For what it's worth, I've been reading and promoting this site since it came on board. Thanks for all the hard work and for helping me grow and learn.

    DB
  • InternetGeek - Thursday, August 14, 2008

    I'm not an expert in virtualization, so it's possible my question is completely off, but do you guys have time to add VirtualBox to your tests? http://www.virtualbox.org
  • crewslee - Thursday, August 14, 2008

    I have not had much experience with virtualization and personally only use it for running ancient applications on top of XP; VirtualBox is excellent for the casual user.

    Recently, an IT server hosting company for one of our slightly more important customers moved all their servers to virtual machines with no consultation, and we have had nothing but problems with it ever since. The worst issues are severe performance losses with OCR and indexing of documents with ZyLAB ( www.zylab.com ).

    So my question is: what is the easiest way to detect that an OS is running on virtual hardware when you only have access to a remote desktop? At the moment I am looking at the hardware reported and using the lists provided by the VM companies to match them up... boring :(
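    One way to automate that matching is sketched below in Python, assuming a Windows guest where the wmic tool is available in the remote desktop session; the vendor strings are illustrative, not a complete list:

        # Sketch: guess whether a Windows box is a VM from its reported hardware.
        import subprocess

        VM_STRINGS = ("vmware", "virtualbox", "innotek", "xen",
                      "qemu", "virtual machine")

        out = subprocess.check_output(
            ["wmic", "computersystem", "get", "manufacturer,model"],
            text=True).lower()
        print("Probably virtual" if any(s in out for s in VM_STRINGS)
              else "No known VM vendor string found")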
  • yyrkoon - Thursday, August 14, 2008

    Heh, sorry, had to get that out of my system. Although, as far back as I can remember, it has been a dream of mine to be able to run Windows on top of Linux and still get good gaming performance (assuming it were possible with current titles, because of how video hardware is or is not accessed).

    I have a friend who used to use a custom Xen kernel with Dell hardware for large deployments of Windows (think thousands of systems). I remember him telling me: "there is nothing cooler than watching 2+ VMs working harder (as in getting things done faster) than just using the bare hardware with a single OS." From what he told me, his team modified the kernel for their specific needs and used GbE equipment for deployment of net installs. Performance for them was as good as or greater than bare metal in most cases.

    Another person I talk to on IRC once in a while has mentioned that he maintains MANY machines (as in a farm) that all use Xen + Linux software RAID 10 for mission-critical data, and that the performance is very good.

    My experiences with many paravirtualization implementations have not been as good, however. Then again, I do not have a team to customize a Xen kernel for me, or countless hours of experience with and knowledge of how the Xen kernel works. I also have not had the hardware, or the need, to really use virtualization other than for my above-mentioned pipe dream (first paragraph). Although lately, I think I would have to say that Parallels, at least on a Windows host, would be one of the best performers. VMware is nice, but in the last couple of years it seems to be getting bloated and slow. I do not have any hands-on experience with ESX, however; what I speak of is their Server/Workstation variants.

    With all the "virtual hosts" popping up over the course of the last several months, though, *something* has to be working for *someone*...

  • vvelichkov - Thursday, August 14, 2008

    Virtualization of I/O-intensive servers is *still* not a very good idea, but the technology is almost there. It is not ESX 3.5 that is not ready; the hardware is still not quite available.

    Currently, there is only one hardware combination that is a good candidate for an ESX host supporting I/O-intensive servers:
    ESX 3.5 Update 2 + Intel 5400 chipset + the latest Intel PCIe 2.0 10G LAN adapters + the latest QLogic PCIe 2.0 8Gbps FC HBAs.

    What is missing:
    1. The Intel 5400 chipset is a mid-range server/workstation chipset not available in any serious server platforms from HP / IBM / Dell. Supermicro has good mobos with this chipset, and HP has some low-end models. Waiting for Nehalem-based Xeons and a proper mid-range/high-end server chipset supporting the latest generation of Intel VM-related features (VMDq2 / VT-x2 / VT-d2).
    2. Intel does not offer any 1000Base-T NICs supporting PCIe 2.0; there is only one integrated Ethernet controller used on some mobos for on-board Ethernet. Using 10G PCIe NICs is (still) quite expensive.
    3. ESX 3.5 Update 2 has support for all the above-mentioned second-generation VM-related features, but still in "experimental" mode.

    Let's see...
  • mlambert - Thursday, August 14, 2008

    If you guys are interested in some performance stats (NFS, FC, iSCSI) for ESX datastores & host utilization, NetApp did a really good white paper in July:

    http://media.netapp.com/documents/tr-3428.pdf


