Jul 26, 2011 (06:07 AM EDT)
VMware Pricing Outrage: A Closer Look
VMware has changed how it charges for its virtualization software, imposing memory limitations per server CPU based on the type of license you buy. Now that the outcry in the blogosphere has calmed down, it may be possible to take another look at what it means.
The protesters are correct: VMware has changed its pricing scheme to more closely reflect actual usage, and it will raise prices for many customers, as InformationWeek's summary reported. Most of VMware's gain, however, will come in the future, as both host servers and the virtual machines running on them grow more powerful.
My colleague Jonathan Feldman, an IT director at a city in North Carolina, cites from first-hand experience a practice of software vendors changing pricing schemes for their own benefit--but leaving 80% of customers unaffected. I think this is what VMware has done.
However, the approach VMware is taking--capping the amount of memory that can be allocated per CPU based on the customer's license--contradicts advice it has given in the past, advice that was meant to ease fears about further server virtualization. With uncertainty over how well virtual machines would perform, VMware and its third-party installers and consultants urged customers to over-allocate memory as a way of ensuring they would always have enough. The new VMware pricing isn't based on actual memory use but on "allocated" memory, an area of hidden inflation in the typical virtualized data center.
Until now, over-allocating virtual memory was harmless over-provisioning. A virtual machine had no need to draw down an allocation that might be twice the RAM it actually used. If usage occasionally spiked above the normal peak, the virtual machine could claim the extra memory and seldom contended with other VMs, which were also over-allocated. The various applications on a host server tended to hit peak usage at different times, and the mix evened out rare spikes. The practice worked well--perhaps a little too well. No one was sure how much memory their VMs actually needed.
Some system administrators want to continue over-provisioning. At some point, though, that runs contrary to the new way of managing the data center. Virtualization lets system administrators use physical resources more flexibly and efficiently, and the ultimate goal is to have every resource running near capacity without endangering operations. Admittedly, that's not a layup. But as long as system administrators feel entitled to over-provision virtual machine memory, it won't happen. On the other hand, who wants to be the one to step forward and say they know exactly how much of a fluctuating resource needs to be doled out on a long-term basis?
Servers are getting much more powerful with multi-core designs. And they're shipping with amounts of memory that dwarf the not-so-distant past, when a standard x86 server came with 16 or 32 GB of RAM. Cisco ships Unified Computing System servers with 384 GB of memory. A more typical amount might be 192 GB, but even so, both are a far cry from 32 or 64 GB.
VMware's recent announcement focused on the upgrade to Version 5 of its core product, Infrastructure, which generates, configures, and deploys ESX Server virtual machines. Two years ago, it moved from Infrastructure 3 to 4, and the software's value increased significantly as customers swapped out a standard two-way, dual-core server (four cores) for a two-way, quad-core server with eight cores. Think of each core as having sufficient CPU cycles for a single VM--not a requirement, just an approximate measure. That upgrade increased the value of Infrastructure substantially, with no change in price, which is another contributor to the current upset. Why can't the value of virtualization software simply increase at the pace of Moore's law, with no price changes?
All I can say is that other major software vendors--including IBM, Microsoft, and Oracle--have tried to keep their pricing in step with value and have implemented pricing changes to ensure it. At one time, Oracle announced it would count each core as a separate CPU license (which would have caused Weimar Republic-style price inflation), then backed off that stance. Oracle and Microsoft are also unapologetic about surprise customer audits that result in a large additional bill for allegedly unlicensed use.
So far, VMware's approach is comparatively conservative. Infrastructure 5 pricing charges $995 per CPU for Standard edition, with a limit of 24 GB of allocated virtual memory; $2,875 for Enterprise edition, with a limit of 32 GB per CPU; and $3,495 for Enterprise Plus, with a 48-GB limit. If the memory you've allocated to VMs is much greater than what they actually use, it may be possible to shave the allocations and continue paying roughly what you did before.
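The arithmetic behind those tiers is worth sketching out. A rough illustration, assuming--as VMware described at the announcement--that vRAM entitlements pool across all the licenses a customer owns, so a host needs at least one license per CPU plus enough licenses to cover the vRAM allocated to its powered-on VMs (the function names and the example host are hypothetical, for illustration only):

```python
import math

# Per-CPU license price and vRAM entitlement (GB) by edition,
# from the Infrastructure 5 price list cited above.
EDITIONS = {
    "Standard":        {"price": 995,  "vram_gb": 24},
    "Enterprise":      {"price": 2875, "vram_gb": 32},
    "Enterprise Plus": {"price": 3495, "vram_gb": 48},
}

def licenses_needed(cpus, allocated_vram_gb, edition):
    """One license per CPU at minimum, plus enough licenses to
    cover the total vRAM allocated to powered-on VMs."""
    per_license = EDITIONS[edition]["vram_gb"]
    return max(cpus, math.ceil(allocated_vram_gb / per_license))

def license_cost(cpus, allocated_vram_gb, edition):
    return licenses_needed(cpus, allocated_vram_gb, edition) * EDITIONS[edition]["price"]

# A two-socket host with 192 GB of RAM, all of it allocated to VMs:
# two Enterprise Plus licenses cover only 2 x 48 = 96 GB, so four
# licenses are needed to cover the full 192 GB of allocated vRAM.
print(licenses_needed(2, 192, "Enterprise Plus"))  # 4
print(license_cost(2, 192, "Enterprise Plus"))     # 13980
```

The example makes the article's point concrete: on a densely packed, memory-rich host, the vRAM allocated can force more licenses than the CPU count alone would--which is exactly why trimming padded VM allocations can keep the bill where it was.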
This will be difficult for some of VMware's largest customers, say those who have gone the extra step and virtualized their database systems. I asked Sue Workman, associate VP in the office of the CIO at Indiana University, if the pricing changes would affect the school's mix of VMware and Citrix XenServer. "IU will continue to use VMware as our primary virtualization platform for enterprise servers and the Intelligent Infrastructure," she said via email. "Over time, we will use the experience we gain with Citrix/XenServer in supporting client virtualization to evaluate the mix of virtualization."
That strategy shows a long-term possibility of some server migration away from VMware, but not very much outrage over the immediate pricing. And Indiana U is one of the places where the change will have an impact: it has virtualized its memory-hungry Oracle databases.
Customers now have the chance to get a handle on per-CPU memory allocation and stop treating memory as a freely available resource. In some ways, that's a drawback, imposing extra work and forcing decisions that might be proven wrong later. But it was probably inevitable this day would come in one form or another.
Efficient data center operation will one day depend on knowing the right memory allocation for each VM on a given CPU. Eventually, shops engaged in deep virtualization will gain the means to change that allocation on the fly, with automated systems handling the task and sending the system admin a notice.
Customers will increasingly confront a choice: keep paying a premium for advanced virtualization, or settle for the gains they have and resist further advances that come with more charges. There are cheaper alternatives in Citrix XenServer (about two-thirds the price of VMware) and Microsoft's Hyper-V (free as part of Windows Server 2008 R2), and VMware knows it. Customers hope VMware is not bent on milking its Infrastructure customers. VMware hopes it hasn't guessed wrong on the true value of its virtualization software.