Archive for category Hyper-V
Today I was working with a client on their next-generation data center architecture. They are building a highly virtualized data center with the goal of offering cloud IaaS to other departments within the organization. While talking about VM templates, we discussed a favorite topic of mine – virtual hard disk structure.
For several years, I have recommended to clients that they use at least two virtual hard disk files per VM. One virtual disk file is used for the OS and application files, and a second virtual hard disk is used for paging, swap, and temp files. Alternatively, a (Read more...)
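For readers who want to try the two-disk layout described above, here's a minimal sketch using the Hyper-V PowerShell module (available in Windows Server 2012 and later); the VM name, paths, and sizes are hypothetical:

```powershell
# Create two dynamic disks: one for the OS and application files,
# and one dedicated to paging, swap, and temp files
New-VHD -Path 'D:\VMs\app01\app01-os.vhdx'   -SizeBytes 60GB -Dynamic
New-VHD -Path 'D:\VMs\app01\app01-temp.vhdx' -SizeBytes 20GB -Dynamic

# Build the VM on the OS disk, then attach the paging/temp disk
New-VM -Name 'app01' -MemoryStartupBytes 2GB -VHDPath 'D:\VMs\app01\app01-os.vhdx'
Add-VMHardDiskDrive -VMName 'app01' -Path 'D:\VMs\app01\app01-temp.vhdx'
```

Inside the guest, you would then point the page file and temp directories at the second disk; because that disk holds only transient data, it can be excluded from backup jobs and template images.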
With the postponement of Catalyst Europe, I had the opportunity to virtually attend the Microsoft MMS conference keynotes on Tuesday and Wednesday of this week. MMS has long been one of Microsoft’s best conferences, and this year didn’t disappoint. I’m not going to rehash the major announcements, but you can read the full details in the following Microsoft System Center team blog posts:
- System Center Service Manager 2010: An Integrated Platform for IT Service Management
- Configuration Manager vNext: User Centric Client Management
- Mobile Device Management with Configuration Manager vNext
- Configuration Manager 2007 R3 Beta released
- What’s coming up with the next versions of SCOM and VMM!
- User Centric, and System Center Configuration Manager vNext
- MMS 2010 Kicks Off in Las Vegas
Tuesday Keynote – Bob Muglia
- Bob opened by stating that Microsoft has been building dynamic IT management for the last seven years as part of its Dynamic Systems Initiative. Microsoft is essentially underlining that it is not a newcomer to dynamic IT and cloud, and is playing to its strength in systems management.
- Bob highlighted the need for standard service models, and I agree. I started discussing this topic with vendors in 2008 and blogged about standard models in the security context early last year. I recently discussed this issue in my posts on metadata standards and the infrastructure authority. Still, vendors need to move beyond talk about standards for service delivery, metadata, and application interfaces, and deliver them. Mobility to and among cloud infrastructure-as-a-service providers requires these standard models. It’s time for vendors to show their hands, even if they’re holding proprietary service delivery models, metadata sets, and interfaces today. There are far too many competing interests to expect vendors to agree on an industry standard any time soon. That said, progress is being made on the standards front. SNIA’s Cloud Data Management Interface (CDMI), the DMTF’s Open Cloud Standards Incubator, and the Cloud Security Alliance’s work on standard cloud security models are three good examples.
- It would be nice if Microsoft offered complete documentation on how its on-stage demos were built. The orchestration practices demonstrated are of high value, and Microsoft should share the configuration details with its clients.
- Microsoft’s demos were Microsoft-centric, as expected. I would like to see Microsoft demonstrate integration with third party management products, which would strengthen their position on interoperability. Most Gartner clients are not homogeneous Microsoft shops; demonstrating orchestration capabilities across multi-vendor management stacks would speak to the needs of the typical enterprise organization. If Microsoft doesn’t want to do this at a conference, then why not offer this information online?
- I thought Microsoft made a great move in acquiring Opalis, and liked seeing the Opalis integration and System Center Service Manager 2010 shown on-stage.
- Microsoft demonstrated long distance VM live migration, and in the process Muglia took a swipe at VMware, noting that moving VMs to new sites requires deep integration and validation across all management services. In the demo, Microsoft was able to show processes such as validating that a recent backup was completed before allowing the inter-site live migration to continue. While the demo was impressive, I would have been even more impressed if Microsoft validated the recent backup by integrating with a third party tool such as NetBackup.
- Microsoft is talking cloud using the terms “shared cloud” and “dedicated cloud.” There are so many disparate terms out there for cloud that pretty soon Rosetta Stone will release a CD on speaking cloud. The Gartner/Burton teams have been working closely on defining a core set of cloud terminology, and it’s important for vendors in the space to adopt common definitions.
- Edwin Yuen demonstrated System Center Virtual Machine Manager (SCVMM) vNext, which will include drag-and-drop capabilities for deploying multi-tier applications. The demo was powerful, but my existing concerns about SCVMM went unanswered. Today the product is not extensible, and it does not support the Open Virtualization Format (OVF) industry standard; I’m hoping those two features make it into SCVMM vNext.
- Microsoft’s demo of cloud service management looked solid from the administrator’s point of view, but nothing was shown from the consumer’s point of view. IT service delivery requires the presentation of services to consumers using intuitive interfaces that the customer understands. Microsoft has yet to show a consumer-centric view of how customers will interact with its cloud service management.
Wednesday Keynote – Brad Anderson
- Brad opened by talking about how the Windows 7 release was the most significant event in the desktop space in a very long time. I would counter that equally significant was Microsoft’s announcement that Windows XP will reach end-of-life (EOL) in April 2014. The XP EOL announcement put IT organizations “on the clock” to replace their existing client endpoint OS, and in many cases re-architect all major aspects of user experience and application delivery.
- There was a good discussion about power management, but one interesting area of research that was not mentioned was Microsoft’s work on in-guest VM power management. Take a look at Joulemeter for more information.
- I liked hearing Brad talk about the future desktop representing a convergence of services. This is a concept I recently discussed in the post “The Next Gen Desktop’s Cloudy Future.”
- I saw a bit of irony in Microsoft discussing Hyper-V R2 SP1’s dynamic memory feature on stage. A year ago, Microsoft was solidly against VMware’s memory overcommit feature, which allows VMs to share a physical pool of memory on a server. Jeff Woolsey did a nice job describing Hyper-V’s dynamic memory capabilities in a series of blog posts.
- Microsoft demonstrated the RemoteFX technology that was acquired from Calista. It will be interesting to see how quickly Microsoft’s IHV partners offer a shipping solution. Several have stated their intent to support the technology.
- Microsoft demonstrated their new Windows Intune product – a cloud service for managing PCs. While I like where Microsoft is taking PC management, I’m still disappointed that they have yet to address desktop OS licensing for cloud-based desktop-as-a-service (DaaS) deployments. Device-based desktop OS licensing is incompatible with the on-demand and device-agnostic nature of cloud service delivery, and Microsoft needs to address this issue sooner rather than later.
- I was disappointed by the System Center Service Manager demonstration on compliance validation. The demo included no mention of virtualization or virtual infrastructure, which is the default x86 application platform of many of our clients. If the product is not providing controls and validation capabilities for multi-tenant VMware vSphere, Microsoft Hyper-V, and Citrix XenServer environments, then it is not ready for prime time.
Overall I was very impressed with the conference keynotes. System Center Service Manager and Microsoft’s increasing integration of the Opalis software are two areas to watch. Muglia’s talk about standard service delivery models also leads me to believe that Microsoft is poised to aggressively go after the cloud provider space. The release of Microsoft’s Dynamic Infrastructure Toolkit and the growing number of partners in Microsoft’s Dynamic Data Center Alliance (DDA) are proof of that. What did you think of MMS 2010? I’d love to hear your thoughts.
I recorded a webcast today on the subject of best practices for re-architecting backup and recovery for virtual environments. If you’re interested, you can view the webcast below, or click here to view the webcast in a separate window.
If you missed our latest presentation on hypervisor competitive differences with regard to our evaluation criteria, you can see it for free next week at IT Virtualization Live. The webcast will run Tuesday, September 15th, at 12:30 ET. To see it, you can register here. The webcast will show our complete evaluation criteria list, and detail how vSphere 4.0, XenServer 5.5, and Hyper-V R2 stack up. The webcast also includes a series of tables that outline side-by-side comparisons of each hypervisor. If you’re interested, here is the webcast abstract.
Hypervisor Competitive Differences: What the Vendors Aren’t Telling You
You mean there are differences between the hypervisors from Microsoft, Citrix, VMware, and others? Of course, and making the right decision about which to implement is critical to your virtualization success! In this session, analyst Chris Wolf dissects the competitive differences that exist with today’s leading hypervisors, with a special focus on the under-the-hood features that don’t make it onto vendor data sheets. Attendees of this session will see firsthand the differences that exist with all major virtualization hypervisor vendors (e.g., VMware, Microsoft, Citrix, and Virtual Iron) and will leave with a list of pointed questions to ask prospective hypervisor vendors regarding their current solutions and future plans.
I get quoted quite a bit in my role as an analyst and on most occasions I agree with the context in which my quotes are used. Sometimes I see my quotes appear in articles like this one and feel the need to fully articulate my position. First let me say that I like and respect Alex Barrett. I’ve worked with her for a number of years and she is one of the best in the business. However, sometimes we’re not on the same page. Heck, I’ve been married for over 12 years and I’m not always on the same page with my wife either. It happens.
That all being said, I want to clarify some of my points in the article. The first one is with vendor incentives for reference accounts. It’s no secret in IT circles that vendors will make deals with reference accounts to compensate for their time. Incentives could come in the form of discounted software, professional services, etc. A few folks took my statements to imply that I thought Microsoft was bribing Nissan. That’s not the case at all. I was talking about vendor incentives in a general context and not specific to Microsoft. I encourage clients to pilot multiple hypervisors even if they are set on a particular vendor solution. Why? It gives them leverage when negotiating price. If an IT organization isn’t aggressively negotiating with vendors to secure discounts in the sales cycle, it’s not trying hard enough. VMware and Citrix offer aggressive discounts to win deals too. In the specific case of Microsoft and Nissan, I can’t comment because I haven’t spoken with the Nissan folks. Again, my quote was aimed at identifying a longstanding vendor practice. While vendors may offer incentives to reference accounts, they are typically forthright in encouraging them to be completely honest about their experiences, and that’s always been the case with the Microsoft reference customers I’ve worked with.
So why would Nissan implement Hyper-V even though they were initially hurt by its lack of live migration? The answer is pretty common. Again, I haven’t spoken with Nissan directly, but I’ll speculate. Server virtualization is an infrastructure technology. It’s sticky. When you lay down a virtual infrastructure, there can be a significant cost to replace it. For starters, you may have to convert VMs from one format to another. That could involve converting virtual disks and replacing paravirtualized device drivers, as well as downtime. On top of that, you may be replacing management tools and retraining users on self-service portals. That brings me back to Microsoft. Their message all along has been “Trust us. We’re not there yet, but we will be.” Many longstanding Microsoft shops trust Microsoft. They’d rather give up a feature or two than rip and replace their virtual infrastructure in a few years. This strategy allows them to grow with Microsoft without having to be concerned about the costs of switching hypervisors. Many of these organizations are already using many of the Microsoft System Center tools to manage their infrastructure, and they know they have a familiar interface in System Center Virtual Machine Manager.
At Burton Group, we have clients that have been using Microsoft virtualization since Virtual Server 2005. I’m not talking about mom-and-pop shops either. I’ve worked with very large US federal agencies that have sizable Virtual Server (and now Hyper-V) deployments. Again, numerous business drivers complemented the technical drivers in product selection. Did I work with one organization that thought the technology was forced on them? You bet. That happens across the whole product spectrum. In what now seems like eons ago when I worked for CommVault Systems, I would go into some pretty hostile environments that wanted nothing to do with the CommVault software. This also happens with virtualization deployments. When certain members of an IT organization feel they aren’t involved enough in the process, they aren’t as enthusiastic about the implementation, to say the least. Still, as Alex also noted, I told her about a half dozen or so clients who were completely satisfied with Hyper-V from the get-go.
There is never a one-size-fits-all solution when it comes to technology, and virtualization is no different. Sure, I talk about VMware products with a lot of clients, but I also talk about Microsoft and Citrix virtualization quite a bit too. Technology influences product decisions, but company culture, product familiarity, cost, and established vendor relationships do as well.
If you want to hear more about hypervisor differences, I encourage you to stop by session TA2400 “Hypervisor Competitive Differences: What the Vendors aren’t Telling You” next week at VMworld North America. We’ll be doing a side-by-side comparison of vSphere 4.0, XenServer 5.5, and Hyper-V R2. You’ll see dozens of low level technical differentiators that may appear similar on a data sheet. In the session, you’ll see a lot of check boxes. As you know, product selection goes beyond counting check boxes. Look at the features that are most important to you and the platforms that are strategically aligned with your long term IT direction. As I said before, the virtualization layer is an infrastructure technology and product selection isn’t always about “right now” for all organizations. In my opinion, that’s a big reason why Nissan went with Hyper-V.
After taking the weekend to catch my breath and get some much needed rest, I thought it would be a good time to reflect on the highlights from last week’s Catalyst North America conference. I’d like to start with recaps of last week’s opening day cloud and virtualization sessions.
Wednesday AM – Defining the Cloud
Drue Reeves chaired the Wednesday morning cloud track and offered his thoughts on the morning here. The morning offered some very interesting perspectives from early cloud adopters InterContinental Hotels Group and Eli Lilly. Later in the morning, Drue challenged Peter Coffee, Director of Platform Research at Salesforce.com, on the fact that Salesforce.com does not offer customers an SLA. I received feedback from many clients who were glad that Drue asked the question. Peter indicated that Salesforce.com has thrived in spite of not offering an SLA, and it’s hard to argue with their success. However, I don’t know of many Burton Group clients who will consider putting production workloads on cloud-based infrastructure-as-a-service solutions without an SLA. Amazon – I hope you’re listening. Speaking of service providers, I found the announcement from VMware’s Raghu Raghuram that VMware now has over 700 service providers on board to be impressive. None of VMware’s competitors are anywhere near that number. Sure – it’s very early in the public cloud era (too early for most of our clients to put production resources on it due to security/compliance concerns), but it’s clear that VMware is being very aggressive. I’m curious to see how many VMware service provider partners are locked into exclusive agreements with VMware, as that may impact Microsoft’s and Citrix’s ability to rival VMware when it comes to provider choice.
Wednesday PM – Server Virtualization: The Foundation for Cloud Infrastructure
The afternoon track focused on server virtualization topics as they relate to public and private cloud infrastructures. During my opening keynote, I highlighted the following points:
- The time to rework business processes to support a service-oriented model for IT service delivery (i.e., internal cloud) is right now. IT must own the physical infrastructure assets and offer them as a service. In 5-10 years, public cloud providers will compete against internal IT for the rights to host business applications. IT organizations that do not build out a service delivery model may find themselves in the same position as many US factory workers, facing the reality of unemployment.
- Improvements in x86 hardware (e.g., hardware-assisted memory virtualization) are allowing most top tier x86 applications to successfully run in VMs. Application owners should now have to justify why an app cannot run in a VM, instead of the other way around.
- Tools that can fully visualize the data path are important for application troubleshooting as well as security/compliance auditing and enforcement. End-to-end (i.e., network and storage) views remain a challenge today, and additive virtualization layers (e.g., storage, network, and I/O) can further complicate visualization efforts.
- Development practices need to evolve so that applications can leverage dynamically allocated resources within the virtual infrastructure. Sure, unaltered apps run in the virtual infrastructure, but they could run more efficiently (more on that shortly).
- Burton Group is working with several vendors and end user organizations to build reference architecture models for internal cloud. You can see a preview of the initial work here.
Mark Templeton (CEO, Citrix)
Mark Templeton (Citrix CEO) followed my presentation by sharing his vision of cloud-based infrastructure. Templeton spent considerable time outlining how IT organizations must move to a service-oriented model. IT creates and advertises services. Users and business units purchase them (like ordering a channel package or pay-per-view movie). Templeton made a great point when he stated that to improve IT efficiency and reliability, you need to eliminate parts. According to Templeton, IT service delivery should have the following characteristics:
- Any-to-any, secure when needed
- Device, network & content independence
- Self-service desktop & app provisioning
- Elastic service capacity & infrastructure
- Consumption-based, variable costs
During the Q&A, an attendee asked Templeton about XenClient (Citrix’s bare metal client hypervisor) on a Mac. Templeton’s response – “Ask me again by the end of the year.” Templeton hinting that Apple may be ready to support a Mac client hypervisor capable of running Mac and Windows OSs side-by-side is very big news. If Citrix gets the client hypervisor on the Mac (and VMware doesn’t), it could tip the scales for desktop virtualization dominance in Citrix’s direction. Citrix hinted at much of its Mac strategy at their Synergy conference, and I’m looking forward to hearing what VMware has to say at VMworld.
Matt Lavallee (Director of Technology, MLS Property Information Network)
Matt Lavallee followed Templeton and provided a great amount of detail on his multi-site Hyper-V implementation. Lavallee is running iSCSI over 10GbE with software initiators and 802.3ad NIC teaming, and is orchestrating management activities with System Center Operations Manager.
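Lavallee didn’t walk through his exact configuration, but here is a rough sketch of a comparable setup using the teaming and iSCSI cmdlets from later Windows Server releases (the adapter names and portal address are hypothetical):

```powershell
# Team two 10GbE adapters using 802.3ad/LACP
New-NetLbfoTeam -Name 'iSCSI-Team' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts

# Register the array's portal with the Microsoft software initiator,
# then connect persistently so the session survives reboots
New-IscsiTargetPortal -TargetPortalAddress '192.168.50.10'
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```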
That’s it for Part I. In the second part of my server virtualization recap, I discuss “The Thrilla in California,” Mark Russinovich’s thoughts on new development trends, and hypervisor competitive differences.
Yesterday Microsoft announced what I believe was a brilliant move – Hyper-V paravirtualized drivers (Microsoft calls them Integration Components) released under GPL 2.0. This announcement reflects Microsoft’s long term Linux strategy, in my opinion, and is the first step toward positioning Hyper-V as a platform for hosting Linux workloads. Getting Hyper-V paravirtualized device drivers into the mainline Linux kernel will simplify deployment of Linux-based VMs on the Hyper-V platform in coming years. In the short term, it’s important for Microsoft’s key partners (Novell and Red Hat) to backport the paravirtualized drivers so that their currently shipping Linux distros will run more efficiently on Hyper-V. Given Microsoft’s close partnerships with Novell and Red Hat, I see SUSE Linux Enterprise Server 10/11 and Red Hat Enterprise Linux 5 support as foregone conclusions.
With this move, I think that Microsoft has acknowledged that Linux is here to stay, and has provided additional momentum to the growing number of Linux-based VM appliances. A growth in Linux-based VM appliances benefits everyone, including the Linux community and VMware (which hosts the Virtual Appliance Marketplace). Microsoft still has more work to do on the Linux front. Open sourcing Hyper-V paravirtualized drivers was a great first step. Next up, I would like to see Microsoft support Linux VMs with multiple virtual CPUs on Hyper-V. That would open the door for Microsoft to tout Hyper-V as a platform for production-class Linux workloads.
To take this a step further, let’s turn back the clock to October 2008. At Catalyst Europe, I asked Steve Herrod and Ian Pratt about VMware and Citrix collaborating on an exchange of device driver libraries to further reduce VM compatibility issues between hypervisors, and both agreed to continue the conversation (more details here). Citrix XenServer and Microsoft Hyper-V share device driver libraries and include the driver libraries for each platform as part of their paravirtualized driver installation (i.e., XenServer Tools and Hyper-V Integration Components). With Microsoft releasing Hyper-V paravirtualized Linux drivers, I think it’s a good time to revisit the idea of an open source driver framework that supports the core paravirtualized driver libraries of each major hypervisor platform (ESX, Xen, Hyper-V, and KVM). Sure, we’ll need a community/standards body (or whatever you want to call it) to manage driver library updates, but I can’t see why it isn’t possible. Such collaboration would make life easier for everyone. Imagine being able to run a few tests on one of your VMs “in the cloud,” and not having to care what the hypervisor is. Isn’t that what cloud’s supposed to be about anyway? Here are my service-level and security requirements. Can you give me the service I need?
Yes, I understand that my cloud analogy is overly simplistic and there will always be some benefits to having a consistent virtual infrastructure both internally and with external providers. Still, there are times when such consistency isn’t needed, and that’s why shared driver libraries make a lot of sense (besides eliminating another source of vendor lock-in). VM configuration metadata is already addressed by the Open Virtualization Format (OVF). From a technology perspective, nothing is preventing collaboration on a common VM device driver framework and shared driver libraries (that is something I’d love to see in the mainline Linux kernel). And finally, hypervisor vendor support for both .vmdk and .vhd virtual hard disk formats is the last major hurdle blocking VM compatibility (without the necessary conversions) between hypervisors. Vendors – let’s not talk about cloud openness, open architectures, etc. Let’s do something about it. Microsoft made a great move yesterday. Next I’d like to see collaboration between all virtualization vendors that further promotes choice among users, IT departments, and service providers. VMware, Microsoft, Citrix, Novell, Red Hat, Oracle – what do you say?
I’ll have four sessions at the upcoming VMworld Conference. If you’re interested, here are the details…
BC2541 – Re-architecting Backup and Recovery for Virtual Environments: Best Practices
Server virtualization is one of the fundamental building blocks of the dynamic data center, and it brings with it new management challenges, especially in the area of data protection and recovery. Existing data protection architectures may provide a serviceable short term solution, but lack the scalability to be a mainstay in tomorrow’s data center. Continued data growth is also compounding data protection complexity, as enterprises must accommodate data growth by increasing backup system performance in order to stay within backup windows. We are at a time when organizations should reevaluate existing data protection practices and leverage new technologies to improve data recovery and lessen or eliminate the performance tax imposed by many existing data protection architectures. This session breaks down modern VM data protection solutions, including VMware Consolidated Backup (VCB), array-level snapshots, and third party enterprise backup software solutions. Attendees will be exposed to common data protection pitfalls as well as successful blueprints for modern VMware data protection architectures. Chris Wolf has been architecting data protection solutions for enterprise virtualization environments since 2002, and this session includes an abundance of lessons learned and best practices drawn from real world implementations.
DV2439 – Breaking Down Desktop Virtualization Alternatives
Numerous methods exist for delivering applications to endpoint devices today: virtual desktop infrastructure (VDI), application streaming, presentation virtualization, and hybrid approaches. The session breaks down the use cases that drive client virtualization choices and highlights future developments such as desktop hypervisors that will likely impact long term client virtualization architectures.
EA2442 – Software Licensing in the Virtual Enterprise: Current Problems and Future Trends
Virtual environments present new challenges for software license management across an enterprise. In this session, Burton Group senior analyst Chris Wolf breaks down the current state of software licensing and support for both server and desktop virtualization environments, while highlighting the technical elements of the virtual infrastructure that impact product licensing. He will also describe the licensing and support model best suited for modern virtualization platforms, with examples of vendors that provide best-in-class virtualization licensing policies today. All major enterprise application and OS vendors will be covered, including Microsoft, Sun, Red Hat, Novell, Oracle, HP, IBM, CA, SAP, Symantec, and Citrix. The session concludes with guidance on how to leverage RFPs to obtain licensing and software support clarity.
TA2400 – Hypervisor Competitive Differences: What the Vendors Aren’t Telling You
In this session, Chris Wolf and Richard Jones dissect the competitive differences that exist with today’s leading hypervisors, with a special focus on the under-the-hood features that don’t make it onto vendor data sheets. Attendees of this session will see firsthand the differences that exist with all major virtualization hypervisor vendors (e.g., VMware, Microsoft, Citrix, and Oracle) and will leave with a list of pointed questions to ask prospective hypervisor vendors regarding their current solutions and future plans. Vendor scorecards will also be presented, allowing attendees to see how each major hypervisor ranks against Burton Group’s enterprise production-class hypervisor evaluation criteria. Areas where hypervisors fall short of production readiness will be clearly highlighted as well.
In case you haven’t seen, Oracle issued a major product support update last month – Platform Vendor Virtualization Technologies and Oracle E-Business Suite – Metalink Note 794016.1 (note that an Oracle support account is needed to view the update). The bottom line – Oracle now offers best effort support for all of its E-Business Suite applications on any x86 hypervisor. Shocked? Here’s a snippet of the support statement:
The use of platform vendors’ virtualization technologies (both software and hardware based) to host Oracle E-Business Suite 11i and R12 is covered by Oracle’s policy with regards to 3rd-party products – that is, they are ‘not explicitly certified, but supported’.
What this means is that while these technologies are not certified, Oracle will not turn away a customer reporting an issue solely due to the use of these technologies. When possible Oracle will triage and attempt to diagnose the issue reported – Oracle support may attempt to replicate the issue in a non-virtualized environment and work with the customer to verify if the problem exhibits in such an environment.
Any specific problem isolated to the virtualization software (i.e. a problem that cannot be reproduced in a standard, non-virtualized environment) will need to be referred to the specific vendor for resolution.
Customers should review all relevant Oracle documentation on the use of such virtualization technologies for known issues and limitations with respect to EBS technology components such as the database, RAC, etc.
Customers intending to use 3rd-party products covered under this policy in production environments should conduct appropriate levels of testing and also have contingency plans to revert to a standard certified configuration (that is, non-virtualized environment)…
So there you have it. Back in December I suggested that Oracle make two New Year’s resolutions:
- Offer best effort support for all major x86 server virtualization hypervisors
- Offer virtual CPU-based licensing for all of its server applications
The year isn’t even halfway over, and Oracle can cross the first resolution off its list. Next up has to be software licensing. Oracle considers its own x86 hypervisor, Oracle VM (OVM), a platform capable of supporting hard partitioning (see the Oracle “Partitioning” document for more information). By its definition of “hard partitioning,” Oracle allows virtual CPU-based licensing on OVM, but does not allow it on other popular x86 hypervisors such as VMware ESX, Microsoft Hyper-V, or Citrix XenServer. Oracle also allows virtual CPU-based licensing on Amazon EC2, which runs the open source Xen hypervisor (you can read more about that policy here). Updating the support policy was a great first step, and Oracle should be commended for responding to the needs of its customers.
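To put the licensing disparity in concrete terms, here’s a hypothetical example (the numbers are purely illustrative, and Oracle’s core factor table and partitioning rules govern actual license counts). Consider a 4-vCPU Oracle VM on a 16-core host:

```latex
\underbrace{16\ \text{cores}}_{\text{physical CPU licensing: entire host}}
\quad \text{vs.} \quad
\underbrace{4\ \text{vCPUs}}_{\text{virtual CPU licensing: VM only}}
\;\Rightarrow\; 4\times\ \text{more licensable units}
```

That gap is why virtual CPU-based licensing matters so much to customers running Oracle workloads on ESX, Hyper-V, or XenServer.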
Now how about knocking out New Year’s resolution #2 before the end of June? Oracle, I know you can do it. Your friends in the enterprise software space that offer CPU-based licensing, such as IBM and Microsoft, both allow licenses to be assigned to virtual CPUs on any major hypervisor. Binding a license to a physical CPU is “so 2007.” Oracle, no doubt you’re in the middle of a major makeover, and acquiring Sun was a good move. I must say, with the Sun portfolio, I love your wardrobe. However, your licensing policy doesn’t reflect your new look or attitude. To stay with the wardrobe analogy, you’re wearing some great clothes, but you need to lose the mullet.
Oracle, let’s complete the makeover. Modernize your licensing policies, and your actions will show that you are a company truly in step with the times.
Note: Within two days of this post’s publication, Oracle made a massive revision to the support document “Platform Vendor Virtualization Technologies and Oracle E-Business Suite – Metalink Note 794016.1.” Please see my latest post for the most up-to-date analysis of Oracle licensing and support.
Sure, we’ve heard it all before. At the end of the day when it comes to performance benchmarks, someone is always going to complain. Ask a vendor representative a performance question, and you may get the answer taught in IT Marketing 101 classes around the world: “It depends…”
The ol’ reliable “It depends…” always seems to work as a convenient escape tactic for performance questions. Yes, I know… it really does depend. Really! Now let me get to my point.
If you haven’t seen the latest drama around hypervisor benchmarks, Rick Vanover’s recent Virtualization Review magazine article “Lab Experiment: Hypervisors” is where you should start. You should then head over to the Windows Virtualization Team blog and read Patrick O’Rourke’s take, and follow it up with Eric Horschman’s take on the VMware Virtual Reality blog. For some more perspective, these blog posts provide additional commentary:
- Say it isn’t so: Hyper-V and XenServer outperform ESX (Jason Boche)
- Reaction to “Say it isn’t so: Hyper-V and XenServer outperform ESX” (Ken Cline)
Over the last couple of days, I’ve had the opportunity to speak with Scott Drummonds on VMware’s performance team, as well as with Keith Ward (Virtualization Review magazine’s editor) and Rick Vanover (the author of the article that sparked the latest performance debate). I spoke with Keith and Rick after reading Eric Horschman’s post, which passionately defended the need for VMware’s EULA restriction on public benchmarks. Keith Ward is one of the most meticulous editors that I know, and I’ve known Keith for about nine years. So I was surprised that Keith would publish any benchmark that violates the VMware EULA. Rick is also a stand-up guy, so I doubted that Rick would write something that violates a EULA restriction either. To make a long story short, it appears that there were some communication disconnects between VMware, Rick, and Keith. Keith and Rick thought they had approval from VMware on the test methodology. It’s also clear to me that VMware thought otherwise.
One of VMware’s issues with the Virtualization Review benchmark stems from the fact that in the third test, VMware ESX 3.5 took over 50 seconds longer to complete a SQL job than Hyper-V. In his post, Horschman states:
The fact that ESX is completing so many more CPU, memory, and disk operations than Hyper-V obviously means that cycles were being used on those components as opposed to SQL Server.
He’s right, and I agree that the added CPU (40% greater), memory, and disk operations noted in the ESX benchmark would degrade the SQL job response time.
Now let’s shift gears and talk about a benchmark that I believe VMware had no objections to – the recent Network World hypervisor bake-off. VMware wants industry standard benchmarks? Well, that’s what Network World used, and in some tests ESX lagged XenServer and Hyper-V. Take a look at the SPECjbb2005 results, which included a total of 12 tests using Windows Server 2008 and SLES 10 guests, vCPUs ranging from 1 to 4, and VMs ranging from 1 to 6. In the most extreme test, six 4-vCPU VMs were run on a 4-way quad core host (a total of 16 cores), resulting in CPU oversubscription of 1.5:1 (24 total vCPUs on 16 physical cores). Here’s the bottom line: of the 12 SPECjbb2005 tests, XenServer had the best results in nine, while ESX, Hyper-V, and Xen on SLES 10 each took top honors in one. ESX was consistently second in the tests it didn’t win. The Network World I/O tests revealed that Xen on SLES 10 performed best, primarily due to the fact that Novell enables write caching by default.
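For those keeping score at home, the oversubscription math in that extreme test works out as follows:

```latex
6\ \text{VMs} \times 4\ \tfrac{\text{vCPUs}}{\text{VM}} = 24\ \text{vCPUs},
\qquad
\frac{24\ \text{vCPUs}}{4\ \text{sockets} \times 4\ \text{cores}} = \frac{24}{16} = 1.5{:}1
```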
I’m mentioning the Network World benchmark because the Virtualization Review benchmark was not the first time a major publication offered hypervisor performance results favorable to VMware’s competitors.
Here’s the deal. What does it all get back to? It depends! Hypervisor performance is very workload-specific, so even industry standard benchmarks like SPECjbb2005 are not without fault. You need to test a series of workload patterns that mirror your environment in order to draw a full conclusion. We constantly advise our clients to P2V or V2V their existing systems for internal hypervisor performance testing. That provides the best idea of what’s important – how the hypervisor performs in your environment, with your workloads (that was Rick’s intent in his article). Outside of customized internal testing, SPECvirt is our best hope. For the SPEC Virtualization Committee, I offer this advice – take your time… but hurry up! Yes, please do your due diligence to get the benchmark right, but we really don’t want to wait another couple of years for it. The sooner you can provide a vendor-neutral virtualization benchmark, the better.
Until SPEC delivers SPECvirt, it’s going to be up to us, the virtualization community, to carry the torch. Vendor benchmark standards such as VMware’s VMmark are going to help you compare how a hypervisor performs on different server platforms (such as HP and IBM), but will not be trusted for comparing different hypervisors. Have you heard Simon Crosby or Mike Neil encourage anyone to use VMmark? Don’t get me wrong. I’m a fan of VMmark, and think it’s the most comprehensive virtualization benchmark we have. However, it’s owned and maintained by a vendor, so it’s not something that any magazine or independent analyst firm (such as Burton Group) can use for comparative purposes. SPECvirt is needed to diminish the “it depends” factor with virtualization performance evaluations, and give us something that all vendors can agree on.
At Burton Group, we’re doing quite a bit of research to help add clarity to hypervisor performance considerations and evaluations, and you’ll be hearing more from me on that later. Also, I’m waiting on one final commitment for a hypervisor performance debate to occur at our Catalyst Conference in July. More details on that once I have the final speaker confirmed. In the meantime, if something in a vendor benchmark doesn’t look right, tell the world. I blogged about what I thought was a suspect benchmark last summer, and will continue to do that when I see others that I feel are deceptive, whether intentional or not.