Archive for category Licensing and Support
If you’re attending Synergy this year and want to stop by one of my sessions, I thought I would post them here. I’m participating in four sessions at this year’s conference:
Geek Speak Tonight!
Date / Time: Tuesday May 11, 4:30 pm – 6:45 pm
Room: Moscone West Convention Center – Moscone 2000-2002
Speakers: Shawn Bass, Simon Crosby, Rick Dehlinger, Martin Duursma, Chris Fleck, Stephen Greenberg, Michael Harries, Alexander Ervik Johnsen, Harry Labana, Brian Madden, Brad Pedersen, Rene Vester, Chris Wolf
A series of open, lively discussions led by respected industry thought leaders will kick off Synergy, giving you a chance to start the conference with a breath of fresh, unfiltered air.
CTPs and CTOs on the future of desktops
4:30 p.m. – 5:15 p.m.
The desktop is primed for unprecedented change in the near future. Join a high-powered discussion and debate among CTPs and CTOs on exactly how that future will unfold.
Cloud computing: what makes it successful and what’s on the roadmap?
5:15 p.m. – 6:00 p.m.
Cloud computing is the term du jour in our industry. Where is the real activity happening? How will the cloud unfold over the next few years? How will the cloud affect you? Where are the opportunities for Citrix customers and partners? Join us for a discussion on cloud computing’s hot areas and risks, and what’s coming next.
XenDesktop 4: a new look at the VDI landscape
6:00 p.m. – 6:45 p.m.
Last year, Shawn Bass stirred the pot again by stating there were still some deficiencies in the various VDI solutions on the market. Twelve months have passed and Citrix has introduced XenDesktop 4. What does the VDI landscape look like today?
The Debate is Raging – Concurrent vs. User Based Licensing Models
Track: Geek Speak Live!
Date / Time: Wednesday May 12, 1:00 pm – 1:45 pm
Room: Solutions Expo Hall – Lounge
Speakers: Stephen Greenberg, Joe Shonk, Chris Wolf
With the advent of application virtualization and cloud computing solutions, licensing has become a key factor in project cost modeling and technical implementations. A clear understanding of the available options, and how and where to apply them, is critical. This panel of industry experts will present and debate the various licensing models and use cases, and contrast their views with current industry practices. Topics will include the available types of license models and their relative merits, common challenges posed by licensing in planning and implementing products, and the experts’ views on industry standards and their feedback to the major vendors.
Server-hosted Virtual Desktops: What the Vendors aren’t Telling You
Session Number: SYN313
Track: Desktop Virtualization
Date / Time: Wednesday May 12, 4:30 pm – 5:20 pm
Room: Moscone West Convention Center – Moscone 2003-2005
Session Type: Breakout
Speakers: Simon Bramfitt, Chris Wolf
Many organizations are planning or implementing server-hosted virtual desktop solutions. In the emerging client virtualization market, it can be difficult to assess vendor platforms due to a lack of defined and accepted standards. In this session, Burton Group analysts Chris Wolf and Simon Bramfitt share the firm’s benchmark for evaluating server-hosted virtual desktop solutions, including criteria covering deployment, management, performance, integration, and user experience. The session concludes with a breakdown and scorecards of popular solutions, including the current Citrix and VMware products.
Heterogeneous Virtual Infrastructures – Practical Solutions for Managing Multi-hypervisor Environments
Session Number: SYN207
Track: Datacenter and Cloud
Date / Time: Thursday May 13, 2:00 pm – 2:50 pm
Room: Moscone West Convention Center – Moscone 2014
Session Type: Breakout
Speaker: Chris Wolf
While standardizing on a single virtual infrastructure sounds ideal, many enterprises face the reality of managing multiple virtual infrastructures both inside and outside their datacenters. Multiple virtual infrastructures may reside across business units (i.e., server and desktop), departments, or sites. In addition, client hypervisors (e.g., VMware Player, Microsoft Virtual PC, and Sun VirtualBox) further compound hypervisor and VM image management.
In this session, you will learn:
- Strategies, architectures, and tools for managing heterogeneous virtual environments
- Common interoperability pitfalls and workarounds
- Security, data protection, and recovery considerations
With the postponement of Catalyst Europe, I had the opportunity to virtually attend the Microsoft MMS conference keynotes on Tuesday and Wednesday of this week. MMS has long been one of Microsoft’s best conferences, and this year didn’t disappoint. I’m not going to rehash the major announcements, but you can read the full details in the following Microsoft System Center team blog posts:
- System Center Service Manager 2010: An Integrated Platform for IT Service Management
- Configuration Manager vNext: User Centric Client Management
- Mobile Device Management with Configuration Manager vNext
- Configuration Manager 2007 R3 Beta released
- What’s coming up with the next versions of SCOM and VMM!
- User Centric, and System Center Configuration Manager vNext
- MMS 2010 Kicks Off in Las Vegas
Tuesday Keynote – Bob Muglia
- Bob opened by stating that Microsoft has been building dynamic IT management for the last seven years as part of its Dynamic Systems Initiative. Microsoft is essentially underlining the fact that it is not a newcomer to dynamic IT and cloud, and is playing on its strength in systems management.
- Bob highlighted the need for standard service models, and I agree. I started discussing this topic with vendors in 2008 and blogged about standard models in the security context early last year. I recently discussed this issue in my posts on metadata standards and the infrastructure authority. Still, vendors need to move beyond talk about standards for service delivery, metadata, and application interfaces, and deliver them. Mobility to and among cloud infrastructure-as-a-service providers requires these standard models. It’s time for vendors to show their hands, even if they’re holding proprietary service delivery models, metadata sets, and interfaces today. There are far too many competing interests to expect vendors to agree on an industry standard any time soon. That said, progress is being made on the standards front. SNIA’s Cloud Data Management Interface (CDMI), the DMTF’s Open Cloud Standards Incubator, and the Cloud Security Alliance’s work on standard cloud security models are three good examples.
- It would be nice if Microsoft offered complete documentation on how to reproduce its on-stage demos. The orchestration practices that were demonstrated are of high value, and Microsoft should share the configuration information with its clients.
- Microsoft’s demos were Microsoft-centric, as expected. I would like to see Microsoft demonstrate integration with third-party management products, which would strengthen its position on interoperability. Most Gartner clients are not homogeneous Microsoft shops; demonstrating orchestration capabilities across multi-vendor management stacks would speak to the needs of the typical enterprise organization. If Microsoft doesn’t want to do this at a conference, then why not offer this information online?
- I thought Microsoft made a great move in acquiring Opalis, and liked seeing the Opalis integration and System Center Service Manager 2010 shown on-stage.
- Microsoft demonstrated long-distance VM live migration, and in the process Muglia took a swipe at VMware, noting that moving VMs to new sites requires deep integration and validation across all management services. In the demo, Microsoft showed processes such as validating that a recent backup had completed before allowing the inter-site live migration to continue (I’ve included a sketch of this kind of validation gate just after this list). While the demo was impressive, I would have been even more impressed if Microsoft had validated the recent backup by integrating with a third-party tool such as NetBackup.
- Microsoft is talking cloud using the terms “shared cloud” and “dedicated cloud.” There are so many disparate terms out there for cloud that pretty soon Rosetta Stone will release a CD on speaking cloud. The Gartner/Burton teams have been working closely on defining a core set of cloud terminology, and it’s important for vendors in the space to adopt common definitions.
- Edwin Yuen demonstrated System Center Virtual Machine Manager (SCVMM) vNext, which will include drag-and-drop capabilities for deploying multi-tier applications. The demo was powerful, but my existing concerns about SCVMM went unanswered. Today the product is not extensible, and it does not support the Open Virtualization Format (OVF) industry standard; I’m hoping those two features make it into SCVMM vNext.
- Microsoft’s demo of cloud service management looked solid from the administrator’s point of view, but nothing was shown from the consumer’s point of view. IT service delivery requires the presentation of services to consumers using intuitive interfaces that the customer understands. Microsoft has yet to show a consumer-centric view of how consumers will interact with Microsoft cloud service management.
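Picking up on the live migration bullet above, here is a minimal sketch of what such a pre-migration validation gate could look like. Everything in it is hypothetical; the function names stand in for whatever orchestration and backup APIs a real deployment would expose, and this is not how Microsoft’s demo was actually built.

```python
from datetime import datetime, timedelta

def last_backup_time(vm_name):
    # Stand-in for querying a backup catalog (ideally a third-party tool
    # such as NetBackup); here it simply returns a canned timestamp.
    return datetime.now() - timedelta(hours=6)

def live_migrate(vm_name, destination_site):
    # Stand-in for the hypervisor's inter-site live migration call.
    print(f"migrating {vm_name} to {destination_site}")

def migrate_with_validation(vm_name, destination_site,
                            max_backup_age=timedelta(hours=24)):
    # Gate the migration on a sufficiently recent backup, as in the demo.
    if datetime.now() - last_backup_time(vm_name) > max_backup_age:
        raise RuntimeError(f"{vm_name}: no recent backup; migration blocked")
    live_migrate(vm_name, destination_site)

migrate_with_validation("erp-db-01", "secondary-site")
```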
Wednesday Keynote – Brad Anderson
- Brad opened by talking about how the Windows 7 release was the most significant event in the desktop space in a very long time. I would counter that equally significant was Microsoft’s announcement to end-of-life (EOL) Windows XP in April 2014. The XP EOL announcement put IT organizations “on the clock” to replace their existing client endpoint OS, and in many cases re-architect all major aspects of user experience and application delivery.
- There was a good discussion about power management, but one interesting area of research that was not mentioned was Microsoft’s work on in-guest VM power management. Take a look at Joulemeter for more information.
- I liked hearing Brad talk about the future desktop representing a convergence of services. This is a concept I recently discussed in the post “The Next Gen Desktop’s Cloudy Future.”
- I saw a bit of irony in seeing Microsoft discuss Hyper-V R2 SP1’s dynamic memory feature on stage. A year ago Microsoft was solidly against VMware’s memory overcommit feature, which allows VMs to share a physical pool of memory on a server. Jeff Woolsey did a nice job describing Hyper-V’s dynamic memory capabilities in his recent posts on the subject.
- Microsoft demonstrated the RemoteFX technology that was acquired from Calista. It will be interesting to see how quickly Microsoft’s IHV partners offer a shipping solution. Several have stated their intent to support the technology.
- Microsoft demonstrated their new Windows Intune product – a cloud service for managing PCs. While I like where Microsoft is taking PC management, I’m still disappointed that they have yet to address desktop OS licensing for cloud-based desktop-as-a-service (DaaS) deployments. Device-based desktop OS licensing is incompatible with the on-demand and device-agnostic nature of cloud service delivery, and Microsoft needs to address this issue sooner rather than later.
- I was disappointed by the System Center Service Manager demonstration on compliance validation. The demo included no mention of virtualization or virtual infrastructure, which is the default x86 application platform of many of our clients. If the product is not providing controls and validation capabilities for multi-tenant VMware vSphere, Microsoft Hyper-V, and Citrix XenServer environments, then it is not ready for prime time.
Overall, I was very impressed with the conference keynotes. System Center Service Manager and Microsoft’s increasing integration of the Opalis software are two areas to watch. Muglia’s talk about standard service delivery models also leads me to believe that Microsoft is poised to aggressively go after the cloud provider space. The release of Microsoft’s Dynamic Infrastructure Toolkit and the growing number of partners in Microsoft’s Dynamic Data Center Alliance (DDA) are proof of that. What did you think of MMS 2010? I’d love to hear your thoughts.
There are some physical things in life that I like to associate with as part of my identity. My Jeep Wrangler is one of them. My laptop is not. How I live and work is not defined by a physical compute device, but rather by my online identity. Email, Twitter, accessing documents and meeting notes on SharePoint, tracking client engagements in Salesforce, evaluating and testing virtualization solutions in my lab – those are all part of my day. My PC? It’s just what connects me to my online work space and personal space. Sorry Latitude D820, but you mean nothing to me.
From talks with colleagues and clients, I know I’m not alone. My device doesn’t define me, and it certainly doesn’t hold all the data and applications I need to do my job. So what does all this have to do with Microsoft licensing? Well, in my opinion, they’re getting it.
I’m not going to rehash much of the excellent commentary already out there on yesterday’s Microsoft announcement. See these posts for more background:
- The sleeping giant awakes – Microsoft gets desktop virtualization right (Simon Bramfitt)
- Microsoft announces changes in desktop/server virtualization and VDI strategy – UPDATED (Alessandro Perilli)
- Microsoft VDI Inertia (J. Tyler Rohrer)
As Simon noted in his blog, VECD licensing is going away and the right to run a desktop OS as a server-hosted virtual desktop is now included with Microsoft’s Software Assurance (SA). For devices not covered by SA, organizations can purchase Virtual Desktop Access (VDA) licenses at a cost of $100 per device per year. Furthermore, Microsoft’s license transfer restrictions still apply. So if you want to license a virtual desktop for external contractors, you can assign a VDA license to each contractor system. If a contractor completes a project and leaves, you can re-assign the license to another contractor’s device (you just can’t reassign the license more than once per 90 days).
Simon also mentioned that “extended roaming rights” is the big deal in the announcement, and it is. While not perfect (Simon describes the issues), it’s a step toward licensing a desktop for a user and not a device (sure, technically we’re still talking about device licensing, but the user can access his desktop from a myriad of personal devices). So let’s call it an alpha release of per-user licensing. Does a per-user model solve all of our problems? No. But organizations want it offered as a choice, and it’s good to see that Microsoft is listening to its customers.
Looking past the good news that came out of yesterday’s announcement, considerable work remains. Microsoft has still not addressed the service provider market, and far more clarity is needed for licensing virtual desktops on shared infrastructure. For example, if a user needs a Windows desktop for a week, he essentially has to pay for 90 days’ worth of licensing. Why? Even with VDA, the service provider technically has to associate the VDA with the subscriber’s physical device and can’t transfer it for another 90 days. The result is that desktop-as-a-service (DaaS) is far more costly than it should be. This problem will grow once companies like HP, IBM, and Dell offer client hypervisors and look to offer services where user desktop VMs are automatically replicated from their personal systems to the cloud. Again, this takes us back to a physical device not defining the user. The IHVs, for their part, get the opportunity to sell additional services to make up for the low margins they see on hardware sales. Sooner or later Microsoft will have to address this issue, and let’s hope it’s sooner.
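To put rough numbers on that, here is a minimal sketch of the 90-day effect. The $100 per device per year VDA price comes from the announcement; the one-week scenario and the pro-rata comparison are purely illustrative.

```python
VDA_ANNUAL_COST = 100.00   # USD per device per year, per the announcement
MIN_HOLD_DAYS = 90         # a VDA license can't be reassigned for 90 days

def effective_cost(days_needed):
    # The reassignment lockout means a short-term subscriber still consumes
    # the license for at least 90 days.
    billable_days = max(days_needed, MIN_HOLD_DAYS)
    return VDA_ANNUAL_COST * billable_days / 365

print(f"1-week desktop, pro-rata:       ${VDA_ANNUAL_COST * 7 / 365:.2f}")  # ~$1.92
print(f"1-week desktop, 90-day lockout: ${effective_cost(7):.2f}")          # ~$24.66
```

Roughly a 13x premium for a one-week desktop – that is the gap service providers and their subscribers are stuck absorbing today.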
On the support side, Microsoft’s internal application teams need to step up and offer clear support statements for the leading client and application virtualization platforms. Officially supporting App-V would be a nice first step. The push for Microsoft client applications to fully support the major client virtualization solutions must come from the top. I’m hopeful that Microsoft’s key executives will make that push.
Finally, let’s not forget that even with SA, Windows Server OS licenses cannot be virtualized without running into mobility restrictions. Instead, most large enterprises have to upgrade to Datacenter edition licenses (for practical purposes) for the sake of virtualizing. I talked about this issue extensively in this post, so I won’t repeat the details here. If lifting licensing constraints for client virtualization is good, I’d argue that doing the same for servers would be even better, especially if you look at the number of servers already virtualized today.
Microsoft customers – your voice is being heard. Now’s a great time to pat Microsoft on the back. However, it’s not time to back off. Keep communicating your licensing needs to Microsoft. It’s clear that they are listening, and taking steps to make your life easier.
This post continues the discussion in my “The Cloud Mystery Machine” post.
Cloud computing and hardware infrastructure as a service (HIaaS), in theory, should allow organizations to move workloads to the cloud and manage licensing just as they always have with managed hosting services in the physical world. However, the problem with current licensing models such as Microsoft’s Services Provider License Agreement (SPLA) is that they require licenses to be bound to physical hardware. Binding licenses to physical hardware removes IT organizations’ ability to manage licensing when they have no idea which hardware their applications reside on (it may change from day to day). So far, service providers have dealt with the licensing issue by building licensing costs into their service fees. In other words, you need to tell the service provider your application needs, and the provider must manage licensing compliance on your behalf. If you want to take your already-purchased Microsoft licenses to the cloud, you’ll need to lease dedicated physical hardware from the service provider.
Asking service providers to take on license management for thousands of applications is impractical and is one more barrier to public cloud infrastructure adoption. Some service providers may support a few dozen applications today, but many organizations have thousands. 2010 is the year for Microsoft to show industry leadership and change licensing so that application license management is transferred from the service provider to the end-user organization: the SP provides the virtual infrastructure; the organization uses it. Application licensing based on concurrency or user seats has always been infrastructure agnostic (see the sketch below). Heck, Microsoft already has a similar model with its Client Access License (CAL). All that’s needed is to remove the physical binding requirement for application server licenses. As I’ve said before, we are moving away from device-centric computing. We’re shifting away from hardware as the definition of a user’s working environment, and that includes both client and server applications. It’s time the major players in the enterprise application market evolve their licensing policies to meet the agility requirements of today’s enterprise.
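As a concrete illustration, here is a minimal concurrency-based compliance check (hypothetical data, no particular vendor’s rules). Nothing in it depends on which host, hypervisor, or provider runs the application.

```python
def peak_concurrency(events):
    """events: time-ordered (timestamp, delta) pairs, +1 login / -1 logout."""
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# The same check holds on premises or at a cloud service provider.
session_log = [(1, +1), (2, +1), (3, +1), (4, -1), (5, +1)]
licenses_owned = 3
print(peak_concurrency(session_log) <= licenses_owned)  # True: peak of 3 users
```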
There’s been plenty of fanfare surrounding the release of Citrix XenDesktop 4.0 this week. I plan to blog about the XenDesktop 4.0 technical features after I’ve spent some quality time with them in my lab over the next couple of weeks. However, I’d like to contribute to the XenDesktop 4 conversation by talking about its impact on client software licensing.
There’s no question that user expectations for how they access corporate resources are changing. Sometimes I need to quickly grab or view a doc from my iPhone. Other times I need to view resources on my laptop while not connected to any network. And of course, most of the time I’m in my office and can access my applications while fully plugged in. As I see it, I’m just scratching the surface. Thin or zero clients, netbooks, and remote Internet kiosks (such as at a hotel or conference) are increasingly becoming part of the application access picture.
So what does this all mean? Users just want to get at their data and applications on their terms. IT should be able to provide that level of service, and from a technology perspective we’re getting there. Sure, I’d like to see capabilities for deploying endpoint security to devices such as iPhones before considering them enterprise-ready alternatives for application access (and a Bluetooth keyboard wouldn’t hurt either). However, for all the gains we’ve made in technology, client application vendor licensing and support remain among the primary barriers to wide-scale desktop virtualization deployments.
Microsoft, for example, still licenses desktop OSs and applications by device, or installed instance. Sure, the Vista Enterprise Centralized Desktop (VECD) licensing model includes a “home access” provision that allows users to access their virtual desktops from their home computers. However, this licensing model still expects the organization to count devices. I’ve talked to Microsoft about my concerns over per-device licensing, and for the most part we are in full agreement that Microsoft desktop OS and application licensing will need to fundamentally change. However, for an organization as large as Microsoft, this is going to take some time. Any licensing change has a huge impact on existing OEM and sales channels, which is why licensing changes are often incremental in nature. Still, Microsoft has an opportunity to lead the way and show other vendors how to license software for today’s increasingly mobile user. I hope they embrace that opportunity.
The user experience is moving toward an era where user data and applications live in the cloud. In other words, the cloud is their desktop. Sure, this could be a server-hosted Windows XP or Windows 7 instance or something different (e.g., client-hosted desktop, virtual applications, or a mix of applications and services delivered by internal IT and external PaaS or SaaS providers). The bottom line – the way we deliver applications and services to users is fundamentally changing. Now is the time for vendors to define policies that meet the needs of how their users will access applications. I believe the best model for the emerging virtual desktop and application delivery methods is a per-user model. The point of virtualization and cloud is to abstract (or decouple) the physical dependencies of IT services and applications, and we’re very close to being able to seamlessly achieve this with technology. With many organizations planning major desktop virtualization rollouts in 2010, it’s time for application vendors to rework their licensing models. I am advising clients to draw a hard line on licensing requirements when they put prospective vendors through the RFP process, and I advise you to do the same (i.e., require per-user licensing).
Reworking licensing to be user-centric needs to be a top priority among client application vendors heading into 2010. Vendors that insist on binding licenses to physical devices in an increasingly virtualized world are not part of the solution. They’re part of the problem.
I’ve been covering Oracle licensing and support issues in x86 virtualized environments for quite some time, beginning with the January 2008 report “Virtualization Licensing and Support Lethargy: Curing the Disease That Stalls Virtualization Adoption.” You can also view these earlier blog posts for additional background:
- Oracle and the Big Elephant (August 2008)
- A New Year’s Resolution for Oracle (December 2008)
- Oracle Honors its New Year’s Resolution: Non-Oracle x86 Hypervisors are Now Supported (May 2009)
A few weeks ago one of our clients pointed me to a recently published Oracle support article (Metalink 794016.1, published March 27, 2009), which prompted me to write my previous post. That’s when the fun really began. After my last post on May 6th, Oracle published a completely revised version of the Metalink document on May 8th. The document kept the same document ID (794016.1), but had an entirely different title and content.
For context, the March 27th version of the Metalink document was titled “Platform Vendor Virtualization Technologies and Oracle E-Business Suite” while the revised May 8th edition of the document was titled “Hardware Vendor Virtualization Technologies on non x86/x86-64 Architectures and Oracle E-Business Suite.” If you recall, the first iteration of the document described how x86 virtualization technologies were supported with the following statement:
The use of platform vendors’ virtualization technologies (both software and hardware based) to host Oracle E-Business Suite 11i and R12 is covered by Oracle’s policy with regards to 3rd-party products – that is, they are ‘not explicitly certified, but supported’.
The support document listed Microsoft, VMware, and Citrix as examples. In the May 8th edition of the support document, the above statement was revised to:
The use of hardware vendors’ virtualization technologies to host Oracle E-Business Suite 11i and R12 follows the same policy as Oracle’s policy with regards to customizations – that is, they are ‘not explicitly certified, but supported’.
Examples of x86 virtualization hypervisors were replaced by the following statement:
This document provides a statement regarding Oracle E-Business Suite (11i, R12) support of Hardware Vendor Virtualization technologies on non x86/x86-64 systems.
The bottom line – the revised support document went from describing support for x86 hypervisors to ignoring them altogether, with the exception of Oracle’s own hypervisor, Oracle VM (OVM). I was told that the revisions were needed to address confusion, but feedback I received from numerous Burton Group clients made it clear that there was no confusion until the May 8th revision was published.
Since early last week, I have had numerous calls with Oracle on the subjects of both licensing and support, and unfortunately the news isn’t all good. Let’s start with the positive. According to Oracle, VMware’s ESX hypervisor “has been supported since November 2007.” As proof, you can view Oracle Metalink document 249212.1 (note that you’ll need an Oracle support account to view the doc). The document states the following:
Oracle has not certified any of its products on VMware virtualized environments. Oracle Support will assist customers running Oracle products on VMware in the following manner: Oracle will only provide support for issues that either are known to occur on the native OS, or can be demonstrated not to be as a result of running on VMware.
If a problem is a known Oracle issue, Oracle support will recommend the appropriate solution on the native OS. If that solution does not work in the VMware virtualized environment, the customer will be referred to VMware for support. When the customer can demonstrate that the Oracle solution does not work when running on the native OS, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
The statement goes on to say that Oracle RAC is not supported on VMware environments. If you’re looking for additional background on Oracle support for VMware environments, I suggest reading these other perspectives:
- “What the Oracle / VMware support statement really means…and why” (Jeff Browning)
- “Oracle on VMware – a manifesto…” (Chad Sakac)
- “EMC attacks Oracle on its VMware support policy” (Alessandro Perilli)
Regarding VMware support, here’s the translation – if you call for support and you have a known bug, you’re good to go. If you’ve found a new (previously unknown) bug, you’ll first have to reproduce the fault on physical hardware before Oracle will help you. Compared to other vendors that support enterprise applications on VMware or other x86 virtualization environments, this is one of the most restrictive policies out there. Most enterprise software vendors require faults to be reproduced on bare metal only when they directly relate to performance that could be attributed to the virtualization layer.
The recent Virtual Iron acquisition further cements the fact that Oracle is serious about virtualization. Microsoft and Citrix both have clear public support statements about virtualization and the hypervisors they support (I’m mentioning these two vendors because they’re both virtualization vendors and enterprise software vendors). Oracle needs to loosen its support restrictions for VMware and all x86 virtualization environments, and needs to broaden its list of supported (but not certified) hypervisors to include Microsoft, Citrix, Novell, and Red Hat.
Finally, as I previously mentioned, the larger problem here is licensing. Oracle is requiring customers who wish to deploy Oracle products on x86 hypervisors to license Oracle software by physical server CPUs. Suppose you had two Oracle Database VMs (each with two virtual CPUs) running on a two-node ESX cluster built from two four-socket servers. Since it’s possible that you’d have a VM on each node, you’d need to purchase licensing to cover all 8 sockets. If you ran Oracle’s hypervisor, you could license by virtual CPU; however, this is only allowed if you pin the VM to fixed CPU cores by hard-coding CPU bindings. You can read more about that here. This does create a slight advantage for OVM over competing products, but by binding a VM to one or more physical CPU cores, you give up advanced virtualization functionality such as live migration. If I’m using application-level high availability features, this configuration may not be a big deal and would in turn favor Oracle; however, it is far from ideal.
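Here is the math from that example as a quick sketch (illustrative only – real Oracle per-processor licensing also involves per-core factors, which I’m ignoring here):

```python
SOCKETS_PER_HOST = 4   # two four-socket ESX hosts in the cluster
CLUSTER_HOSTS = 2
VCPUS_PER_VM = 2       # two Oracle Database VMs, two vCPUs each
VM_COUNT = 2

# Per physical CPU: every socket either VM could land on must be licensed.
print(SOCKETS_PER_HOST * CLUSTER_HOSTS)  # 8 sockets

# Per virtual CPU (as Oracle permits on OVM with pinned vCPUs): only the
# virtual CPUs actually allocated to the VMs count.
print(VCPUS_PER_VM * VM_COUNT)           # 4 vCPUs
```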
Oracle’s competitors in the database arena allow their products to be licensed by virtual CPU without requiring physical bindings (see Microsoft’s and IBM’s policies), and so should Oracle. Doing so allows VMs to move about the physical infrastructure as required to support IT operations. Binding enterprise software licenses to physical assets is a legacy licensing model, and Oracle is practically alone in its policy.
Oracle’s strategy with regard to licensing is one that I’ve seen before. Oracle is effectively taxing organizations for running Oracle Database in a VM. In most cases, organizations will have to pay increased licensing fees. This policy hurts the customer, and in my opinion is an attempt to stall market adoption while Oracle finishes building out its own x86 virtualization platform.
Oracle, it’s time to classify all x86 hypervisors as “hard partitioning.” Our clients are increasingly deploying enterprise applications on x86 virtualization hypervisors. You’re putting them in a tough position, and many consider the virtual infrastructure the foundation for their cloud architecture. Some clients have told me they are now considering moving forward with DB2 or SQL Server because they are unwilling to pay a penalty to run Oracle on any x86 hypervisor. In the end, our clients shouldn’t have to make that choice. They should have the freedom to run the applications they want on the platform they want. This licensing policy is affecting the bottom line of our clients and could ultimately affect your bottom line too. It shouldn’t have to come to that. Let’s just “right the wrong.” Besides, your “Partitioning” document, which describes software licensing for virtual environments, was last updated in January 2008. In response to my last blog post, you were able to revise a support statement within two days. How about taking the time to revise a licensing policy that is clearly outdated and places an unnecessary burden on our clients?
In case you haven’t seen it, Oracle issued a major product support update last month – Platform Vendor Virtualization Technologies and Oracle E-Business Suite – Metalink Note 794016.1 (note that an Oracle support account is needed to view the update). The bottom line – Oracle now offers best-effort support for all of its E-Business Suite applications on any x86 hypervisor. Shocked? Here’s a snippet of the support statement:
The use of platform vendors’ virtualization technologies (both software and hardware based) to host Oracle E-Business Suite 11i and R12 is covered by Oracle’s policy with regards to 3rd-party products – that is, they are ‘not explicitly certified, but supported’.
What this means is that while these technologies are not certified, Oracle will not turn away a customer reporting an issue solely due to the use of these technologies. When possible Oracle will triage and attempt to diagnose the issue reported – Oracle support may attempt to replicate the issue in a non-virtualized environment and work with the customer to verify if the problem exhibits in such an environment.
Any specific problem isolated to the virtualization software (i.e. a problem that cannot be reproduced in a standard, non-virtualized environment) will need to be referred to the specific vendor for resolution.
Customers should review all relevant Oracle documentation on the use of such virtualization technologies for known issues and limitations with respect to EBS technology components such as the database, RAC, etc.
Customers intending to use 3rd-party products covered under this policy in production environments should conduct appropriate levels of testing and also have contingency plans to revert to a standard certified configuration (that is, non-virtualized environment)…
So there you have it. Back in December I suggested that Oracle make two New Year’s resolutions:
- Offer best effort support for all major x86 server virtualization hypervisors
- Offer virtual CPU-based licensing for all of its server applications
The year isn’t even halfway over, and Oracle can cross the first resolution off its list. Next up has to be software licensing. Oracle considers its own x86 hypervisor, Oracle VM (OVM), a platform capable of supporting hard partitioning (see the Oracle “Partitioning” document for more information). By its definition of “hard partitioning,” Oracle allows virtual CPU-based licensing on OVM, but does not allow it on other popular x86 hypervisors such as VMware ESX, Microsoft Hyper-V, or Citrix XenServer. Oracle also allows virtual CPU-based licensing on Amazon EC2, which runs the open source Xen hypervisor (you can read more about that policy here). Updating the support policy was a great first step, and Oracle should be commended for responding to the needs of its customers.
Now how about knocking out New Year’s resolution #2 before the end of June? Oracle, I know you can do it. Your friends in the enterprise software space that offer CPU-based licensing, such as IBM and Microsoft, both allow licensing by virtual CPU on any major hypervisor. Binding a license to a physical CPU is “so 2007.” Oracle, no doubt you’re in the middle of a major makeover, and acquiring Sun was a good move. I must say, with the Sun portfolio, I love your wardrobe. However, your licensing policy doesn’t reflect your new look or attitude. To stay with the wardrobe analogy, you’re wearing some great clothes, but you need to lose the mullet.
Oracle, let’s complete the makeover. Modernize your licensing policies and your body of actions will show that you are a company that is truly one with the times.
Note: Within two days of this post’s publication, Oracle made a massive revision to the support document “Platform Vendor Virtualization Technologies and Oracle E-Business Suite” (Metalink Note 794016.1). Please see my latest post for the most up-to-date analysis of Oracle licensing and support.