Archive for category Desktop Virtualization
I submitted four proposals for this year’s VMworld conference and learned that three of them are up for voting. I think the fourth is in a VMworld black hole. I wasn’t notified that it was accepted and it’s not up for voting either. I guess I’ll know soon enough… If you’re curious, that proposed session was titled “Private Cloud Security: Vendor Secrets and Hypervisor Competitive Differences.”
Anyway, if you’re not tired of my jokes and would like to see more of me at VMworld North America or VMworld Europe, you can vote for my sessions online. Note that you will need to register for a VMworld.com account if you don’t have one already.
Here are the descriptions of my sessions that are up for voting.
Server-hosted Virtual Desktops: What the Vendors Aren’t Telling You
Many organizations are beginning to implement or plan server-hosted virtual desktop solutions. Vendor platform assessments in the emerging client virtualization market are often difficult due to a lack of defined and accepted standards. In this session, Burton Group senior analyst Chris Wolf and analyst Simon Bramfitt share Burton Group’s benchmark for evaluating server-hosted virtual desktop solutions, including criteria for evaluating a solution’s deployment, management, performance, integration, and user experience capabilities. The session concludes with a breakdown and scorecards of popular vendor solutions, including the current Citrix and VMware products.
Simon Bramfitt and I will repeat and expand on our session from Citrix Synergy, adding analysis of Microsoft VDI and Quest vWorkspace alongside VMware View and Citrix XenDesktop. If time permits, we will assess more products too. You can vote for this session here. If you search for “Simon Bramfitt” you will find the session. I’m not sure why my name isn’t listed, but I assume the voting system only accommodates one speaker.
Extreme Makeover: Data Protection Edition
Applying legacy data protection architectures to today’s heavily virtualized data center comes at a significant price in terms of both performance and consolidation density. It’s time for organizations to reevaluate existing data protection practices and leverage new technologies to improve data recovery and lessen or eliminate the performance tax imposed by many existing data protection architectures. This session breaks down modern VM data protection solutions, including VMware’s vStorage API for Data Protection, array-level snapshots and replication, and third-party enterprise backup software solutions. Attendees will be exposed to common data protection pitfalls as well as successful blueprints for modern VMware data protection architectures. Chris Wolf has been architecting data protection solutions for enterprise virtualization environments since 2002 and includes an abundance of lessons learned and best practices drawn from real-world implementations in this session.
You can vote for this session here.
Cloud Futures: The Infrastructure Authority
To realize the potential of private cloud, infrastructure must be capable not just of dynamically provisioning and optimizing systems, but of doing so without violating any security, regulatory, or organizational policy constraints in the process. In many enterprise environments, dynamic IT consists of several disjointed solutions and, oftentimes, blind faith that policy, security, or regulatory constraints will not be broken. The bottom line – someone has to be in charge. The infrastructure authority (IA) is the future nerve center of cloud infrastructure as a service (IaaS) operations. Among its many roles, the IA:
- Provides a central metadata store
- Leverages common data models to request or offer services
- Maintains physical, virtual, and policy dependency maps
- Ensures security and regulatory compliance
- Ensures that service level requirements are met
- Stores and enforces organizational policy
- Ensures accurate capacity forecasts
- Integrates with third party management and orchestration tools to authorize IT operations such as provisioning or relocation before they proceed
Typical questions answered by the IA include:
- Are security zoning rules checked before live migrating a VM?
- Do any policy restrictions prevent VMs from migrating to different data centers or to public cloud infrastructure?
This session takes a practical look at the emerging role of the IA, and details how existing management frameworks such as VMware vCenter and industry standards such as OVF can be used in this capacity moving forward.
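To make the gatekeeper role concrete, here’s a minimal sketch (in Python, with purely illustrative class names, fields, and policies – not any shipping product’s API) of the kind of pre-flight check an IA might run before an orchestration tool is allowed to proceed with a live migration:

```python
# Hypothetical sketch of an IA pre-flight authorization check.
# All names and policies here are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    security_zone: str    # e.g., "internal" or "dmz"
    datacenter: str
    cloud_eligible: bool  # policy: may this VM leave private infrastructure?

@dataclass
class Host:
    name: str
    security_zone: str
    datacenter: str
    is_public_cloud: bool

def authorize_migration(vm: VM, target: Host):
    """Return (allowed, reason) for a proposed live migration."""
    # Security zoning: VMs may not cross zone boundaries.
    if vm.security_zone != target.security_zone:
        return False, f"zoning violation: {vm.security_zone} -> {target.security_zone}"
    # Policy: some VMs may never be placed on public cloud infrastructure.
    if target.is_public_cloud and not vm.cloud_eligible:
        return False, "policy: VM may not be placed on public cloud infrastructure"
    # Policy: inter-datacenter moves are restricted for non-eligible VMs.
    if vm.datacenter != target.datacenter and not vm.cloud_eligible:
        return False, "policy: VM may not leave its data center"
    return True, "authorized"

vm = VM("payroll-01", "internal", "dc-east", cloud_eligible=False)
host = Host("esx-dmz-07", "dmz", "dc-east", is_public_cloud=False)
print(authorize_migration(vm, host))  # (False, 'zoning violation: internal -> dmz')
```

An orchestration tool would call a check like this before executing the operation; the IA’s value is that the check draws on one authoritative store of dependency maps and policy rather than per-tool configuration.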
You can vote for this session here.
If you’re attending Synergy this year and want to stop by one of my sessions, I thought I would post them here. I’m participating in four sessions at this year’s conference:
Geek Speak Tonight!
Date / Time: Tuesday May 11, 4:30 pm – 6:45 pm
Room: Moscone West Convention Center – Moscone 2000-2002
Speakers: Shawn Bass, Simon Crosby, Rick Dehlinger, Martin Duursma, Chris Fleck, Stephen Greenberg, Michael Harries, Alexander Ervik Johnsen, Harry Labana, Brian Madden, Brad Pedersen, Rene Vester, Chris Wolf
A series of open, lively discussions led by respected industry thought leaders will kick off Synergy, giving you a chance to start the conference with a breath of fresh, unfiltered air.
CTPs and CTOs on the future of desktops
4:30 p.m. – 5:15 p.m.
The desktop is primed for unprecedented change in the near future. Join a high-powered discussion and debate among CTPs and CTOs on exactly how that future will unfold.
Cloud computing: what makes it successful and what’s on the roadmap?
5:15 p.m. – 6:00 p.m.
Cloud computing is the term du jour in our industry. Where is the real activity happening? How will the cloud unfold over the next few years? How will the cloud affect you? Where are the opportunities for Citrix customers and partners? Join us for a discussion on cloud computing’s hot areas and risks, and what’s coming next.
XenDesktop 4: a new look at the VDI landscape
6:00 p.m. – 6:45 p.m.
Last year, Shawn Bass stirred the pot again by stating there were still some deficiencies in the various VDI solutions on the market. Twelve months have passed and Citrix has introduced XenDesktop 4. What does the VDI landscape look like today?
The Debate is Raging – Concurrent vs. User Based Licensing Models
Track: Geek Speak Live!
Date / Time: Wednesday May 12, 1:00 pm – 1:45 pm
Room: Solutions Expo Hall – Lounge
Speakers: Stephen Greenberg, Joe Shonk, Chris Wolf
With the advent of application virtualization and cloud computing solutions, licensing has become a key factor in project cost modeling and technical implementations. A clear understanding of the available options, and how and where to apply them, is critical. This panel of industry experts will present and debate the various licensing models and use cases, and contrast their views with current industry practices. Topics will include the available types of license models and their relative merits, common challenges posed by licensing in planning and implementing products, and the experts’ views on industry standards and their feedback to the major vendors.
Server-hosted Virtual Desktops: What the Vendors Aren’t Telling You
Session Number: SYN313
Track: Desktop Virtualization
Date / Time: Wednesday May 12, 4:30 pm – 5:20 pm
Room: Moscone West Convention Center – Moscone 2003-2005
Session Type: Breakout
Speakers: Simon Bramfitt, Chris Wolf
Many organizations are planning or implementing server-hosted virtual desktop solutions. In the emerging client virtualization market, it can be difficult to assess vendor platforms due to a lack of defined and accepted standards. In this session, Burton Group analysts Chris Wolf and Simon Bramfitt share the Burton Group’s benchmark for evaluating server-hosted virtual desktop solutions, including criteria for evaluating deployment, management, performance, integration, and user experience. The session concludes with a breakdown and scorecards of popular solutions, including the current Citrix and VMware products.
Heterogeneous Virtual Infrastructures – Practical Solutions for Managing Multi-hypervisor Environments
Session Number: SYN207
Track: Datacenter and Cloud
Date / Time: Thursday May 13, 2:00 pm – 2:50 pm
Room: Moscone West Convention Center – Moscone 2014
Session Type: Breakout
Speaker: Chris Wolf
While standardizing on a single virtual infrastructure sounds ideal, many enterprises face the reality of managing multiple virtual infrastructures both inside and outside their datacenters. Multiple virtual infrastructures may reside across business units (e.g., server and desktop), departments, or sites. In addition, client hypervisors (e.g., VMware Player, Microsoft Virtual PC, and Sun VirtualBox) further compound hypervisor and VM image management.
In this session, you will learn:
- Strategies, architectures, and tools for managing heterogeneous virtual environments
- Common interoperability pitfalls and workarounds
- Security, data protection, and recovery considerations
Over the past twelve months, the innovation in the client virtualization space has been pretty remarkable, and several solutions will be on display at next week’s Citrix Synergy conference. Because of the sheer volume of vendors on the show floor, I thought I’d point out vendors that may be flying under your radar but are worth visiting. Granted, there are literally hundreds of vendors in the space and most offer some value to your virtual infrastructure. Rather than point out the obvious ones that you’d visit anyway (e.g., Citrix, VMware, Microsoft, HP, Wyse, AppSense, Symantec, Quest Software, and RES Software), I thought I would point out some of the not-so-obvious. I know… get on with it. So here they are.
Virtual Bridges will be showing the 4.0 release of their VERDE suite. VERDE is a server-hosted virtual desktop solution that competes with products such as Citrix XenDesktop, VMware View, and Quest vWorkspace. What’s unique about VERDE is that its backend infrastructure has no Windows requirements, which has appealed to cost-sensitive organizations and service providers. VERDE 4.0 has some interesting features such as CloudBranch, which allows organizations or service providers to support low-bandwidth WAN connections by deploying a proxy-like caching server to a remote site. This approach allows organizations to retain centralized management while serving up virtual desktops over the remote office LAN – key to satisfying user experience requirements. I blogged about this type of approach last year. There’s quite a bit in the core architecture, including Windows, Linux, and Mac endpoint support and even local desktop support (like VMware View offline desktops or MokaFive) that can be booted from a USB drive. For more information on VERDE, take a look at Gabe Knuth’s review on brianmadden.com.
Citrix recently made an investment in Kaviza, and since that time (April) Kaviza has been getting considerably more attention. Kaviza offers a VDI-in-a-box solution that is complementary to existing desktop virtualization solutions. With Kaviza, organizations can deploy a single on-premises server to host virtual desktops at a remote site. The solution is hypervisor-agnostic and currently supports VMware vSphere and Citrix XenServer. It gives organizations and service providers a way to deliver virtual desktops to remote facilities without having to worry about WAN connectivity impacting performance. Kaviza was one of the vendors that participated in my Virtual Desktop NAS Vendor Challenge last year, and you can read more about their solution in this post.
RingCube was another participant in the Virtual Desktop NAS Challenge and has gotten considerable traction over the last twelve months (you can read their guest post here). Having large enterprises such as ING Bank run RingCube’s vDesk product in production has helped establish RingCube’s credibility. RingCube’s client-hosted virtual desktop solution allows users to run their virtual work space on their endpoint system, leveraging the local OS resources, so a separate VM isn’t needed. The vDesk architecture is closer to OS virtualization in its approach. Without the added overhead of running a separate full-blown VM, organizations like the fact that they can use existing endpoint hardware (without having to upgrade memory, for example) for the vDesk solution.
Wanova was another participant in the VD-NAS Challenge, and their guest post is available here. Wanova leverages some very intelligent streaming technology they call Distributed Desktop Virtualization. The solution centralizes desktop OS and application management and can be used to deploy user environments to physical or virtual endpoints. Wanova’s solution also allows IT to support a single base image for a desktop OS type while also supporting user-installed applications.
Unidesk is an interesting company that started making noise at last year’s VMworld North America conference. Unidesk positions itself as a complement to VMware View and Citrix XenDesktop (you can read about their architecture here), and prides itself on its layering technology. With Unidesk, IT can manage shared non-persistent golden virtual desktop images while still giving users the ability to install their own applications. Supporting user-installed apps on non-persistent images is an extremely difficult engineering challenge (both VMware and Citrix will admit this), and Unidesk claims to have the answer. Their booth is definitely worth a stop when you venture into Synergy’s expo hall.
There will be plenty of attention devoted to the bare metal client hypervisor at Synergy. While folks wait out the general availability of the Citrix XenClient and VMware Client Virtualization Platform (CVP) solutions, they have the opportunity to look at a bare metal client hypervisor shipping today – Virtual Computer’s NxTop. Many of our large enterprise clients see the client hypervisor as a 2012 initiative, but that’s not to say the technology isn’t useful as a small business or department-level solution today. Also, even if your plans for a client hypervisor are further down the road, it’s always good to begin building your knowledge base of the technology and to start thinking about the governance issues it creates (e.g., treatment of personal user VMs on the corporate LAN).
Server Virtualization and Cloud
Synergy is starting to pick up steam as a server virtualization and cloud event, and I didn’t want to ignore some of the innovative vendors in that space either. Vendor booths that I’ll be stopping by include:
Plenty Else to See
Like I said previously, there are plenty of other vendors that are bringing value to the industry. For example, cruising by the McAfee, RSA, and Trend Micro booths is a good idea. All three vendors are bringing considerable innovation to security and compliance in virtual server and desktop infrastructures. In the storage space, I recommend visiting the booths of the three winners of the Citrix Ready StorageLink Challenge: NetApp, HP, and GreenBytes.
Between the emerging solutions, excellent presentations, and always engaging hallway discussions, Synergy is shaping up to be a great conference. I hope to see you there. If I failed to recognize a particular product you find interesting, please post it as a comment.
With the postponement of Catalyst Europe, I had the opportunity to virtually attend the Microsoft MMS conference keynotes on Tuesday and Wednesday of this week. MMS has long been one of Microsoft’s best conferences, and this year didn’t disappoint. I’m not going to rehash the major announcements, but you can read the full details in the following Microsoft System Center team blog posts:
- System Center Service Manager 2010: An Integrated Platform for IT Service Management
- Configuration Manager vNext: User Centric Client Management
- Mobile Device Management with Configuration Manager vNext
- Configuration Manager 2007 R3 Beta released
- What’s coming up with the next versions of SCOM and VMM!
- User Centric, and System Center Configuration Manager vNext
- MMS 2010 Kicks Off in Las Vegas
Tuesday Keynote – Bob Muglia
- Bob opened by stating that Microsoft has been building dynamic IT management for the last seven years as part of its Dynamic Systems Initiative. Microsoft is essentially underlining the fact that it is not a newcomer to dynamic IT and cloud, and is playing on its strength in systems management.
- Bob highlighted the need for standard service models, and I agree. I started discussing this topic with vendors in 2008 and blogged about standard models in the security context early last year. I recently discussed this issue in my posts on metadata standards and the infrastructure authority. Still, vendors need to move beyond talk about standards for service delivery, metadata, and application interfaces, and deliver them. Mobility to and among cloud infrastructure-as-a-service providers requires these standard models. It’s time for vendors to show their hands, even if they’re holding proprietary service delivery models, metadata sets, and interfaces today. There are far too many competing interests to expect vendors to agree on an industry standard any time soon. Still, progress is being made on the standards front. SNIA’s Cloud Data Management Interface (CDMI), the DMTF’s Open Cloud Standards Incubator, and the Cloud Security Alliance’s work on standard cloud security models are three good examples.
- It would be nice if Microsoft would offer a complete set of documentation on how to recreate its on-stage demos. The orchestration practices that were demonstrated are of high value, and Microsoft should share the configuration information with its clients.
- Microsoft’s demos were Microsoft-centric, as expected. I would like to see Microsoft demonstrate integration with third-party management products, which would strengthen its position on interoperability. Most Gartner clients are not homogeneous Microsoft shops; demonstrating orchestration capabilities across multi-vendor management stacks would speak to the needs of the typical enterprise organization. If Microsoft doesn’t want to do this at a conference, then why not offer this information online?
- I thought Microsoft made a great move in acquiring Opalis, and liked seeing the Opalis integration and System Center Service Manager 2010 shown on-stage.
- Microsoft demonstrated long distance VM live migration, and in the process Muglia took a swipe at VMware, noting that moving VMs to new sites requires deep integration and validation across all management services. In the demo, Microsoft was able to show processes such as validating that a recent backup was completed before allowing the inter-site live migration to continue. While the demo was impressive, I would have been even more impressed if Microsoft validated the recent backup by integrating with a third party tool such as NetBackup.
- Microsoft is talking cloud using the terms “shared cloud” and “dedicated cloud.” There are so many disparate terms out there for cloud that pretty soon Rosetta Stone will release a CD on speaking cloud. The Gartner/Burton teams have been working closely on defining a core set of cloud terminology, and it’s important for vendors in the space to adopt common definitions.
- Edwin Yuen demonstrated System Center Virtual Machine Manager (SCVMM) vNext, which will include drag-and-drop capabilities for deploying multi-tier applications. The demo was powerful, but my existing concerns about SCVMM went unanswered. Today the product is not extensible, and it does not support the Open Virtualization Format (OVF) industry standard; I’m hoping those two features make it into SCVMM vNext.
- Microsoft’s demo of cloud service management looked solid from the administrator’s point of view, but nothing was shown from the consumer’s point of view. IT service delivery requires the presentation of services to consumers using intuitive interfaces that the customer understands. Microsoft has yet to show a consumer-centric view of how consumers will interact with Microsoft cloud service management.
Wednesday Keynote – Brad Anderson
- Brad opened by talking about how the Windows 7 release was the most significant event in the desktop space in a very long time. I would counter that equally significant was Microsoft’s announcement to end-of-life (EOL) Windows XP in April 2014. The XP EOL announcement put IT organizations “on the clock” to replace their existing client endpoint OS, and in many cases re-architect all major aspects of user experience and application delivery.
- There was a good discussion about power management, but one interesting area of research that was not mentioned was Microsoft’s work on in-guest VM power management. Take a look at Joulemeter for more information.
- I liked hearing Brad talk about the future desktop representing a convergence of services. This is a concept I recently discussed in the post “The Next Gen Desktop’s Cloudy Future.”
- I saw a bit of irony in seeing Microsoft discuss Hyper-V R2 SP1’s dynamic memory feature on stage. A year ago Microsoft was solidly against VMware’s memory overcommit feature, which allows VMs to share a physical pool of memory on a server. Jeff Woolsey did a nice job describing Hyper-V’s dynamic memory capabilities in a series of blog posts.
- Microsoft demonstrated the RemoteFX technology that was acquired from Calista. It will be interesting to see how quickly Microsoft’s IHV partners offer a shipping solution. Several have stated their intent to support the technology.
- Microsoft demonstrated their new Windows Intune product – a cloud service for managing PCs. While I like where Microsoft is taking PC management, I’m still disappointed that they have yet to address desktop OS licensing for cloud-based desktop-as-a-service (DaaS) deployments. Device-based desktop OS licensing is incompatible with the on-demand and device-agnostic nature of cloud service delivery, and Microsoft needs to address this issue sooner rather than later.
- I was disappointed by the System Center Service Manager demonstration on compliance validation. The demo included no mention of virtualization or virtual infrastructure, which is the default x86 application platform of many of our clients. If the product is not providing controls and validation capabilities for multi-tenant VMware vSphere, Microsoft Hyper-V, and Citrix XenServer environments, then it is not ready for prime time.
Overall I was very impressed with the conference keynotes. System Center Service Manager and Microsoft’s increasing integration of the Opalis software are two areas to watch. Muglia’s talk about standard service delivery models also leads me to believe that Microsoft is poised to aggressively go after the cloud provider space. The release of Microsoft’s Dynamic Infrastructure Toolkit and the growing number of partners in Microsoft’s Dynamic Data Center Alliance (DDA) are proof of that. What did you think of MMS 2010? I’d love to hear your thoughts.
There are some physical things in life that I like to associate with as part of my identity. My Jeep Wrangler is one of them. My laptop is not. How I live and work is not defined by a physical compute device, but rather by my online identity. Email, Twitter, accessing documents and meeting notes on SharePoint, tracking client engagements in Salesforce, evaluating and testing virtualization solutions in my lab – those are all part of my day. My PC? It’s just what connects me to my online work space and personal space. Sorry Latitude D820, but you mean nothing to me.
From talks with colleagues and clients, I know I’m not alone. My device doesn’t define me, and certainly doesn’t hold all the data and applications I need to do my job. So what does all this have to do with Microsoft licensing? Well in my opinion, they’re getting it.
I’m not going to rehash much of the excellent commentary already out there on yesterday’s Microsoft announcement. See these posts for more background:
- The sleeping giant awakes – Microsoft gets desktop virtualization right (Simon Bramfitt)
- Microsoft announces changes in desktop/server virtualization and VDI strategy – UPDATED (Alessandro Perilli)
- Microsoft VDI Inertia (J. Tyler Rohrer)
As Simon noted in his blog, VECD licensing is going away and the right to run a desktop OS as a server-hosted virtual desktop is now included with Microsoft’s Software Assurance (SA). For devices not covered by SA, organizations can purchase Virtual Desktop Access (VDA) licenses at a cost of $100 per device per year. Furthermore, Microsoft’s license transfer restrictions still apply. So if you want to license a virtual desktop for external contractors, you can assign a VDA license to each contractor system. If a contractor completes a project and leaves, you can re-assign the license to another contractor’s device (you just can’t reassign the license more than once per 90 days).
Simon also mentioned that “extended roaming rights” is the big deal in the announcement, and it is. While not perfect (Simon describes the issues), it’s a step toward licensing a desktop for a user and not a device (sure technically we’re still talking about device licensing, but the user can access his desktop from a myriad of personal devices). So let’s call it an alpha release of per-user licensing. Does a per-user model solve all of our problems? No. But organizations want it offered as a choice, and it’s good to see that Microsoft is listening to their customers.
Looking past the good news that came out of yesterday’s announcement, considerable work remains. Microsoft has still not addressed the service provider market, and considerable clarity is still needed for licensing virtual desktops on shared infrastructure. For example, if a user needs a Windows desktop for a week, he essentially has to pay for 90 days’ worth of licensing. Why? Even with VDA, the service provider technically has to associate the VDA with the subscriber’s physical device and can’t transfer it for another 90 days. The result is that desktop-as-a-service (DaaS) is far more costly than it should be. This problem will grow once companies like HP, IBM, and Dell offer client hypervisors and look to offer services where user desktop VMs are automatically replicated from a personal system to the cloud. Again, this takes us back to a physical device not defining the user. The IHVs get the opportunity to sell additional services to make up for the low margins they see on hardware sales. Sooner or later Microsoft will have to address this issue, and let’s hope it’s sooner.
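To put a number on it, here’s a back-of-the-envelope sketch. It assumes the $100-per-device-per-year VDA price and the 90-day reassignment minimum described above; the straight per-day proration is my own simplification:

```python
# Back-of-the-envelope DaaS licensing math under VDA rules.
# Assumes the $100/device/year VDA price and the 90-day reassignment
# minimum described above; per-day proration is a simplification.

ANNUAL_VDA_COST = 100.00          # USD per device per year
REASSIGNMENT_MINIMUM_DAYS = 90    # license is locked to a device for 90 days

def effective_license_cost(days_needed: int) -> float:
    """License cost of serving one short-term user on a dedicated VDA license."""
    billable_days = max(days_needed, REASSIGNMENT_MINIMUM_DAYS)
    return ANNUAL_VDA_COST * billable_days / 365

print(round(effective_license_cost(7), 2))   # 24.66 -- a one-week desktop
print(round(effective_license_cost(90), 2))  # 24.66 -- same as the 90-day minimum
```

In other words, a one-week desktop carries the same license cost as a 90-day one, which is exactly the economics that make short-term DaaS pricier than it should be.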
On the support side, Microsoft’s internal application teams need to step up and offer clear support statements for the leading client and application virtualization platforms. Officially supporting App-V would be a nice first step. The push for Microsoft client applications to fully support the major client virtualization solutions must come from the top. I’m hopeful that Microsoft’s key executives will make that push.
Finally, let’s not forget that even with SA, Windows Server OS licenses cannot be virtualized without running into mobility restrictions. Instead, most large enterprises have to upgrade to Datacenter edition licenses (for practical purposes) for the sake of virtualizing. I talked about this issue extensively in this post, so I won’t repeat the details here. If lifting licensing constraints for client virtualization is good, I’d argue that doing the same for servers would be even better, especially if you look at the number of servers already virtualized today.
Microsoft customers – your voice is being heard. Now’s a great time to pat Microsoft on the back. However, it’s not time to back off. Keep communicating your licensing needs to Microsoft. It’s clear that they are listening and taking steps to make your life easier.
I will be in Tampa (April 7th) and Orlando (April 8th) next month hosting a free three-hour Gartner/Burton Group virtualization seminar. If you are interested in attending, here is the session information and registration details.
Burton Group senior analyst Chris Wolf will break down today’s most pressing challenges affecting client and server virtualization. Regardless of whether you have a mature virtualization deployment or are just starting down the path toward virtualization, this seminar will have something for you. In three fast-paced hours, the following topics will be covered, with ample time afforded to Q&A and interactive discussion.
Desktop Virtualization: What the Vendors Aren’t Telling You
In the emerging client virtualization market, it can be difficult to assess vendor platforms due to a lack of defined and accepted standards. In this session, Chris Wolf shares Burton Group’s benchmark for evaluating server-hosted virtual desktop solutions, including criteria for evaluating deployment, management, performance, integration, and user experience. The session concludes with a close look at the factors that make or break the virtual desktop ROI, and key 2010 initiatives organizations should pursue to move from a device-centric to a user-centric application delivery model.
Attendees will leave this session with:
- Practical guidance and industry best practices for getting started with client virtualization
- Strategies for modularizing physical desktops to ease an eventual transition to virtual desktops
- Insight into architectural decisions that make or break the ROI case
- A detailed list of evaluation criteria to use in request for information (RFI) and request for proposal (RFP) documents
- Insight into factors that differentiate features that appear similar on vendor data sheets
Server Virtualization, Mobility, and Cloud: New Beginnings
In 2010, server virtualization has continued to cement its place as the default platform for x86 server applications; however, it has yet to become the modern-day mainframe panacea touted by vendors. In this session, senior analyst Chris Wolf explores the current capabilities of server virtualization platforms in support of infrastructure as a service (IaaS), while focusing on product and infrastructure shortcomings that have prevented virtual infrastructure and cloud-based IT service delivery from reaching their full potential.
Special focus will be devoted to the following topics:
- Best practices for security, performance, and resiliency
- The practicality of supporting multi-tenant environments on shared physical infrastructure
- Shortcomings in virtual infrastructure metadata and programmatic interfaces that are holding back innovation
- Practical steps for building self-service delivery
- Feasibility of public cloud IaaS offerings and their impact on long term IT infrastructure decisions
For questions or registration information for the Orlando seminar, please contact Matt Hanson at firstname.lastname@example.org.
For questions or registration information for the Tampa seminar, please contact John Walters at John.Walters@gartner.com.
I hope to see you there!
I was recently asked to judge the Citrix StorageLink Video Challenge and will serve as an independent voice on a panel that includes Citrix’s Simon Crosby and John Fanelli. I have to admit that it was pretty smart of Citrix to place an analyst on the panel of judges. Now if Citrix’s vendor partners don’t like the results, they can just blame me.
Anyway, the video challenge is very interesting. Storage vendors put together videos to demonstrate the value of their technology to the Citrix StorageLink product line. The user community will vote for the most innovative video, and the panel of judges will dole out awards in the following categories:
- Best storage for desktop virtualization deployments
- Best storage savings (TCO)
- Best performance
There is very good participation in the contest, and videos were submitted by the following vendors:
I’m always interested in community input, so if you have an opinion on any of the award categories, please feel free to post it as a comment or send it privately to me through my contact page. Voting closes on April 18th, so vote and send me any feedback that you have soon.
Over the past few years, I have talked with several dozen Burton Group clients who are struggling with defining their next-generation desktop and application delivery architecture. They often like the idea of the server-hosted virtual desktop, but not the cost. In addition, many of our clients are increasingly looking at cloud-based application delivery frameworks such as software-as-a-service (SaaS) and platform-as-a-service (PaaS). For example, several of our clients use the Salesforce.com customer relationship management (CRM) SaaS-based application. The result: users get a rich application accessible from anywhere with a web browser, and IT sees a low total cost of ownership (TCO) for the CRM application. Other Burton Group clients have evaluated Microsoft Exchange via SaaS services, while others are keeping an eye on PaaS offerings such as Microsoft Azure.
Besides SaaS and PaaS, infrastructure-as-a-service (IaaS) is growing in popularity. One of the most common ways to deliver IaaS is by leveraging hardware-infrastructure-as-a-service (HIaaS) platforms (e.g., VMware vCloud, Amazon EC2, or Citrix Cloud Center). For the majority of our clients, the initial entry into HIaaS has started with building private clouds to host applications in virtual machines. HIaaS as a backend for desktop-as-a-service (DaaS) is on the radar of many of our clients; for several, 2010 plans include virtual desktop pilot projects, with small deployments by the fall. Note that while I’m being relatively light on the definitions, you can read Burton Group’s detailed perspective on cloud in the following free report: “Cloud Computing: Transforming IT.”
If you’re wondering “What’s the point?”, here it is. Application delivery does not have to begin and end at the virtual desktop, and in many cases it will not. SaaS and PaaS services will increasingly play a role in delivering applications to end users, as will presentation virtualization technologies such as Citrix’s XenApp. XenApp as the delivery mechanism for internal SaaS, combined with the Citrix Receiver, for example, provides the framework to publish Windows applications to a variety of endpoints (e.g., notebook, netbook, iPhone, iPad, thin or zero client, and thick client). So in the end we’re winding up with several layers of application services that need to be seamlessly delivered to the end user. This means that security policy enforcement and identity management, for example, will need to traverse each service layer. For most organizations today, leveraging SaaS applications requires users to maintain a separate login for each provider. Identity federation in support of single sign-on access to cloud services will be a key enabler in the delivery of converged cloud services. Others (e.g., Microsoft and Novell) have tried and failed in the past, but this time the stakes are different: strong interest in cloud services finally provides the use case that was waiting for a solution.
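Since federation is doing the heavy lifting in that vision, here’s a toy sketch of the trust shape involved. Real deployments use standards such as SAML with XML signatures; this HMAC-signed token is only an illustration of the trust relationship (the SaaS provider trusts assertions from the enterprise identity provider instead of keeping its own password database), not a protocol implementation, and all names are made up:

```python
# Toy illustration of identity federation: a SaaS provider accepts a
# signed assertion from the enterprise identity provider (IdP) rather
# than maintaining its own logins. Not SAML; just the trust shape.
import base64, hashlib, hmac, json

SHARED_SECRET = b"idp-and-saas-shared-key"  # illustrative trust anchor

def idp_issue_assertion(user: str) -> str:
    """The IdP vouches for an authenticated user by signing a claim."""
    payload = json.dumps({"sub": user, "idp": "corp.example.com"}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def saas_accept(assertion: str) -> str:
    """The SaaS provider verifies the signature, then trusts the claim."""
    b64, sig = assertion.rsplit(".", 1)
    payload = base64.b64decode(b64)
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise PermissionError("assertion not issued by a trusted IdP")
    return json.loads(payload)["sub"]

print(saas_accept(idp_issue_assertion("cwolf")))  # user signs in once at the IdP
```

The user authenticates once against corporate identity, and every federated service layer honors the resulting assertion – which is exactly what a converged stack of SaaS, PaaS, and virtual desktop services needs.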
If we take the delivery of converged cloud services to the client endpoint, we get to what should be a divide between two user experience domains: personal space and work space. The endpoint device may include a client hypervisor to securely separate the two, as shown below.
Granted, what I’m talking about here isn’t revolutionary. Many vendor examples relating to a bring-your-own-device delivery model highlight the need to separate personal space and work space, but they fall short in their inclusion of other relevant cloud application delivery services. In fact, I blogged about this approach a year ago. Independent analyst Brian Madden went a step further and predicted that 90% of virtual desktops will run on client endpoints.
To summarize, we need to keep the focus of application delivery on the application. If a call center’s application delivery requirements are best served by a low-end device that uses a web browser to present applications to users via SaaS, then so be it. If the application delivery requirements warrant a server-hosted virtual desktop, then that’s OK too. Still, in my opinion, IT’s future is about managing each user’s work space, and we should be looking at technologies that simplify delivery and presentation of converged cloud services. The winning vendor, and the one that drives a user’s work and/or personal space, is the one that nails the presentation of converged cloud delivery. I’m not sure who the winner will be, but I know that the winner won’t be the vendor going after this problem with a narrow view of the typical enterprise’s application delivery requirements. What do you think? We will be talking about these topics at Catalyst Europe in Prague next month, and I hope to see you there.
Your notion that many customers are simply priced out of desktop virtualization, and that much of the cost and complexity of VDI (server-based desktop virtualization) comes from it being layered on top of server virtualization, was precisely the driving factor behind the genesis of Kaviza. Our mission has been to drive down the cost and complexity of VDI and provide “VDI for the rest of us.” What is even uncannier is the similarity in names. You call your notion “virtualization infrastructure in a box”; we call the product we launched recently “VDI-in-a-box™.”
The philosophy behind our approach is as follows:
- Make the solution extremely simple. Package everything as an appliance, load it on a server with a hypervisor, and that’s it — you now have a fully functional, self-contained virtual desktop server appliance with everything you need to manage templates, create, provision, and load-balance desktops, and log in users.
- Make the infrastructure cost-effective, requiring nothing other than commodity servers with direct-attached storage to manage and provision the desktops. That means no shared storage or high-speed interconnects that jack up costs and create central bottlenecks.
- Ensure that the system can be grown on demand easily, without lots of manual activity, capacity planning, and so on.
- Provide a higher level of abstraction where management is about desktops, users, and templates (golden images) — the stuff desktop IT staff care about — as opposed to virtual machines, server pools, and virtualization details.
- Don’t reinvent the wheel; leverage best-of-breed components seamlessly. We are protocol agnostic and hypervisor agnostic. We leverage Active Directory/LDAP for user management. We tie in seamlessly with application streaming solutions like App-V (for those who want it), and we can work with Active Directory’s roaming profiles or personalization modules from AppSense or RTO for those who want user personalization.
How did we do it and what’s the architecture?
As shown in the figure below, our solution is distributed and consists of one or more servers (you need at least two if you want high availability; otherwise you can use just one), each running a hypervisor and our Kaviza Manager (aka kMGR) virtual appliance.
The kMGR appliances on each server communicate and work together to
- Run the desktops
- Ensure there are redundant copies of key data so there’s no single point of failure
- Dynamically and automatically incorporate new servers
- Detect and dynamically recover from server failures
- Simplify management by allowing the administrator to manage the solution as if it were one logical server
Each kMGR appliance comprises the following modules:
- Grid engine: This module communicates with the grid engines of all the other kMGRs to ensure that there is a cohesive, coherent grid of servers. It manages the communication and ensures there is ONE global notion of the state of the grid. The grid engine module creates a hot-pluggable grid and enables servers to be added and subtracted on demand. Servers are added by simply answering two questions and providing authentication to join the grid. The kMGR then ensures the newly added server is provided with all the needed configuration and template information to participate in the grid. Similarly, when a server is removed or fails, the grid engine detects the missing server and automatically ensures that other servers take up the slack.
- Logical shared storage: This module ensures that all key information, such as user information, desktop configuration information, and templates (the golden images from which desktops are created), is copied to other servers in the grid so that there is no single point of failure.
- Load balancer: This module load balances the desktops across the grid to ensure optimal use of the grid resources.
- Template management: This module provides the tools to manage the lifecycle of templates, each of which contains a golden image of the OS and applications, the CPU and memory specifications of the desktops created from it, and the policies that dictate when desktops are regenerated from their template. The template module uses “linked clones,” where multiple desktops are generated from a base golden image to save storage (see the sketch after this list).
- User management: This module ties in with Active Directory or an LDAP server and ensures that users have authorization to use a desktop and manages all user sessions.
- Provisioning engine: This module works with the others and does the detailed work of provisioning and generating desktops across the grid, based on authorization and policies set by the administrator and directives from the template and load-balancing modules. Administrators do not have to manually provision, load balance, or manage capacity.
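To illustrate the linked-clone idea mentioned under template management, here is a generic sketch using QEMU’s qemu-img tool (this is not Kaviza’s actual tooling, and the file names are made up): each desktop disk is a thin copy-on-write overlay whose backing file is the shared golden image, so only per-desktop changes consume storage.

```python
# Generic linked-clone illustration (not Kaviza's implementation).
# Each desktop disk is a qcow2 overlay backed by a shared golden image;
# reads fall through to the golden image, writes land in the overlay.
# Assumes qemu-img is installed and win7-golden.qcow2 already exists.

import subprocess

GOLDEN_IMAGE = "win7-golden.qcow2"  # illustrative file name

def create_linked_clone(desktop_name: str) -> str:
    overlay = f"{desktop_name}.qcow2"
    subprocess.run(
        ["qemu-img", "create",
         "-f", "qcow2",       # overlay format
         "-b", GOLDEN_IMAGE,  # shared, read-only golden image
         "-F", "qcow2",       # backing file format
         overlay],
        check=True,
    )
    return overlay

# Three desktops share one golden image; each overlay starts near-empty.
for i in range(1, 4):
    print("created", create_linked_clone(f"desktop{i:02d}"))
```

Regenerating a desktop from its template then amounts to discarding the overlay and creating a fresh one, which is why template-driven regeneration policies are cheap to enforce.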
There’s been plenty of fanfare surrounding the release of Citrix XenDesktop 4.0 this week. I plan to blog about the XenDesktop 4.0 technical features after I’ve spent some quality time with them in my lab over the next couple of weeks. However, I’d like to contribute to the XenDesktop 4 conversation by talking about its impact on client software licensing.
There’s no question that user expectations for how they should access corporate resources are changing. Sometimes I need to quickly grab or view a doc from my iPhone. Other times I need to view resources on my laptop while not connected to any network. And of course, most times I’m in my office and can access my applications while fully plugged in. As I see it, I’m just scratching the surface. Thin or zero clients, netbooks, and remote Internet kiosks (such as at a hotel or conference) are increasingly becoming part of the application access picture.
So what does this all mean? Users just want to get at their data and applications on their terms. IT should be able to provide that level of service, and from a technology perspective we’re getting there. Sure, I’d like to see capabilities for deploying endpoint security to devices such as iPhones before considering them enterprise-ready alternatives for application access (and a Bluetooth keyboard wouldn’t hurt either). However, for all the gains we’ve made in technology, client application vendor licensing and support remain primary barriers to wide-scale desktop virtualization deployments.
Microsoft, for example, still licenses desktop OSs and applications by device, or installed instance. Sure, the Vista Enterprise Centralized Desktop (VECD) licensing model includes a “home access” provision that allows users to access their virtual desktops from their home computers. However, this licensing model still expects the organization to count devices. I’ve talked to Microsoft about my concerns over per-device licensing, and for the most part we are in full agreement that Microsoft desktop OS and application licensing will need to fundamentally change. However, for an organization as large as Microsoft, this is going to take some time. Any licensing change has a huge impact on existing OEM and sales channels, which is why licensing changes are often incremental in nature. Still, Microsoft has an opportunity to lead the way and show other vendors how to license software for today’s increasingly mobile user. I hope they embrace that opportunity.
The user experience is moving toward an era where user data and applications live in the cloud. In other words, the cloud is the desktop. Sure, this could be a server-hosted Windows XP or Windows 7 instance, or something different (e.g., a client-hosted desktop, virtual applications, or a mix of applications and services delivered by internal IT and external PaaS or SaaS providers). The bottom line – the way we deliver applications and services to users is fundamentally changing. Now is the time for vendors to define licensing policy that meets the needs of how their users will access applications. I believe the best model for the emerging virtual desktop and application delivery methods is a per-user model. The point of virtualization and cloud is to abstract (or decouple) the physical dependencies of IT services and applications. We’re very close to being able to seamlessly achieve this with technology. With many organizations planning major desktop virtualization rollouts in 2010, it’s time for application vendors to rework their licensing models. I am advising clients to draw a hard line on licensing requirements when they put prospective vendors through the RFP process, and I advise you to do the same (i.e., require per-user licensing).
Reworking licensing to be user-centric needs to be a top priority among client application vendors heading into 2010. Vendors that insist on binding licenses to physical devices in an increasingly virtualized world are not part of the solution. They’re part of the problem.