Archive for category Storage
I was recently asked to judge the Citrix StorageLink Video Challenge and will serve as an independent voice on a panel that includes Citrix’s Simon Crosby and John Fanelli. I have to admit that it was pretty smart of Citrix to place an analyst on the panel of judges. Now if Citrix’s vendor partners don’t like the results, they can just blame me.
Anyway, the video challenge is very interesting. Storage vendors put together videos to demonstrate the value of their technology to the Citrix StorageLink product line. The user community will vote for the most innovative video, and the panel of judges will dole out awards in the following categories:
- Best storage for desktop virtualization deployments
- Best storage savings (TCO)
- Best performance
There is very good participation in the contest, and videos were submitted from the following vendors:
I’m always interested in community input, so if you have an opinion on any of the award categories, please feel free to post it as a comment or send it to me privately through my contact page. Voting closes on April 18th, so vote and send me your feedback soon.
Yesterday RSA announced new controls for virtual infrastructure security in cloud environments. Concerns regarding security and compliance have been primary factors preventing large enterprises from placing production workloads on shared virtual infrastructure in the cloud. Yesterday’s announcement and proof-of-concept didn’t solve all of public cloud’s security woes, but it brought us closer to a practical solution. In case you missed it, you can read a detailed overview of the solution in the RSA security brief “Infrastructure Security: Getting to the Bottom of Compliance in the Cloud.” Even if you’re not ready for public cloud, many of our clients have expressed concerns over mixing security zones or subzones on internal private cloud infrastructure. Instead of supporting multi-tenancy (i.e., multiple departments traversing multiple security boundaries), the conservative IT organization isolates security zones using dedicated physical infrastructure (e.g., separate physical clusters, network ports, and storage). Even if you build security controls into the virtual infrastructure, how do you expose them to the auditor? To date, that has been a problem.
In the past, I have talked about this security dilemma in a couple of key areas. First, we need a standardized set of cloud isolation levels. We also need standard metadata (either de facto or industry standard) so that third-party audit tools can properly query an application’s relationship to cloud security policy in relation to the virtual and physical controls that are in place. I covered those issues in more depth in the post “The Cloud Mystery Machine: Metadata Standards.” In addition, virtual resources need to be able to answer the question “Where are you?” That applies to both the runtime location and the data location. It’s important to ensure that data privacy and governance concerns are met, and that regulatory compliance issues such as data export restrictions are satisfied. Ideally, the answer to the question will provide details on the hardware root of trust (proof that the hypervisor and physical infrastructure are secure), the relationship to pre-defined security tiers (the RSA POC uses “platinum,” “gold,” “silver,” and “bronze,” for example), and the detail needed to prove that both data and application runtime security requirements are satisfied.
Rather than summarize all of the goodness in the RSA announcement, I’ll focus on the areas where it still falls short. For starters, neither EMC nor Cisco were part of the POC. So the existing model does not detail concerns such as data location and the privacy of data at rest. Naturally, there is quite a bit that you would expect Cisco to offer too. The Nexus 1000V has plenty to offer when it comes to security inspection and enforcement on shared virtual infrastructure: L2-4 ACLs, SPAN, ERSPAN, AAA, and more. Naturally, any de facto tiered security models offered by RSA and its partners should go as far as to include advanced network and storage requirements, and I expect them to do so over time.
Now that RSA, VMware, and Intel have taken this big step toward satisfying the security concerns associated with shared infrastructure-as-a-service (IaaS) architectures, it’s time to be transparent on metadata structure. If each service provider builds its own proprietary metadata schema, we’re in trouble. Instead, vendors such as VMware need to define a more robust metadata schema within the .vmx configuration file. In a perfect world, VMware would toss .vmx to the side and work with the DMTF to take the XML-based .ovf standard from a standard for VM importing to a standard for runtime metadata. If we had that, RSA, VMware, and Intel could continue on their current path, and third-party vendors could add their own custom controls as well. In addition, the standard could be applied to all hypervisors, such as Hyper-V and XenServer.
While I expect this announcement and forthcoming innovations to be a boost to public cloud providers, the work of RSA, VMware, and Intel will pay immediate dividends for each organization’s internal cloud plans. The more compute resources that can be shared, the lower the capital and operational expenses to run the data center. Solutions that enhance visibility, improve security, and create opportunities to share more physical infrastructure are no-brainers, in my opinion. I could spend much more time discussing the details of the RSA POC, but I’ll leave that for the RSA white paper. Also, if you would like to hear more about where this solution is going, I encourage you to attend Catalyst Europe next month. RSA CTO Bret Hartman will detail RSA’s vision for cloud security at the conference, and will be on-hand to answer your questions as well.
At Catalyst North America last week I talked about the concept of a virtual desktop NAS and wanted to provide a little more detail. When most folks think about desktop virtualization, a deployment model based on virtual infrastructure is often what first comes to mind. Both VMware View and Citrix XenDesktop, for example, rely on server virtualization for their back end infrastructure, which requires hypervisors (e.g., ESX, Xen, Hyper-V), physical servers, networked storage, and associated Ethernet and storage connectivity. Of course, that’s not to mention the software needed to manage the virtual infrastructure as well. Don’t get me wrong. I’m not trying to bash the existing desktop virtualization deployment models of vendors such as VMware, Citrix, and Microsoft. In fact, I’ve recommended desktop and application virtualization solutions from all of those vendors on several occasions.
That all being said, I’ve also talked with many clients who are simply priced out of desktop virtualization by the infrastructure requirements and related up front capital expenses. This is especially true with small and medium businesses, which is where I see desktop virtualization appliances as having an opportunity to fill a major void. Consider this example.
Sure – the illustration is overly simple, but that’s my point. What if you could drop in a single box that serves up virtual desktops while providing much of the centralized image management you want to get out of desktop virtualization? That’s how I see a virtual desktop server appliance, or virtual desktop NAS (call it what you like… I’m not a marketing guy), playing out. Virtual infrastructure in a box is nothing new – and this model could certainly go in that direction with ESX, Xen, Hyper-V, or KVM hypervisors – but I see another option that involves leaving the virtual infrastructure behind. Why not serve up virtual desktops over CIFS or NFS? iSCSI would be an option too. That means all you really need is a basic Windows or Linux file server and you’re there. Microsoft made a reasonable dent in the NAS space with Windows Storage Server, and there’s no reason to think that a desktop virtualization appliance couldn’t have a greater impact.
To see what I’m getting at, consider this example. Take a Windows Server 2008 host and drop a virtualization product like RingCube’s vDesk on it. vDesk can serve up virtual desktops over CIFS to Windows client endpoints. The desktops execute locally on the endpoint device where a generic Windows OS is present, such as XP. The virtual desktop container includes a virtualized Windows shell (including domain membership), applications, and the user environment. Virtual desktop updates are centrally managed on the Windows server. Since the desktops are served up over CIFS, you could use existing Windows file replication tools to replicate the environment to an alternate site. So a typical VAR might drop in a local desktop appliance to a small office, and configure it to replicate to the VAR’s data center for backup and disaster recovery. Large enterprises could use a similar model. RingCube is one example, and other vendors such as NComputing, MokaFive, and Virtual Bridges are enablers for the virtual desktop NAS as well.
Of course, you need reliability, and you can get it without clustering and shared storage by using any of a myriad of file server replication utilities. Another alternative would be to package the appliance on a Stratus FT server. The server OEMs did well with Windows Storage Server, but we’re looking at a desktop virtualization market that is larger by several orders of magnitude. I don’t think it would be difficult for vendors such as Dell, HP, IBM, Sun, or Fujitsu to package an appliance that included an OS and one of the desktop virtualization packages that I mentioned. Heck, even Microsoft could jump in and create a platform similar to Windows Storage Server – perhaps Windows Desktop Server or something similar. You get the idea. Again – I’m not here to say that the virtual desktop NAS will overtake the Citrix or VMware solutions, but as we’ve seen in the NAS and storage markets, I think there could be a considerable market for the virtual desktop NAS.
What do you think?
Event: TechMentor Orlando 2009
Dates: June 22-26, 2009
Location: Orlando, FL
- T12 – Integrating Hyper-V and System Center: Recipes for Practical Automation
- T16 – Understanding Software Licensing in a Virtual World
- W4 – ESX Server Performance Tuning and Optimization
- W8 – Architecting Backups in Virtualization
- W12 – Platform Wars: Choosing Your Hypervisor
If you are using or are thinking about deploying NPIV in your ESX environment, you should take a look at William Lam’s NPIV discovery scripts. During my NPIV session at VMworld, one of the attendees asked if there was an easy way to collect a list of all virtual ports (WWNs/WWPNs) in a given ESX cluster. I said there wasn’t, but I didn’t think scripting a solution would be that hard, and I asked that anyone in the audience who developed a solution pass it along. Two days later, William Lam and his colleague Tuan Duong had developed two scripts that solved the problem: one for ESX and the other for ESXi. The scripts output each VM’s assigned virtual WWNs and WWPNs for a given ESX cluster and, as a result, solve a major pain point that exists today with managing NPIV in ESX environments.
I was planning to test the scripts in my lab before sharing them online, but my recent travel schedule has made that impossible. I plan to test the scripts when I get a 10-day lull between trips at the end of the month, but I didn’t want to wait to share these treasures in the meantime. I hope you find them useful. You can download the scripts here.
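I won’t reproduce William and Tuan’s scripts here, but the general idea is straightforward: NPIV virtual port assignments are recorded in each VM’s .vmx configuration file as `wwnn` and `wwpn` entries, so a collector just has to walk the cluster’s datastores and parse those files. As a rough illustration of the parsing step only (this is my own sketch, not their actual code):

```python
import re

# NPIV virtual port assignments appear in a VM's .vmx file as lines like:
#   wwnn = "28:2d:00:0c:29:00:00:22"
#   wwpn = "28:2d:00:0c:29:00:00:23,28:2d:00:0c:29:00:00:24"
# A single entry may hold several comma-separated WWNs.
_VMX_LINE = re.compile(r'^\s*(wwnn|wwpn)\s*=\s*"([^"]+)"', re.MULTILINE)

def extract_npiv_ports(vmx_text):
    """Return a dict mapping 'wwnn'/'wwpn' to lists of virtual port names
    found in the contents of one .vmx file."""
    ports = {"wwnn": [], "wwpn": []}
    for key, value in _VMX_LINE.findall(vmx_text):
        ports[key].extend(v.strip() for v in value.split(","))
    return ports
```

To cover a whole cluster, you would run something like this over every .vmx file on the cluster’s shared datastores and print the results per VM.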
Last week while speaking at TechTarget’s Advanced Enterprise Virtualization seminar, I was asked a question I get quite often – “What book do you recommend if we want to learn more?” The answer to that was easy. The audience consisted mostly of senior-level administrators who were either running or planning to deploy VMware-based virtual environments, so I asked if everyone had purchased a copy of the VMware Infrastructure 3 Advanced Technical Design & Advanced Operations Guide. The attendees were surprised that I didn’t mention my own book, but why should I? My virtualization book was published in 2005, so it’s a dinosaur in terms of virtualization books. Even back then, I wrote a good virtualization book that covered many platforms, but at the time the best book for ESX environments was the VMware ESX Server: Advanced Technical Design Guide. I’m out of the book-writing business, so I’ll point people to articles I’ve written and my free virtualization overview published by Burton Group in 2007, Let’s Get Virtual: A Look at Today’s Server Virtualization Architectures. When it comes to books, I’d rather have people spend their money wisely on what I feel are the best ones out there.
I had pre-ordered the VMware Infrastructure 3 Advanced Technical Design and Operations Guide and received my copy from Amazon a few days before my seminar last week. If you’re thinking about deploying VMware or are already running VMware Virtual Infrastructure, I consider this book to be a requirement. The authors – Ron Oglesby, Scott Herold, and Mike Laverick – are three of the foremost VMware experts in the world. Together, they delivered a highly comprehensive book that takes you from planning and architecture to operations and advanced management. Let’s face it, you can find a lot of information online today, so to me the value of a good book is in the information that goes beyond what is already in a vendor’s how-to guide. This book certainly does not disappoint. Of course, some of the book’s content is online, like Mike Laverick’s excellent how-to on PXE installing ESX, but that’s no reason to forgo this treasure. There’s a lot to be said for having all of your go-to information in one place, and this book is it.
The book weighs in at over 800 pages, and unlike many technical books, its size does not equal fluff. The authors are very to-the-point and clear in their explanations, and they likely struggled with having to draw the line on content. The size is also due to the fact that it is two books (Advanced Technical Design Guide and Advanced Operations Guide) packaged as one. By packaging this way, you’re saving money. I wanted to name my favorite chapter, but found that impossible, because all of the chapters contain excellent information. That being said, here’s a list of my personal favorites:
Advanced Technical Design Guide
- Chapter 4 – Virtual Center and Cluster Design
- Chapter 5 – Storage
- Chapter 6 – Networking Concepts and Strategies
- Chapter 7 – VMs and VM Selection
- Chapter 8 – Managing the Environment
- Chapter 10 – Recovery and Business Continuity
Advanced Operations Guide
- Chapter 2 – Networking
- Chapter 3 – Storage
- Chapter 10 – VMotion, DRS, and HA
- Chapter 11 – Backup and VMware Consolidated Backup
- Chapter 12 – ESX Command Line Configuration
Each chapter is loaded with tips, tricks, and gotchas founded on real experience. In fact, many of the gotchas that I’ve run into myself were right there in print, and the authors highlighted a few that I have yet to see. You’ll find that having this book is like having an extra VMware consultant on staff. It’s that good.
So if you haven’t bought the VMware Infrastructure 3 Advanced Technical Design & Advanced Operations Guide yet, it’s time. Even if your department doesn’t have the $37.77 that the book is currently selling for on Amazon.com, just ask your worst dressed IT guy to stand outside the building with a cup. I’m sure he’ll have the money in a couple of hours. Bottom line – this book is a must-have for any IT pro responsible for designing, deploying, or managing VMware environments.
Apparently I missed quite a bit of news while I was away on vacation. I’m now getting back into the swing of things, and one article I really enjoyed was Beth Pariseau’s piece Friction grows between storage vendors, VMware. I thought Beth did a nice job tackling this issue, and the article includes some good perspectives from VMware’s Jon Bock, Symantec’s Bruce McCorkendale, and FalconStor’s Bernie Wu.
Last Wednesday QLogic announced what appeared to be a very impressive benchmark – QLogic Achieves Near-Native Fibre Channel I/O Performance On Windows Server 2008 Hyper-V. By near-native performance, QLogic highlighted throughput of nearly 200,000 IOPS. Naturally, such high throughput in a virtualized environment caught my attention. The announcement was timed to go along with the Hyper-V RTM announcement and immediately validate the storage I/O performance of Hyper-V connected to SAN storage using QLogic 8Gb Fibre Channel host bus adapters (HBAs). I’ve always liked benchmarks if they can set relative expectations for how a particular configuration will perform in a typical environment. When the environment is far from typical, I consider the benchmark either an academic exercise (let’s see how far we can push this thing, regardless of how unrealistic the configuration may be) or a crafty attempt at product marketing. If I were to place this particular benchmark into one of Nik Simpson’s benchmarking categories, I’d have to say it falls into the benchmarketing category.
The QLogic press release included the following quote from Microsoft’s Mike Schultz:
QLogic’s benchmark result surpasses the existing benchmark results in the market, and demonstrates that Windows Server 2008 Hyper-V customers can achieve higher server utilization rates and consolidate servers with great technical performance.
The statement “surpasses the existing benchmark results in the market” implies that the Hyper-V/QLogic benchmark has outperformed a comparable VMware benchmark. The press release was careful to state the hypervisor and Fibre Channel HBA (QLogic 2500 Series 8Gb adapter), but failed to mention the back-end storage configuration. I consider this to be an important omission. After some digging around, I was able to find the benchmark results here. If I were watching an Olympic event, this would be the moment where, after thinking I had witnessed an incredible athletic feat, I learned that the athlete tested positive for steroids. Microsoft and QLogic didn’t take a Fibre Channel disk array and inject it with Stanozolol or rub it with “the clear,” but they did use solid state storage. The storage array used was a Texas Memory RamSan 325 FC storage array. The benchmark that resulted in nearly 200,000 IOPS, as you’ll see from the diagram, ran at 90% of native performance (180,000 IOPS). However, this benchmark used a completely unrealistic block size of 512 bytes (a block size of 8K or 16K would have been more realistic). The benchmark that resulted in close-to-native throughput (a 3% performance delta) yielded 120,426 IOPS with an 8KB block size. No other virtualization vendors have published benchmarks using solid state storage, so the QLogic/Hyper-V benchmark, to me, really hasn’t proven anything. Furthermore, the published benchmark fails to reveal latency numbers, which are often the most useful measure of storage performance in virtualized environments. Applications can be very sensitive to I/O latency, and it’s important to disclose latency numbers in any storage benchmark.
For further clarity, I ran these results by a colleague well-versed in performance testing and this was his response:
In a storage stack, the number of concurrent I/Os is typically limited at certain choke points, i.e., the virtual device, the queue between the guest and the parent OS, and the drivers in the parent. The recent Microsoft benchmark used an I/O depth of just 64, but with an SSD the latency is very small; at 0.3ms per I/O, it’s theoretically possible to generate around 210,000 IOPS with only 64 outstanding I/Os.
However, to properly demonstrate 180,000 real IOPS would require 1,200 concurrent I/Os, rather than the 64 used.
With real disks, the same 64 concurrent I/Os at 7ms each would limit throughput to 64 * 1/.007 = 9,142 IOPS!
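All three of those figures fall out of Little’s Law, which bounds throughput by queue depth divided by per-I/O latency (equivalently, outstanding I/Os = IOPS × latency). A quick sketch of the arithmetic:

```python
def max_iops(outstanding_ios, latency_seconds):
    """Little's Law bound: throughput <= queue depth / per-I/O latency."""
    return outstanding_ios / latency_seconds

def required_queue_depth(target_iops, latency_seconds):
    """Outstanding I/Os needed to sustain a target IOPS at a given latency."""
    return target_iops * latency_seconds

# SSD at ~0.3ms with 64 outstanding I/Os: ~213,000 IOPS possible
# (the quote above rounds this to 210,000)
ssd_iops = max_iops(64, 0.0003)

# Sustaining 180,000 IOPS at a realistic-disk ~7ms would need roughly
# 1,260 concurrent I/Os (close to the ~1,200 figure quoted, which
# presumably assumed slightly lower latency)
depth = required_queue_depth(180_000, 0.007)

# Spinning disks at 7ms with the same 64 outstanding I/Os: ~9,142 IOPS
disk_iops = max_iops(64, 0.007)
```

This is why the 64-deep SSD result says little about what the same configuration would do against rotating media.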
To me, these exercises in smoke and mirrors trickery (i.e. solid state storage in a hypervisor storage performance “benchmark”) yield more questions than answers. In addition, I’m left questioning future benchmarks produced by vendors that use such tactics. Vendors – if you are going to go as far as issuing a press release based on a “benchmark,” please give us an honest assessment of a real world environment. Anything else simply casts doubt on your future performance numbers and adds to the already prolific cynicism surrounding vendor benchmarks.
Greetings from Catalyst 2008 in San Diego. For those of you not in attendance, I wanted to take a moment to summarize the virtualization track’s highlights. My summary follows the Catalyst format of rapid fire content, so if you have follow-up questions, please post them as a comment to this post.
Morning Keynote – Server Virtualization: What a Difference a Year Makes
In the morning keynote, I summarized Burton Group’s thoughts on the direction of virtualization. I applauded the progress made in the industry, but spent the majority of the presentation highlighting the work that still remains. This includes:
- Software licensing and support clarity and feasibility for the virtualized dynamic data center. The pressure you’re putting on vendors by leveraging RFPs to compel them to support virtualization is making a huge difference. Keep it up!
- Virtualization allows high availability (HA) to be extended to all applications, not just those that are cluster-aware. While this is great, we can do better. It’s time to rethink traditional HA architectures to include policy-driven application response. I’d like to see VM tool integration to include application monitoring components that can be passed down to the hypervisor’s native HA or to a third party orchestration tool.
- Today’s security models are neither practical nor scalable for virtual environments. VMware’s VMsafe is a good start, but I’d like to see an industry standard driven by the DMTF that would ease the burden on security ISVs of having to develop products to support a myriad of different hypervisors.
- There’s no reason to have multiple virtual hard disk formats. Vendors have collaborated on CIM standards for virtualization, and the Open Virtualization Format (OVF) is on the cusp of mainstream adoption by many virtualization vendors. It’s now time to settle on a single virtual disk format. This would remove the vendor lock-in concerns of many organizations as well as simplify the distribution of virtual machine appliances.
- All vendors in the desktop space have to be thinking about virtual desktops. Microsoft already won the traditional desktop game and is running out the clock. We’re on the cusp of a new generation of desktop delivery and opportunities exist for assertive, innovative vendors to leave their mark.
- Raw storage (connecting VMs directly to LUNs) improves VM performance, provides better integration with storage and data management solutions, and prevents vendor lock-in. If you’re not using raw storage in any capacity for your virtual environments, you should be asking yourself “Why not?” Sure, virtual hard disk files are nice, but even in VMware environments I can create a snapshot of a LUN presented as a raw device mapping in virtual compatibility mode and write the snapshot out as a new .vmdk virtual disk file. So it’s possible to have the best of both worlds.
Software Licensing for Virtual Environments: Vendor Roundtable
This vendor panel included the following representatives:
- CiRBA: Andrew Hillier – CTO
- Computer Associates: Edward Marootian, Jr. – VP, Product Management and Strategy Platform Services
- Microsoft: Edwin Yuen – Senior Product Manager
- VMware: Parag Patel – VP of Alliances, ISVs and Storage Ecosystem
As fate would have it, the setup crew placed one extra chair on stage. I couldn’t resist the opportunity, so I stated that the chair belonged to a man named Mr. O’Racle who was invited to participate in the discussion but declined. The conversation was quite productive, with all vendors agreeing that organizations needed choices of licensing, with models that are applicable to virtual instances. Also, all vendors agreed that clarity in support agreements was needed. It’s not enough to say “we support virtualization.” Support incidents that require a V2P migration should be clearly defined. Also, to my delight Microsoft’s Edwin Yuen noted that Microsoft has listened to feedback from the user community and is actively working toward clarifying product licensing so that issues such as VM mobility restrictions no longer remain. I understand that changes to licensing policy are very complex, and the fact that Microsoft is responding to feedback regarding problems with product licensing is very encouraging. My hope is that within a few months we’ll no longer even need to have this discussion.
New Trends in High Availability for Virtual Environments (Richard Jones)
Richard added further clarity to my keynote message of solving IT problems in different ways, including high availability. We don’t need to continue to use legacy HA architectures when we can improve high availability by leveraging orchestration tools to monitor application state and automate the response (e.g., restart application, restart VM on the same host, restart the VM on a different host) to application failures. This is far superior to treating a VM and its installed applications as a “black box.” Highlights:
- Don’t take HA at face value. Under the hood, virtualization vendor HA architectures are vastly different. Those that offer a fan-out failover cluster architecture have a significant edge over vendors without such solutions. To validate fan-out failover, you should evaluate virtualization HA solutions using at least 3 physical nodes. If you unplug node 1 and all VMs first try to start on node 2, then you don’t have a solution that incorporates fan-out failover. In a fan-out failover solution, node 1’s VMs would fail over and restart across the remaining nodes (2 and 3, for example).
- For cluster sizing, six nodes continues to be the sweet spot, but Richard expects that number to incrementally rise as cluster technology for virtual environments improves.
- For CPU-bound applications running in VMs, limit the number of virtual CPUs (vCPUs) to <= the number of physical CPU (pCPU) cores. This reduces the load on hypervisor CPU scheduling.
- HA architectures for Xen-based platforms continue to lag VMware HA in terms of feature set
- SteelEye and Veritas Cluster provide good third party alternatives and application awareness for clustering virtual machines
- Future trends include:
- Broader failure mode monitoring and response (VMs will no longer be treated as black boxes)
- Branch office failover solutions that do not require a SAN (e.g., Stratus Avance, Marathon FT, LeftHand VSA)
- Automated business continuity response (vendors with such solutions today include VMware Site Recovery Manager and Symantec Veritas global/wide area cluster)
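The fan-out failover behavior Richard described in the first bullet can be illustrated with a toy placement function. This is a hypothetical sketch to show the concept, not any vendor’s actual placement algorithm:

```python
def fan_out_failover(failed_vms, surviving_node_load):
    """Distribute a failed node's VMs across ALL surviving nodes,
    restarting each VM on the currently least-loaded node, rather than
    piling every VM onto the next node in line.

    failed_vms:         list of VM names from the failed node
    surviving_node_load: dict mapping node name -> current VM count
    Returns a dict mapping VM name -> target node."""
    placement = {}
    load = dict(surviving_node_load)  # don't mutate the caller's dict
    for vm in failed_vms:
        target = min(load, key=load.get)  # least-loaded surviving node
        placement[vm] = target
        load[target] += 1
    return placement
```

In the three-node test Richard suggests, pulling node 1 should produce a placement spread across nodes 2 and 3 (as this sketch does), not a placement that lands everything on node 2 first.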
The Real Security Risks of Virtual Data Centers (Alessandro Perilli)
Alessandro did an excellent job breaking down security myths as well as threats to the virtual data center. Highlights:
- Be wary of the VMware recommendation of the DMZ-in-a-box architecture. If you haven’t seen it, take a look at page 6 of the DMZ Virtualization with VMware Infrastructure white paper. Alessandro noted that software isolation has not reached a point where it can be fully trusted, and thus physical isolation of security zones is required. His points mirrored Burton Group’s reference architecture virtualization template, so I could not agree more with his assessment.
- Any software can be compromised, including the hypervisor. Alessandro pointed out that VMware has issued over 60 patches this year alone. If you think that number is surprising, go here and take a look. Select your ESX server version from the drop-down menu, click Go, and you’ll see the results. To be fair, many of the ESX patches are for the Red Hat Enterprise Linux-based ESX console. If you do a similar search on ESXi 3.5 (the embedded hypervisor), only five patches have been issued this year.
- Alessandro discussed the threat of VMM guest hopping and pointed to a Google study as a proof of concept.
- Be wary of attack avenues against a hypervisor based on the hypervisor’s APIs (e.g. VMsafe).
- Bottom line – do not blindly trust virtualization and look to port existing security practices to your virtual environment.
Virtual Desktops: Ready for Mainstream Adoption (Simon Crosby)
Simon hammered away on the Citrix message that separation of applications from operating systems, data center workloads from servers, and desktops from PCs was key to virtualized desktop delivery. He also noted that single-instance storage was key to virtual desktop scalability and feasibility on an enterprise scale. I agree. Even at a modest 4GB per virtual desktop image, if you consider 2,000 desktops you would need 8TB of storage. Separation and runtime insertion of applications will clearly play a large role in the future virtual desktop, and Crosby was quick to highlight Citrix’s position in this area. When asked about a hypervisor for the physical desktop, Crosby indicated that such a solution was not something Citrix would have in the near future. To me, the desktop hypervisor is important, as it would allow the mobile user to maintain separate and secure work and personal environments. I see the desktop hypervisor as a long-term requirement of virtual desktop infrastructures, as some use cases will warrant it.
Simon naturally was pushing the Xen architecture in his usual subtle way, but not all Citrix customers have been ready to run Citrix XenDesktop on XenServer. In fact, one of the Citrix XenDesktop customers highlighted at the recent Citrix Synergy conference was running XenDesktop on VMware Virtual Infrastructure 3. Still, there’s no doubt that XenServer development is coming along quickly, and I agree with Simon’s assessment that booting thousands of desktop VMs from a single shared VM instance (with a user’s applications and profile injected at runtime) is the right architecture. As XenDesktop matures, VMware is going to need a similar delivery model in order to remain competitive. VMware Server has supported linked cloning for years, which could provide a similar type of service. So it’s not out of the question for us to expect to see a similar architecture in ESX environments at some point. Still, when linking virtual disk images to support a large VDI environment, VMware is going to have to show us some very good scalability numbers for such an architecture to be considered enterprise-ready. On the other hand, leveraging single-instance storage features in the array, such as with a Network Appliance filer, is something you can do already for both VMware and Citrix virtual desktop environments today. The simplicity of managing virtual desktops, user profiles, and desktop applications is ultimately going to determine who wins the virtual desktop war. Citrix has shown specific examples of how it makes this all possible. It’s time for VMware to do the same. VMware, please don’t just point out the individual components of your VDI architecture. Show us how you can go head-to-head with Citrix with regard to application management and lowering the cost of ownership for managing desktop operating systems. Citrix has shown us an end-to-end solution for desktops, applications, and user profiles.
Many Burton Group clients are considering deploying virtual desktops on a large scale and are eager to see how VMware helps them to address “the big picture.” VMware – Simon Crosby just lobbed a heavy virtual desktop volley in your direction. What’s your response?
Note: Originally posted to Burton Group’s Data Center Strategies blog.
Tony Asaro, Virtual Iron’s new Chief Strategy Officer, recently blogged on Virtual Iron’s place in a competitive hypervisor market. Regardless of which hypervisor wagon you’re currently tied to, Tony’s perspective makes for a very good read, and I’m glad to see Virtual Iron embracing the SME market, which has been the greatest beneficiary of Virtual Iron’s solution set.
Here’s an excerpt:
Where does Virtual Iron fit into this picture? Right now, we are the little guy in a land of giants. Virtual Iron has a really good product. We have thousands of production implementations (and rapidly growing) and a healthy and increasingly strong channel. However, in spite of this, we are inconsequential to the ecosystem I’ve been talking about. But guess what? It really doesn’t matter.
Where we win – where we matter – is with small and medium enterprises (SME). They have no loyalty to VMware. They are looking for a server virtualization solution that has all of the advanced capabilities and features they need to protect and manage their environments; they want an easy to use solution; and it has to be cost effective so that it doesn’t consume the lion’s share of their IT budget. That is what we bring to the table and it is really a no-brainer for them once they get their hands on it. We also matter to the channel. Many of our channel partners feel that VMware is oversaturated. Since everyone is selling it, they can’t make any money. And their SME customers can’t afford VMware, so they are looking for an alternative. We are that alternative.
You can read Tony’s full post here – A Virtual Monopoly.