Archive for category Virtualization Management
Over the past twelve months, the innovation in the client virtualization space has been pretty remarkable, and several solutions will be on display at next week’s Citrix Synergy conference. Because of the sheer volume of vendors on the show floor, I thought I’d highlight a few that may be under your radar but are worth visiting. Granted, there are literally hundreds of vendors in the space and most offer some value to your virtual infrastructure. Rather than point out the obvious ones that you’d visit anyway (e.g., Citrix, VMware, Microsoft, HP, Wyse, AppSense, Symantec, Quest Software, and RES Software), I thought I would call attention to some of the not-so-obvious. I know… get on with it, so here they are.
Virtual Bridges will be showing the 4.0 release of their VERDE suite. VERDE is a server-hosted virtual desktop solution that competes with products such as Citrix XenDesktop, VMware View, and Quest vWorkspace. What’s unique about VERDE is that its backend infrastructure has no Windows requirements, which has appealed to cost-sensitive organizations and service providers. VERDE 4.0 has some interesting features such as CloudBranch, which allows organizations or service providers to support low-bandwidth WAN connections by deploying a proxy-like caching server to a remote site. This approach allows organizations to retain centralized management while serving up virtual desktops over the remote office LAN – key to satisfying user experience requirements. I had blogged about this type of approach last year. There’s quite a bit in the core architecture, including Windows, Linux, and Mac endpoint support and even local desktop support (like VMware View offline desktops or MokaFive) that can be booted from a USB drive. For more information on VERDE, take a look at Gabe Knuth’s review on brianmadden.com.
Citrix recently made an investment in Kaviza and since that time (April) Kaviza has been getting considerably more attention. Kaviza offers a VDI-in-a-box solution that is complementary to existing desktop virtualization solutions. With Kaviza, organizations can deploy a single on-premise server to host virtual desktops at a remote site. The solution is hypervisor agnostic and currently supports VMware vSphere and Citrix XenServer. The solution gives organizations or service providers a way to deliver virtual desktops to remote facilities without having to worry about WAN connectivity impacting performance. Kaviza was one of the vendors that participated in my Virtual Desktop NAS Vendor Challenge last year and you can read more about their solution in this post.
RingCube was another participant in the Virtual Desktop NAS Challenge and has gotten considerable traction over the last twelve months (you can read their guest post here). Having large enterprises such as ING Bank use RingCube’s vDesk product in production has helped establish RingCube’s credibility. RingCube’s client-hosted virtual desktop solution allows users to run their virtual workspace on their endpoint system, leveraging the local OS resources, which means a separate VM isn’t needed. The vDesk architecture is closer to OS virtualization in its approach. Because there’s no added overhead from running a separate full-blown VM, organizations like the fact that they can use existing endpoint hardware (without having to upgrade memory, for example) for the vDesk solution.
Wanova was one other participant in the VD-NAS Challenge and their guest post is available here. Wanova leverages some very intelligent streaming technology they call Distributed Desktop Virtualization. The solution centralizes desktop OS and application management and can be used to deploy user environments to physical or virtual endpoints. Wanova’s solution also allows IT to support a single base image for a desktop OS type while also supporting user-installed applications.
Unidesk is an interesting company that started making noise at last year’s VMworld North America conference. Unidesk positions itself as a complement to VMware View and Citrix XenDesktop (you can read about their architecture here), and prides itself on its layering technology. With Unidesk, IT can manage shared non-persistent golden virtual desktop images while still giving users the ability to install their own applications. Supporting user-installed apps on non-persistent images is an extremely difficult engineering challenge (both VMware and Citrix will admit this), and Unidesk claims to have the answer. Their booth is definitely worth a stop when you venture into Synergy’s expo hall.
There will be plenty of attention devoted to the bare metal client hypervisor at Synergy. While folks wait out the general availability of the Citrix XenClient and VMware Client Virtualization Platform (CVP) solutions, they have the opportunity to look at a bare metal client hypervisor shipping today – Virtual Computer’s NxTop. Many of our large enterprise clients see the client hypervisor as a 2012 initiative, but that’s not to say the technology isn’t useful as a small business or department-level solution today. Also, even if your plans for client hypervisor are further down the road, it’s always good to begin building your knowledge base of the technology and to start thinking about the governance issues (e.g., treatment of personal user VMs on the corporate LAN) they create.
Server Virtualization and Cloud
Synergy is starting to pick up steam as a server virtualization and cloud event, and I didn’t want to ignore some of the innovative vendors in that space too. Vendor booths that I’ll be stopping by include:
Plenty Else to See
Like I said previously, there are plenty of other vendors that are bringing value to the industry. For example, cruising by the McAfee, RSA, and Trend Micro booths is a good idea. All three vendors are bringing considerable innovation to security and compliance in virtual server and desktop infrastructures. In the storage space, I recommend visiting the booths of the three winners of the Citrix Ready StorageLink Challenge: NetApp, HP, and GreenBytes.
Between the emerging solutions, excellent presentations, and always engaging hallway discussions, Synergy is shaping up to be a great conference. I hope to see you there. If I failed to recognize a particular product you find interesting, please post it as a comment.
With the postponement of Catalyst Europe, I had the opportunity to virtually attend the Microsoft MMS conference keynotes on Tuesday and Wednesday of this week. MMS has long been one of Microsoft’s best conferences, and this year didn’t disappoint. I’m not going to rehash the major announcements, but you can read the full details in the following Microsoft System Center team blog posts:
- System Center Service Manager 2010: An Integrated Platform for IT Service Management
- Configuration Manager vNext: User Centric Client Management
- Mobile Device Management with Configuration Manager vNext
- Configuration Manager 2007 R3 Beta released
- What’s coming up with the next versions of SCOM and VMM!
- User Centric, and System Center Configuration Manager vNext
- MMS 2010 Kicks Off in Las Vegas
Tuesday Keynote – Bob Muglia
- Bob opened by stating that Microsoft has been building dynamic IT management for the last seven years as part of their Dynamic Systems Initiative. Microsoft is essentially underlining the fact that they are not newcomers to dynamic IT and cloud and is playing to its strength in systems management.
- Bob highlighted the need for standard service models, and I agree. I started discussing this topic with vendors in 2008 and blogged about standard models in the security context early last year. I recently discussed this issue in my posts on metadata standards and the infrastructure authority. Still, vendors need to move beyond talk about standards for service delivery, metadata, and application interfaces, and deliver them. Mobility to and among cloud infrastructure-as-a-service providers requires these standard models. It’s time for vendors to show their hands, even if they’re holding proprietary service delivery models, metadata sets, and interfaces today. There are far too many competing interests to expect vendors to agree on an industry standard any time soon. That said, progress is being made on the standards front. SNIA’s Cloud Data Management Interface (CDMI), the DMTF’s Open Cloud Standards Incubator, and the Cloud Security Alliance’s work on standard cloud security models are three good examples.
- It would be nice if Microsoft offered a complete set of documentation on how to reproduce their on-stage demos. The orchestration practices that were demonstrated are of high value, and Microsoft should share the configuration information with their clients.
- Microsoft’s demos were Microsoft-centric, as expected. I would like to see Microsoft demonstrate integration with third party management products, which would strengthen their position on interoperability. Most Gartner clients are not homogenous Microsoft shops; demonstrating orchestration capabilities across multi-vendor management stacks would speak to the needs of the typical enterprise organization. If Microsoft doesn’t want to do this at a conference, then why not offer this information online?
- I thought Microsoft made a great move in acquiring Opalis, and liked seeing the Opalis integration and System Center Service Manager 2010 shown on-stage.
- Microsoft demonstrated long distance VM live migration, and in the process Muglia took a swipe at VMware, noting that moving VMs to new sites requires deep integration and validation across all management services. In the demo, Microsoft was able to show processes such as validating that a recent backup was completed before allowing the inter-site live migration to continue. While the demo was impressive, I would have been even more impressed if Microsoft validated the recent backup by integrating with a third party tool such as NetBackup.
- Microsoft is talking cloud using the terms “shared cloud” and “dedicated cloud.” There are so many disparate terms out there for cloud that pretty soon Rosetta Stone will release a CD on speaking cloud. The Gartner/Burton teams have been working closely on defining a core set of cloud terminology, and it’s important for vendors in the space to adopt common definitions.
- Edwin Yuen demonstrated System Center Virtual Machine Manager (SCVMM) vNext, which will include drag-and-drop capabilities for deploying multi-tier applications. The demo was powerful, but my existing concerns about SCVMM went unanswered. Today the product is not extensible, and it does not support the Open Virtualization Format (OVF) industry standard; I’m hoping those two features make it into SCVMM vNext (see the OVF sketch following these keynote notes).
- Microsoft’s demo of cloud service management looked solid from the administrator’s point of view, but nothing was shown from the consumer’s point of view. IT service delivery requires the presentation of services to consumers using intuitive interfaces that the customer understands. Microsoft has yet to show a consumer-centric view of how consumers will interact with Microsoft cloud service management.
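A quick aside on OVF, since I called out SCVMM’s lack of support for it above: an OVF package is just an XML descriptor plus referenced disk images, so any management tool can read it. The following is a minimal Python sketch (illustration only) that lists the virtual systems and their CPU and memory allocations from an OVF 1.0 descriptor. The file name is hypothetical, and the namespace URIs reflect my reading of the OVF 1.0 spec, so adjust as needed for your descriptor.

import xml.etree.ElementTree as ET

# Namespace URIs per the OVF 1.0 specification (adjust if your descriptor differs).
NS = {
    "ovf": "http://schemas.dmtf.org/ovf/envelope/1",
    "rasd": "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData",
}

# CIM ResourceAllocationSettingData resource types: 3 = processor, 4 = memory.
# Memory units depend on the item's AllocationUnits; MB is assumed here.
RESOURCE_NAMES = {"3": "vCPU count", "4": "Memory (MB, assumed)"}

def summarize_ovf(path):
    tree = ET.parse(path)
    for vs in tree.findall(".//ovf:VirtualSystem", NS):
        print("VirtualSystem:", vs.get("{%s}id" % NS["ovf"]))
        for item in vs.findall(".//ovf:Item", NS):
            rtype = item.findtext("rasd:ResourceType", default="", namespaces=NS)
            qty = item.findtext("rasd:VirtualQuantity", default="", namespaces=NS)
            if rtype in RESOURCE_NAMES:
                print("  %s: %s" % (RESOURCE_NAMES[rtype], qty))

if __name__ == "__main__":
    summarize_ovf("appliance.ovf")  # hypothetical descriptor file name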
Wednesday Keynote – Brad Anderson
- Brad opened by talking about how the Windows 7 release was the most significant event in the desktop space in a very long time. I would counter that equally significant was Microsoft’s announcement to end-of-life (EOL) Windows XP in April 2014. The XP EOL announcement put IT organizations “on the clock” to replace their existing client endpoint OS, and in many cases re-architect all major aspects of user experience and application delivery.
- There was a good discussion about power management, but one interesting area of research that was not mentioned was Microsoft’s work on in-guest VM power management. Take a look at Joulemeter for more information.
- I liked hearing Brad talk about the future desktop representing a convergence of services. This is a concept I recently discussed in the post “The Next Gen Desktop’s Cloudy Future.”
- I saw a bit of irony in Microsoft discussing Hyper-V R2 SP1’s dynamic memory feature on stage. A year ago Microsoft was solidly against VMware’s memory overcommit feature, which allows VMs to share a physical pool of memory on a server. Jeff Woolsey did a nice job describing Hyper-V’s dynamic memory capabilities in the following posts:
- Microsoft demonstrated the RemoteFX technology that was acquired from Calista. It will be interesting to see how quickly Microsoft’s IHV partners offer a shipping solution. Several have stated their intent to support the technology.
- Microsoft demonstrated their new Windows Intune product – a cloud service for managing PCs. While I like where Microsoft is taking PC management, I’m still disappointed that they have yet to address desktop OS licensing for cloud-based desktop-as-a-service (DaaS) deployments. Device-based desktop OS licensing is incompatible with the on-demand and device-agnostic nature of cloud service delivery, and Microsoft needs to tackle this issue sooner rather than later.
- I was disappointed by the System Center Service Manager demonstration on compliance validation. The demo included no mention of virtualization or virtual infrastructure, which is the default x86 application platform of many of our clients. If the product is not providing controls and validation capabilities for multi-tenant VMware vSphere, Microsoft Hyper-V, and Citrix XenServer environments, then it is not ready for prime time.
Overall I was very impressed with the conference keynotes. System Center Service Manager and Microsoft’s increasing integration of the Opalis software are two areas to watch. Muglia’s talk about standard service delivery models also leads me to believe that Microsoft is poised to aggressively go after the cloud provider space. The release of Microsoft’s Dynamic Infrastructure Toolkit and the growing number of partners in Microsoft’s Dynamic Data Center Alliance (DDA) are proof of that. What did you think of MMS 2010? I’d love to hear your thoughts.
Yesterday VMware acquired several products from EMC’s Ionix management portfolio: Server Configuration Manager (formerly Configuresoft), FastScale, Application Discovery Manager (formerly nLayers), and Service Manager (formerly Infra). In my opinion, the move makes sense because the products will get considerably more traction under the VMware brand than as part of the EMC Ionix umbrella. VMware has successfully launched several new management products resulting from acquisition or internal development – Lab Manager, Lifecycle Manager, CapacityIQ, and Site Recovery Manager – to name a few. I expect similar success for the products that were under the Configuresoft and FastScale brands as well (prior to their acquisition by EMC).
While a broad management portfolio is needed, seamless integration is far more important to our clients’ cloud initiatives (with a heavy reliance on automation) than a platform consisting of several required but disjointed products. I recently discussed these concerns in greater detail in these posts:
- The Cloud Mystery Machine: The Need for an Infrastructure Authority
- VMworld Day 1 Keynote – A Few Thoughts
That brings me to VMware. The process of collecting “functionality checkboxes” is a longstanding strategy taken by leading management vendors. Rather than integrate, vendors too often rely on product marketing. I think this picture from oddlyspecific.com captures the essence of enterprise management product marketing pretty well.
Let’s face it. It’s far easier (and less costly) to tell an integration story via creative product marketing than it is to actually commit the engineering resources to build a truly integrated solution. In other words, let’s take a management platform built to solve a specific problem years or decades ago, and bolt on some other products to make it relevant to today’s management challenges. The big four in management (i.e., BMC, CA, HP, and IBM) have historically taken this approach, and one could say that VMware is doing the same. In addition, one could argue that CA’s 3Tera acquisition is another “sign of the times.”
When vCenter and its backend database were originally architected, enterprise cloud management wasn’t the goal. VMware has already commented on its work regarding a Linux-based vCenter VM appliance. I see the movement to the vCenter VM appliance as VMware’s chance to get vCenter right, with the extensibility needed to grow the vCenter schema as cloud management requirements further mature.
Over the last couple of years, VMware has worked to transition itself into a management company. I blogged about this transition at VMworld 2007 and my Gartner colleague Cameron Haight also discussed this trend in his 2007 document “Why VMware Must Morph into a Management Company.” If that transition follows historical patterns, VMware’s clients should worry. Automation and infrastructure-as-a-service are far too complex to be solved by legacy bolt-on management approaches. The existing disconnects between VMware’s DRS service and other management products or features such as vShield Zones and CapacityIQ are just two examples, but there are plenty more. VMware’s problem, like that of many vendors, is not that the left arm doesn’t know what the right arm is doing; if the arms were attached to the same body, that would be a good start. Instead, we have a bunch of individual cooks, all with their hands in the virtual infrastructure pot. VMware made a good move yesterday to broaden its management portfolio; however, they have a lot of work to do. Cloud and infrastructure-as-a-service are highly disruptive to traditional IT operational management, and in turn create opportunities for new vendors to unseat the big four in the enterprise management space. Following a path that leads to legacy bolt-on style disjointed enterprise management isn’t going to cut it. That being said, I know that VMware is smarter than that, and let’s hope their clients won’t be led down a path with a familiar ending.
On Monday VMware announced that they will acquire SpringSource – a move that I believe was both essential and astute.
Let’s start with essential. Sure – it’s hard to argue against VMware’s merits as a hardware infrastructure-as-a-service (HIaaS) platform for both internal and external clouds. Note that for more information on cloud platforms and definitions, Drue Reeves’ paper “Cloud Computing: Transforming IT” is a great place to start. Living at the bottom of the stack (i.e., infrastructure) can only take VMware so far. Microsoft is confident that they can methodically chip away at VMware’s market share by tying their applications and management platform to their own Hyper-V-based virtual infrastructure. Richard Jones’ post “Virtualization Wars, what can VMware learn from the past?” talks about this issue in great depth. VMware’s infrastructure focus is its strength, but at the same time its Achilles heel. VMware needs to move up the stack to keep up with Microsoft long term.
I see the move as astute because SpringSource gives VMware the right platform at the right time. Chris Haddad – with our Application and Platform Strategies Service – detailed how a combined VMware and SpringSource platform will impact application development. Virtualization (i.e., server, client, application, storage, I/O, and network) and cloud are fundamentally changing the way that both server and desktop applications are delivered. Last year I wrote about how this transformational period creates opportunity for Microsoft’s competitors such as Apple. Cloud-based application delivery (both internal and external) is equally disruptive to traditional server application delivery models. What this means is that the time to redefine application delivery and to unseat the incumbents is right now.
At Catalyst, Microsoft’s Mark Russinovich talked about how virtualization is changing the way we think about OS and application delivery. Even Microsoft knows that the monolithic OS is not the platform of the future. From a platform developer’s point of view, the hypervisor represents the new hardware. OSs incapable of leveraging the dynamic nature of virtualization simply do not give applications the dynamic capabilities they may need. For example, suppose you want to add more compute resources to a Windows application. For stateless applications, you could deploy them in a network load balanced cluster and scale them out across multiple VMs. For stateful applications, scaling out isn’t as easy (some do this better than others). If you want to hot-add an additional virtual CPU, for example, the guest OS had better be Windows Server 2008 Datacenter edition; no other edition supports CPU hot-add. Clearly, this is an area that Microsoft will address, but it speaks to the problem. The same can be said for hot-added memory: if the application requires a restart in order to take advantage of dynamically added memory, such features are of little value. While we’re talking about density and cloud, we also must think about efficiency. When deployed to a single physical box, application processing efficiency may not have meant much for tier 2 or 3 apps. Put an app on a cloud-based infrastructure that uses consumption-based pricing (looking ahead) and it’s another story. The more efficiently an application and its underlying platform process data, the less expensive it is to run in the cloud.
Finally, I don’t see VMware as grabbing SpringSource just to have the “PaaS checkbox” to line up against Microsoft’s Azure platform. VMware has to think beyond that. Getting a PaaS platform with a large installed base and developer ecosystem gives VMware instant credibility, and broadens the services that can be offered by VMware’s service provider partners. Tomorrow’s dominant cloud providers won’t just have the checkboxes; they’ll offer innovative, dynamic management while being highly efficient (again, the fewer the CPU cycles, the lower the cost). Microsoft, Citrix, and other competitors in the cloud space also realize that the opportunity exists for new cloud-centric platforms. Having a good core platform is clearly important, but tightly aligning applications and infrastructure to handle the dynamic nature of virtualization and cloud (e.g., dynamic resource add/subtract, distributed data access and potential latency, and security) is new territory. VMware’s core leadership, including Paul Maritz and Tod Nielsen, has been there before, dating back to their time at Microsoft. The next steps that VMware takes will likely define how history repeats itself.
History will repeat, but which history will it be – Paul and Todd helping build the market leading software infrastructure? Or Microsoft using tight application, OS, and management integration to thwart competitors?
VMware’s clever marketing folks can call their latest release whatever they want (vSphere 4.0 if you’re keeping track), but to me it’s the V Series Mainframe.
VMware is taking mainframe-class availability, performance, and infrastructure management principles and bringing them to commodity hardware. vSphere 4.0’s release, in my opinion, makes it hard to argue against VMware’s intentions of building a software mainframe. VMware Fault Tolerance (FT), for example, is one of the new features that provides the availability levels required by many tier 1 applications. This is especially critical for home-grown tier 1 apps that do not have built-in resiliency. VMware FT keeps a single VM in lock-step on two physical hosts, making the VM fully resilient to the failure of a physical host. FT is a 1.0 release and does have some limitations. For example, only VMs with a single virtual CPU are supported. Note that this isn’t a VMware-only limitation since competitors such as Marathon Technologies cannot mirror VMs with multiple vCPUs either. It’s a tough problem to solve and one that’s going to take some time.
Back to the announcement. VMware is further cementing their position that they are a cloud OS, and I agree. Many tasks associated with the traditional OS (e.g., resource scheduling, QoS) are moving down to the hypervisor and associated virtual infrastructure, so the roles of the traditional monolithic OS are changing. VMware isn’t alone here. Microsoft is retooling its OS and application delivery methodologies as well.
Many of our clients are interested in internal cloud and are looking for practical steps they can take today to start down the cloud path. Most aren’t able to seriously consider external cloud out of concerns regarding security and regulatory compliance. However, organizations can begin building an internal cloud architecture that is capable of leveraging external cloud resources once the predominant security and compliance riddles are solved. VMware is banking that if vSphere is the foundation for an enterprise’s internal cloud, the enterprise will look at vSphere-based external cloud resources once the time is right.
vSphere 4.0 includes a laundry list of new features, including:
- Distributed Power Management (DPM)
- Availability of the VMsafe API
- Distributed virtual switch and support for the Cisco Nexus 1000V
- Host profiles
- vShield Zones
- Thin provisioned virtual hard disks
- I/O as a factor in VM placement
- Hot memory add
- Partial host failure detection
When DPM was announced with ESX 3.5, Burton Group advised clients to stay away from using it for two reasons:
- VMware considered the feature “experimental” and wouldn’t officially support it themselves
- VMware’s IHV partners would not officially support it either
I had blogged on DPM back in November, and while the post incited some strong vendor reaction, it served its purpose – moving the discussion forward on official IHV support for DPM, an assurance many of our clients wanted before they would consider implementing it in production. VMware now supports DPM, which is a good first step. Burton Group has had serious dialogues with all major IHVs on the topic of DPM for nearly a year, and we believe that official support from the server IHVs isn’t far off, so stay tuned. DPM can deliver substantial power and cooling cost reductions by shutting down unneeded servers in a given ESX cluster and turning them back on once they’re needed again. Once the IHVs step up and do their part, I expect some Burton Group clients to begin implementing DPM for some of their workloads.
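To make the DPM concept a bit more concrete, here is a rough Python sketch of the kind of watermark-based decision logic a power management feature like DPM applies. To be clear, this is not VMware’s actual algorithm; the thresholds, host objects, and power actions are invented for illustration, and a real implementation would live-migrate VMs off a host before powering it down.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    powered_on: bool
    cpu_demand_mhz: int     # aggregate CPU demand of the VMs on this host
    cpu_capacity_mhz: int

LOW_WATERMARK = 0.45   # below this average utilization, consolidate and power off a host
HIGH_WATERMARK = 0.80  # above this average utilization, power a standby host back on

def evaluate_cluster(hosts):
    active = [h for h in hosts if h.powered_on]
    standby = [h for h in hosts if not h.powered_on]
    capacity = sum(h.cpu_capacity_mhz for h in active)
    demand = sum(h.cpu_demand_mhz for h in active)
    utilization = demand / capacity if capacity else 1.0

    if utilization < LOW_WATERMARK and len(active) > 1:
        # Pick the least-loaded host; its VMs would be migrated away before shutdown.
        candidate = min(active, key=lambda h: h.cpu_demand_mhz)
        return "evacuate and power off " + candidate.name
    if utilization > HIGH_WATERMARK and standby:
        return "power on " + standby[0].name + " and rebalance"
    return "no action"

cluster = [
    Host("esx01", True, 4000, 16000),
    Host("esx02", True, 2500, 16000),
    Host("esx03", False, 0, 16000),
]
print(evaluate_cluster(cluster))   # prints "evacuate and power off esx02"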
Thin provisioned virtual hard disks (i.e., virtual disk files that grow as data is added to them) aren’t a new concept. VMware Workstation has had this feature since its inception, and it’s in other hypervisors such as Virtual Server 2005 and Hyper-V too. It was even in ESX 3.5, but wasn’t officially supported. VMware is high on thin provisioned virtual disks, but keep in mind that there will be a small performance overhead associated with using this feature; VMware has yet to publish a benchmark illustrating the performance tax. For enterprise implementations, thin provisioning is best done in the storage array. For smaller deployments and for deployments involving arrays that don’t support thin provisioned storage, using thin provisioned virtual disk files can result in considerable storage savings.
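If you want to see the thin provisioning idea without firing up a hypervisor, sparse files on most Linux/UNIX filesystems behave analogously (this is an analogy only, not VMware’s VMDK format). The short Python sketch below creates a file with a 10 GB apparent size that consumes almost no physical blocks until data is actually written; results will vary on filesystems without sparse file support.

import os

PATH = "thin_disk.img"          # throwaway demo file
LOGICAL_SIZE = 10 * 1024**3     # 10 GB apparent ("provisioned") size

with open(PATH, "wb") as f:
    f.truncate(LOGICAL_SIZE)    # sets the file size without allocating blocks
    f.seek(100 * 1024**2)       # simulate the "guest" writing 1 MB at the 100 MB mark
    f.write(b"\xff" * 1024**2)

st = os.stat(PATH)
print("Apparent size : %.0f MB" % (st.st_size / 1024**2))
print("Allocated size: %.0f MB" % (st.st_blocks * 512 / 1024**2))  # st_blocks is in 512-byte units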
During Steve Herrod’s keynote, he passionately stated how vSphere 4.0 can run practically any x86 workload, including high-end I/O intensive databases. He went on to tout the value of VMware’s distributed resource scheduler (DRS) and DPM features. Both DRS and DPM relocate workloads to balance resource utilization across an ESX cluster or to shut down unneeded physical servers. That all sounds good, but there’s one problem: the intelligence used by DRS and DPM only takes memory and CPU utilization into account when determining VM placement. I/O utilization is ignored. This means it’s possible that a relocated VM will be I/O bound as soon as it lands on a new physical host. I’ve talked to VMware technology partners who are eager to provide deeper I/O utilization information to vCenter. The problem, however, is that vCenter doesn’t have an API that can be used for this purpose, nor does it have the metadata structure for storing this type of information. Until VMware can factor I/O into VM placement decisions, you should use caution when considering enabling DRS or DPM on I/O intensive workloads. To be fair, VMware’s competitors don’t take I/O into account for VM placement decisions either, but I still see it as something that needed to be pointed out.
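Purely for illustration, here is the kind of toy placement score I’d like to see DRS move toward: one that weighs I/O headroom alongside CPU and memory. Again, neither DRS nor DPM works this way today, and the weights and host numbers below are made up.

def placement_score(host, weights=(0.4, 0.3, 0.3)):
    """Higher score = more attractive target for an incoming VM."""
    w_cpu, w_mem, w_io = weights
    return (w_cpu * (1.0 - host["cpu_util"])
            + w_mem * (1.0 - host["mem_util"])
            + w_io * (1.0 - host["io_util"]))   # the dimension DRS ignores today

hosts = [
    {"name": "esx01", "cpu_util": 0.40, "mem_util": 0.55, "io_util": 0.90},
    {"name": "esx02", "cpu_util": 0.60, "mem_util": 0.60, "io_util": 0.30},
]

# On CPU and memory alone esx01 looks better; once I/O headroom is factored in,
# esx02 wins and the VM avoids landing on an I/O-saturated host.
best = max(hosts, key=placement_score)
print("Place VM on", best["name"])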
Hot resource add (e.g., RAM) is a nice new feature too. One thing to remember, though, is that hot adding memory to a running VM is only useful if the application running inside the VM can take advantage of the new memory without a restart; if the application must be restarted, the VM is as good as offline anyway. That being said, how applications are able to leverage hot memory add is something you should be asking prospective software vendors about and including in your RFPs.
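As a footnote on the guest OS side of the equation: on Linux, hot-added memory typically appears as offline memory blocks that must be brought online through sysfs before the OS (let alone the application) can use it. Here is a hedged sketch of that step; it assumes the standard Linux memory hotplug sysfs interface, requires root, and says nothing about whether your application can then actually grow into the new memory.

import glob

def online_new_memory_blocks():
    """Bring any offline (freshly hot-added) memory blocks online."""
    onlined = 0
    for state_path in glob.glob("/sys/devices/system/memory/memory*/state"):
        with open(state_path) as f:
            state = f.read().strip()
        if state == "offline":
            with open(state_path, "w") as f:
                f.write("online")     # ask the kernel to start using this block
            onlined += 1
    print("Brought %d memory block(s) online" % onlined)

if __name__ == "__main__":
    online_new_memory_blocks()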
On the high availability side, I’m still waiting on partial node failure detection. Why is this important? Consider an ESX host that’s online but, due to a physical storage controller failure, is unable to meet required service levels. While storage access may remain available thanks to multipath support, you may not have enough I/O capacity to meet service level requirements. Intelligence that allows the cluster to rebalance VMs in response to reduced I/O availability, for example, would bring vSphere even closer to VMware’s goal of building a software mainframe.
Based on the last few paragraphs, it may seem like I’m raining on VMware’s parade, but that’s not my intent. vSphere 4.0 is a major release, and if the massive performance improvements measure up to VMware’s claims, the hardware savings resulting from the associated VM consolidation densities will be enough to cost-justify a vSphere 4.0 upgrade. Of course, the enhanced security (e.g., VMsafe API) and networking features will accelerate adoption as well.
Once again, VMware has raised the feature bar. Next, I’m looking forward to seeing how Citrix and Microsoft respond at each of their conferences next month.
Note: Originally posted to Burton Group’s Data Center Strategies blog.
Date: June 16th
Cloud is not the latest marketing buzzword, but rather a serious and emerging datacenter architectural framework. Don’t let the excess hype and vendor misrepresentations fool you or distract you from how you can practically leverage the cloud today. This session identifies the technology and business process frameworks necessary for building an internal cloud infrastructure, while laying a foundation to leverage external cloud resources. A properly designed cloud infrastructure offers too many financial, technical and business process benefits to ignore.
This session cuts through the cloud hype and offers guidance on how you should be thinking about the cloud today, and what you should be doing right now. Even if you are not ready to leverage external cloud resources today, internal architecture and product decisions will directly impact your organization’s ability to connect to external cloud resources in coming years.
Session attendees will see:
- Strategies for overcoming the technical and political barriers that stall internal cloud adoption.
- Blueprints for architecting an internal cloud.
- Reality check on what is and is not possible with today’s management tools.
- Guidance on ensuring security and regulatory compliance with both internal and external cloud infrastructures.
VMware Webinar: Best Practices for Capacity and Configuration Management with Virtual Infrastructure
Event: VMware Webinar: Best Practices for Capacity and Configuration Management with Virtual Infrastructure
Date: April 29th
Capacity and configuration management are often misunderstood or overlooked when virtual infrastructures are first deployed. This session examines practical capacity management practices, with a special focus on ensuring that applications meet required service levels. Attendees will learn about setting alert thresholds, dynamic VM load balancing, and ensuring that I/O, compute, and memory requirements are met. The second half of this session focuses on configuration management topics, while highlighting tools and trends that ease configuration management and help enforce change control processes in the virtual infrastructure.
Attendees of this webcast will see:
- Proven industry practices for effectively monitoring, managing and automating all aspects of capacity management, including compute, memory, storage and network I/O, and storage capacity.
- Strategies for documenting service level assurance in the virtual infrastructure.
- Configuration management best practices and available tools for getting the job done.
If you haven’t heard already, the mad scientist Andrew Kutz is at it again. This time he’s created a VI client plug-in that converts generic VM icons in the VI client interface into OS-specific icons. It’s pretty slick, and you can read more about it here. Here’s an image showing how VMs appear once the plug-in is installed:
If you missed VMworld Europe, you can now see all of the conference sessions online, assuming you have a VMworld.com subscription. If not, you may want to consider getting a subscription, which at $699 is a pretty good bargain considering the amount of content you get. Anyway, if you’re interested in seeing any of my VMworld Europe sessions, here are the links:
Many Burton Group clients have aggressive plans for virtualizing production applications in 2009. This is fueled by a number of factors: the economy, the emergence of hardware-assisted memory virtualization and the maturity of existing virtualization deployments. In this month’s issue of Virtualization Review magazine, I discuss new complexities that are emerging with application troubleshooting on virtual infrastructures. The article is available here.