Virtual Server Infrastructure vs. Virtual Desktop Infrastructure, Part 1
I have been working almost exclusively on virtualization projects for most of the past 5 years. Before that I spent a lot of time on Citrix/Terminal Services projects with the associated profile, application delivery, and Windows OS optimizations. During that whole time, there has been a constant drumbeat from vendors about user sessions per server, users per CPU, users per core; then VMs per server, VMs per CPU, VMs per core, and so on. I come from a background in retail sales and sole-proprietor businesses and their IT operations, and I "get" the lure of these ratios and the ROI calculations and purchase price justifications they facilitate.
So many of the server-based computing lessons I learned in the 1990s have served me well in the virtualization arena, particularly in the early stages, when existing server system "best practices" had resulted in an often stunning lack of correlation between the amount of resources committed to a particular application and the actual measured resource utilization. Remember all those single-role, single-application servers, each running on its own physical server? As the "virtualization journey" progressed with customers, my experience with getting a lot of production out of a fixed set of resources has certainly paid dividends for me and my customers.
When we are dealing with servers and the back-office applications that run on virtual servers, the reduction in assigned resources as the physical servers are virtualized and "right-sized" has generally had little negative effect on the user's experience. In fact, it allows us to deploy additional virtual servers to scale out and load balance server applications while still increasing the overall utilization of the available resources in our server rooms and data centers. I submit that, in addition to the widespread over-provisioning of resources to applications, the general user community's lack of connection to or understanding of the various components in our server closets, server rooms, and data centers has given us an incredible margin for error as we correct some of our collective sins of the past. When typical users send an email message to five colleagues, they rarely consider whether that message is stored in one place or five. They rarely understand the impact that the pile of deleted items they keep "just in case" has on the network or storage systems, or the roles of the various servers involved in authenticating users, routing messages, scanning message contents, archiving messages for legal requests, and so on. What they care about is whether messages are being sent and received and whether their email client feels responsive.
Recently, I find myself returning to customer desktops and the personal productivity applications associated with them. Just as in my experience starting in the 1990s with NT 4 Terminal Server and Citrix MetaFrame, and later Presentation Server and XenApp, the individual user experience is the key to success. I have Citrix installations that I designed and built 5-6 years ago that are as solid now as they were when they were deployed. The key then was to standardize the server builds, standardize the application builds, separate the user profiles from the terminal servers as much as possible, eliminate unnecessary files, services, applications, and recurring background network traffic, and optimize every component involved to get the right balance of reliability, performance, and customer experience. We would go to great lengths to optimize OS and application features to gain very small incremental improvements. At that time, the user interfaces provided by Windows NT and Windows 2000 were much more like Windows 95 than like Windows Vista, leaving a lot of room to optimize the systems for performance rather than for individual customer taste and personalization. We also had the constant concern about multiple users sharing the same operating system instance, and often this concern justified the highly managed (you know, "locked down") user environment as a requirement for stability. Often, users who needed (more accurately, expected or demanded) more flexibility or individual control of their computing environment were given a "personal" computer where they had much more freedom to customize and change things without the restrictions imposed by the "shared" environment.
So, now we are in the "year(s?) of desktop virtualization" and expected to provide the consolidation ratios and the centralization of supporting resources touted by vendors and their salespeople, but the game has changed. This time the costs are harder to offset. We can't repurpose recent-vintage desktop PCs as hosts for the virtual desktops and transfer their operating system licenses, as we may have been able to do with server virtualization. If we install our virtual desktop client software on the user's traditional desktop, then we don't have to account for the cost of a replacement physical endpoint device, but we will still have to maintain, manage, and license whatever operating systems and applications remain. If, on the other hand, we decide to replace the traditional desktop PC with some dedicated thin client device, the cost of these devices will be judged against the replacement cost of the traditional desktop PCs. In either case, the cost of the virtual desktop connection brokers, the operating systems and applications running on the virtual desktops, and the underlying virtualization platform, including compute, networking, and storage systems, has to be considered. With server-based applications, the actual network and storage requirements for an individual user are aggregated with those of the other users of the various applications. For many customers (and they don't really have to be very large), the sheer volume of desktop PC resources, in terms of CPU cycles, RAM in GB, network bandwidth per desktop NIC port, and storage capacity in GB and throughput in IOPS, can be staggering compared with their server infrastructure. As a result, the resources required per user to deliver virtual desktops can be much greater than the per-user resources required for server applications.

Think about the requirements for an Exchange system with 500 mailboxes vs. the requirements for 500 virtual desktops with Outlook clients. While we can increase the user mailbox count on the Exchange system by using client-side features like cached mode in Outlook, when we do, we transfer some of the resource requirements, like the storage space for the cached mailbox copy, to our client systems, which now run as VMs requiring resources in our server room or data center, not just out on the office floor with the users. In the example of Outlook cached mode, consider that we now have an additional copy of each user's mailbox being stored on our centralized storage systems. A mailbox quota can now effectively cost twice as much! Get used to this kind of calculation: the desktop resource requirements are transferred from the individual dedicated computers to the shared infrastructure.
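To make that cached-mode example concrete, here is a back-of-the-envelope sketch in Python. The user count and per-mailbox quota are hypothetical placeholders, not figures from any particular deployment; substitute your own numbers.

```python
# Back-of-the-envelope sketch of the cached-mode storage transfer described
# above. All numbers are hypothetical placeholders.

USERS = 500
MAILBOX_QUOTA_GB = 2  # per-user mailbox quota on the Exchange server (assumed)

# Traditional desktops: the Outlook cache (OST file) lives on each PC's local
# disk, so centralized storage holds only the server-side copy of each mailbox.
server_side_gb = USERS * MAILBOX_QUOTA_GB

# Virtual desktops: the cache now lives on the same centralized storage as the
# mailbox database, so each mailbox is effectively stored twice.
cached_copy_gb = USERS * MAILBOX_QUOTA_GB
vdi_total_gb = server_side_gb + cached_copy_gb

print(f"Physical desktops, central storage: {server_side_gb} GB")
print(f"Virtual desktops,  central storage: {vdi_total_gb} GB")
# Physical desktops, central storage: 1000 GB
# Virtual desktops,  central storage: 2000 GB
```

Even before accounting for the virtual desktops' OS images, RAM, and IOPS, the centralized storage footprint for the same 500 mailboxes doubles once the client-side cache moves into the data center.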

To be continued…
