Category Archives: VDI-in-a-Box

Some Straight Talk about VDI-in-a-Box

Update: The advent of solid-state drives allows you to eliminate IOPS as a potential bottleneck. The calculations below are based on 15K SAS drives that support roughly 175 IOPS each. A typical 200 GB SSD will support tens of thousands of IOPS. On the other hand, although SSD prices are coming down, they're still rather pricey. Replacing the eight 146 GB, 15K SAS drives in the example below with eight 200 GB SSDs, and loading the server up with RAM so you can support more virtual desktops, will push its price to nearly $20,000. So the primary point of this post still stands: while VDI-in-a-Box is a great product, and can be competitive with physical PCs when the entire lifecycle cost is compared, you're just not going to see significant savings in the capital expense of ViaB vs. physical PCs. That doesn't mean you shouldn't consider it; it just means that you need to validate what it's really going to cost in your environment.

Original Post (April, 2012):
There is a lot of buzz about Citrix VDI-in-a-Box ("ViaB"), and rightly so: it's a great product, and much simpler to install and easier to scale than a full-blown XenDesktop deployment. You don't need a SAN, you don't need special broker servers, and you don't need a separate license server or a SQL Server to hold configuration data. Unfortunately, some of the buzz - particularly the cost comparisons that show a $3,000 - $4,000 server supporting 30 or more virtual desktops - is misleading. So let's talk seriously about the right way to deploy ViaB. For this exercise, I'm going to assume we need 50 virtual desktops. Once we've worked through this, you should be able to duplicate the exercise for any number you want.

First of all, I'm going to assume that we are building a system that will support Windows 7 virtual desktops - because I can't see any valid reason why someone would invest in a virtual desktop infrastructure that couldn't support Windows 7. There are two important data points that follow from this: (1) We should allow at least 1.5 GB of RAM per virtual PC, and preferably 2 GB. (2) We should design for an average of about 15 IOPS per Windows 7 virtual PC, because, depending on the user, a Windows 7 desktop will generate 10 - 20 IOPS. Let's tackle the IOPS issue first.

Thanks to Dan Feller of Citrix, we know how to calculate the “functional IOPS” of a given disk subsystem. Here are the significant factors that go into that formula:

  • A desktop operating system - unlike a server operating system - generates an I/O mix of roughly 80% writes and 20% reads.
  • A 15K SAS drive will support approximately 175 IOPS. The total “raw IOPS” of a disk array built from 15K SAS drives is simply 175 x the number of drives in the array.
  • A RAID 10 array, which probably offers the best balance of performance and reliability, has a “write penalty” of 2.

With that in mind, the formula is:

Functional IOPS = ((Total Raw IOPS x Write %) / RAID Write Penalty) + (Total Raw IOPS x Read %)

If we put eight 15K SAS drives into a RAID 10 array, the formula becomes:

Raw IOPS = 175 x 8 = 1,400

Functional IOPS = ((1,400 x 0.8) / 2) + (1,400 x 0.2) = 560 + 280 = 840

If we are assuming an average of 15 IOPS per Win7 virtual PC, this suggests that the array in question will support roughly 56 virtual PCs. So this array should be able to comfortably support our 50 Win7 virtual PCs, unless all 50 are assigned to power users.
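If you want to plug in your own drive counts, here's that math as a quick Python sketch (the 175 IOPS per drive, the 80/20 write/read split, and the RAID 10 write penalty of 2 are the assumptions listed above):

```python
# Functional IOPS calculation, using the assumptions above:
# 175 IOPS per 15K SAS drive, 80% writes / 20% reads, RAID 10 write penalty of 2.
def functional_iops(drives, iops_per_drive=175, write_pct=0.8, raid_write_penalty=2):
    raw = drives * iops_per_drive
    return (raw * write_pct) / raid_write_penalty + raw * (1 - write_pct)

iops = functional_iops(8)      # eight 15K SAS drives in a RAID 10 array
print(iops)                    # 840.0
print(iops // 15)              # 56.0 -> roughly 56 Win7 desktops at 15 IOPS each
```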

That's all well and good, but we haven't talked yet about how much actual storage space this array needs. That depends on the size of our Win7 master image, how many different Win7 master images we're going to be using, and whether we can use "linked clones" for VDI provisioning, in which case each virtual PC will consume an average of 15% of the size of the master, or whether we're permanently assigning desktops to users, in which case each virtual PC will consume 100% of the size of the master. For the sake of this exercise, let's assume we're using linked clones, and that we have three different master images, each of which is 20 GB in size. According to the Citrix best practice, we need to reserve 120 GB for our master images (2 x master image size x number of master images). We then need to reserve 3 GB per virtual PC (15% of 20 GB), which totals another 150 GB. The ViaB virtual appliance will require 70 GB. We also need room for the hypervisor itself (unless we're provisioning another set of disks just for that) and for the swap file, transient activity, etc., so let's throw in another 150 GB. That's 490 GB minimum. So we need to use, at a minimum, 146 GB drives in our array, which would give us 584 GB of usable space in our RAID 10 array.
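For what it's worth, here's the same storage arithmetic as a small Python sketch, using the assumptions above (three 20 GB masters, 50 linked clones at 15% of the master size, 70 GB for the ViaB appliance, and 150 GB of padding for the hypervisor, swap, and transient activity):

```python
# Storage sizing per the paragraph above. All of the inputs are this post's
# assumptions; adjust them to match your own images and desktop counts.
def storage_needed_gb(masters=3, master_size_gb=20, desktops=50,
                      linked_clone_pct=0.15, viab_appliance_gb=70, overhead_gb=150):
    master_reserve = 2 * master_size_gb * masters                # 120 GB (Citrix best practice)
    clone_space = desktops * linked_clone_pct * master_size_gb   # 150 GB of linked clones
    return master_reserve + clone_space + viab_appliance_gb + overhead_gb

print(storage_needed_gb())     # 490.0 GB minimum
print(8 * 146 / 2)             # 584.0 GB usable from eight 146 GB drives in RAID 10
```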

How about RAM? If we allow 1.5 GB per Win7 desktop, then 50 virtual desktops will consume 75 GB. We need at least 1 GB for the ViaB appliance, at least 1 GB for the hypervisor, plus some overhead for server operations, so let's just call it 96 GB.

We can handle 6 to 10 virtual desktops per CPU core - more if the cores are hyper-threaded - so we’re probably OK with a dual-proc, quad-core server.
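The RAM and CPU rules of thumb are simple enough to fold into the same kind of sketch (again, these are the assumptions above, not hard limits):

```python
# RAM and CPU sanity check: 1.5 GB per Win7 desktop, roughly 1 GB each for the
# ViaB appliance and the hypervisor, and 6-10 desktops per physical core.
desktops = 50
ram_gb = desktops * 1.5 + 1 + 1        # 77 GB before general server overhead -> spec 96 GB
cores_needed = (desktops / 10, desktops / 6)
print(ram_gb)                          # 77.0
print(cores_needed)                    # (5.0, 8.33...) -> a dual-proc, quad-core box is fine
```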

Now, I don't know about you, but if I'm going to put 50 users onto a single server, I'm going to want some redundancy. I will at least want hot-plug redundant power supplies and hot-plug disk drives. Ideally, I would provision "N+1" redundancy, i.e., I would have one more server in my ViaB array than I need to support my users. I'm also going to want a remote access card, and probably an uplift on the manufacturer's warranty so that if it breaks, the manufacturer will come on site and fix it.

By now, you’ve probably figured out that we are not talking about a $4,000 server here. I priced out a Dell R710 - using their public-facing configuration and quoting tool - with the following configuration, and it came out to roughly $11,000:

  • Two Intel E5640 quad-core, hyper-threaded processors, 2.66 GHz
  • 96 GB RAM
  • Eight 146 GB, 15K SAS drives
  • PERC H700 controller with 512 MB cache
  • Redundant hot-plug power supplies
  • iDRAC Enterprise remote access card
  • Warranty uplift to a 3-year, 24×7, 4-hour response, on-site warranty

(NOTE: This is a point-in-time price, and hardware prices are subject to change at any time.)

The ViaB licenses themselves will cost you $195 each. Be careful of the comparisons that show the price as $160 each. ViaB is unique among Citrix products in that the base cost of the license does not include the first year of Subscription Advantage - yet the purchase of that first year is required (although you don’t necessarily have to renew it in future years). That adds $35 each to the cost of the licenses.

Finally, if you don't have Microsoft Software Assurance on your PC desktops - and my experience is that most SMBs do not - you need to factor in the Microsoft Virtual Desktop Access (VDA) license for every user. This license is only available as an annual subscription, and will cost you approximately $100/year.

So, your up-front acquisition cost for the system we’ve been discussing looks like this:

  • Dell R710 server - $11,000
  • 50 ViaB licenses @ $195 - $9,750
  • 50 Microsoft VDA licenses @ $100 - $5,000

Total acquisition cost: $25,750, or $515/user. Not bad.

But wait - if we're going to compare this to the cost of buying new PCs, shouldn't we look at the cost of ViaB over the same period of time that we would expect those new PCs to last? If we assume, as many companies do, that a PC has a useful life of about 3 years, then we should actually factor in another two years of VDA licenses, and two years of Subscription Advantage renewals for the ViaB licenses. That pushes the 3-year cost of the ViaB licenses to $13,250, and the cost of the VDA licenses to $15,000. So the total 3-year cost of our solution is $39,250, or $785/user.

If you want N+1 redundancy, you’re going to need to buy a second server. That would push the cost to $50,250, or $1,005/user.

What conclusions can we draw from all this? Well, first, that VDI-in-a-Box is not going to be significantly less expensive than buying new PCs if you actually do it right. However, it is competitive with the price of new PCs, and that's worth noting: as long as the price is comparable, we can start talking about the business advantages of VDI, such as being able to remotely access your virtual desktop from anywhere, with just about any device, including iPads and Android tablets, and about the ongoing management advantages of having a single point of control over multiple desktops.

Also, as you scale up the environment, the incremental cost of the extra server required for N+1 redundancy gets spread over more and more users, and becomes less significant. For example, if we're building an infrastructure that will support 150 virtual desktops, we would need four servers. Total 3-year cost: $128,750, or $858.33/user for a robust, highly redundant virtual desktop infrastructure. In my opinion, that's a pretty compelling price point, and you won't be able to hit it with a 150-user XenDesktop deployment, because of the other server and storage infrastructure components that you need to build a complete solution. On the other hand, XenDesktop does include more functionality, like the rights to use XenApp for virtual application delivery, the ability to stream a desktop OS to a blade PC or a desktop PC, the rights to use XenClient for client-side virtualization, etc.
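If you want to run these numbers for a different user count, here's the whole 3-year cost model as a short Python sketch. It uses the same assumptions as the figures above: an $11,000 server for every 50 desktops, one spare server for N+1 redundancy, $195 ViaB licenses with $35/year Subscription Advantage renewals in years two and three, and $100/year per VDA license.

```python
import math

# 3-year acquisition cost model built from the assumptions in this post.
def three_year_cost(users, server_cost=11000, users_per_server=50, n_plus_one=True,
                    viab_license=195, sa_renewal_per_year=35, vda_per_year=100):
    servers = math.ceil(users / users_per_server) + (1 if n_plus_one else 0)
    viab = users * (viab_license + 2 * sa_renewal_per_year)   # year-one SA is in the $195
    vda = users * vda_per_year * 3                            # VDA is an annual subscription
    return servers * server_cost + viab + vda

for users, n1 in [(50, False), (50, True), (150, True)]:
    total = three_year_cost(users, n_plus_one=n1)
    print(users, "N+1" if n1 else "single server", total, round(total / users, 2))
# 50 single server 39250 785.0
# 50 N+1 50250 1005.0
# 150 N+1 128750 858.33
```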

But if all you want is a VDI solution, ViaB is, in my opinion, the obvious choice. It’s clear that Citrix wants to position VDI-in-a-Box as the preferred VDI solution for SMBs, meaning anyone with 250 or fewer users…and there’s no reason why ViaB can’t scale much larger than that.

For more information on ViaB, check out this video from Citrix TV, then head on over to the Citrix TV site to view the entire ViaB series.

**** EDIT April 12, 2012 ****
You may already be aware of this, but Dell has announced a ViaB appliance that comes pre-configured, with both XenServer and the ViaB virtual appliance already installed. Oddly enough, even though Moose Logic is a Dell partner, I couldn’t get Dell to tell me what one would cost. Their answer was that I should call back when I had a specific customer need, and they would work up a specific configuration and quote it. I considered calling back with a fictitious customer requirement, but decided that I didn’t want to know badly enough to play that game.

They did, however, tell me what the basic server configuration was - and it was very close to the configuration I've outlined above: two X5675 processors, 96 GB of RAM, eight 146 GB drives in a RAID 10 array, a PERC H700 array controller (I don't know how much cache, though), and an iDRAC Enterprise remote access card. I do not know whether it has redundant power supplies (although I would certainly hope so), nor exactly what warranty is included…perhaps that option is left up to the customer.

That gave me at least enough information to run a sanity check on the configuration. Using the formula above, the array would provide 840 functional IOPS, which works out to roughly 10 IOPS per desktop on an 80-user system - which is how the appliance is advertised - so it should be adequate, depending, of course, on the percentage of power users. Also, the array should provide enough storage to handle the needs of most SMBs, unless they have an unusually large number of images to maintain.

One of my Citrix contacts recently told me that the Dell appliance was priced at $440/desktop for an 80 concurrent user configuration, which is very much in line with the cost per user in the post above, considering that $100 of my $515/user number was for the first year of Microsoft VDA licenses, which, to my knowledge, are not included with the Dell appliance.

More on VDI-in-a-Box and Personal vDisks

In my last post, I talked about the new release (v5.1) of VDI-in-a-Box (“ViaB”) and some of the new features. One of those new features is the addition of support for Personal vDisks (“PVDs”). But before we get any deeper into the subject, let’s take a step back, and make sure we’re clear on why you would want PVDs in the first place.

Dedicating a VM to every user consumes a lot of storage space - and, even though local storage on a ViaB server is not as expensive as SAN storage, it’s still more expensive than the storage on a desktop PC. Plus you still have the same headaches of managing and updating every individual PC. That’s why we prefer to provision virtual desktops from a common master image. It takes up far less disk space, and when you update your master image, all of the virtual desktops that are provisioned from that master image get updated the next time they reboot.

On the other hand, since the master image is a read-only image, the user has no ability to make persistent changes to the VM. In a Citrix environment, we generally handle this by using the Citrix Profile Management tool (which is included with ViaB), which allows us to write user profile changes to a shared folder somewhere else on the network, so that unique user profile data will survive changes to the underlying master provisioning image.

But there’s only so much you can do with user profiles - and one of the things you can’t do is allow users to install their own applications. Personal vDisks, which were first introduced in XenDesktop v5.6, are intended to give you the best of both worlds, by creating a persistent virtual disk that is unique to the user and that is, in simplistic terms, merged with the provisioned image at logon time. The PVD can be used to store user-installed applications, user profile data, and even user files, if you wish. So you get to provision from a common master image and give users the personalization they want.

But there's an interesting wrinkle in the way PVDs work in ViaB v5.1. One of the major selling points of ViaB is simplicity: you don't need all the supporting server infrastructure that a full-blown XenDesktop deployment requires, and you don't need shared storage. That, in turn, keeps the cost down. But while ViaB is smart enough to replicate your master desktop images to all of the servers in your ViaB grid, PVDs are not replicated across the grid. Instead, a given user's PVD gets created on the local storage of whichever server in the grid that user happens to land on at first logon - and it stays there. ViaB will then ensure that, for all subsequent logons, that user is directed to the specific server that contains the PVD.

That’s a great solution…unless that server fails, in which case the PVDs on that server are no longer available, and their associated personal desktops are broken. In all of their presentations on ViaB v5.1, Citrix glosses over this point by stating that they recommend that you periodically back up the users’ PVDs, and that they have documented the steps required to do so. And they have. Here is the process for backing up and restoring PVDs, assuming that you’re running XenServer as your underlying hypervisor, straight from CTX134792 in the Citrix Knowledge Base:

  1. From the ViaB management console, look up the personal desktop that you want to back up. Note which server in your ViaB grid is hosting this desktop.
  2. Note the personal disk name of the personal desktop you are backing up. This will have an intuitive name like "Windows7x64p1386d2eaab7."
  3. Ensure that the personal desktop is shut down.
  4. Move to XenCenter (the management tool that comes with XenServer). From within XenCenter, navigate to the XenServer in your grid that is hosting the desktop, and use the XenCenter "Export" function to export a copy of that VM. Where will you export it to? Well, assuming that you don't have shared storage, you're going to have to export it to some kind of storage repository that the XenServer can see and that you can later move to a different XenServer in your grid…like an external USB-attached hard disk.
  5. If the ultimate bad happens, and you have to restore the backed-up PVD, you need to again fire up XenCenter and navigate to a surviving server in your grid that you plan to use to restore service. Use the XenCenter "Import" function to import the VM from your backup storage repository.
  6. Once it's imported, select the VM in XenCenter, and select the Storage tab. You will see that the VM consists of two disks, one of which will match the name that you made note of way back in step 2. "Detach" that disk from the VM, and then delete the VM. That will leave you with a virtual disk (your PVD) that is not associated with any VM.
  7. Now go back to the ViaB management console, select the "User Sessions" tab, and, under the "Actions" link for the non-functional desktop, select "Repair." ViaB will scan the data stores of each server in the grid until it finds the PVD, and prompt you for confirmation that this is what you really want to do. When you confirm, ViaB will destroy the remains of the non-functional desktop, create a new linked clone on the server that now contains the associated PVD, and attach the PVD to the new linked clone. The user can now log in again.

Bear in mind that you will have to follow this manual backup procedure for every individual PVD that you have in your environment, and, likewise, if you ever have to restore PVDs, you will have to manually restore them one at a time.
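If you're determined to use PVDs anyway, you can at least script the export half of this. Here's a rough sketch using XenServer's xe command line, run on the host that owns the desktops; the desktop names and backup path are placeholders (get the real names from the ViaB console as in steps 1 and 2), and the import/detach/repair steps still have to be done by hand if a server dies:

```python
import subprocess

# Sketch of scripting step 3 (shutdown) and step 4 (export) with the xe CLI.
# The desktop names and the backup path below are hypothetical placeholders.
DESKTOPS = ["Windows7x64p1386d2eaab7"]   # personal desktops hosted on this server
BACKUP_PATH = "/mnt/usb-backup"          # e.g. a USB-attached disk mounted on the host

for name in DESKTOPS:
    # Step 3: make sure the desktop is shut down (this errors harmlessly if it already is).
    subprocess.run(["xe", "vm-shutdown", f"vm={name}"], check=False)
    # Step 4: export the whole VM, including its personal vDisk, as an .xva file.
    subprocess.run(["xe", "vm-export", f"vm={name}",
                    f"filename={BACKUP_PATH}/{name}.xva"], check=True)
```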

Show of hands: anybody else out there think that this might be just a tiny bit onerous?

Now, if you do happen to have a SAN, you could attach a unique LUN to each server in your ViaB grid, and, provided you’re using Hyper-V or VMware as the underlying hypervisor, use that LUN for storing your master images and PVDs. (ViaB on XenServer does not support multiple datastores, according to http://forums.citrix.com/thread.jspa?threadID=312411&tstart=0.) Then you could conceivably move that LUN to a different server in your grid in the event of a server failure, and have all your PVDs back…although you still need to do periodic backups of the PVDs (maybe with SAN snapshots?), because it is possible that a PVD could get corrupted if the ViaB host goes down while the desktop OS is in the process of writing to the PVD. And, as we’ve said earlier, you’ve now negated one of the big advantages of ViaB - no shared storage requirement.

Here’s the bottom line, in my opinion: Until they come up with a way to automate the backup and restore process for PVDs, or, better yet, find a way to replicate PVDs across the grid, PVDs should be used sparingly (if at all) with ViaB. And do not use PVDs to store user profile data or user-generated files. Instead, use the Citrix Profile Manager to handle the profile data, and standard policy tools like “My Documents” redirection to direct user-generated files to a shared folder outside of your ViaB grid. That way, your worst-case scenario is that your user may have to reinstall whatever user-installed applications may have been lost when the PVD disappeared.

Finally, please note that these concerns are not applicable to a full XenDesktop deployment. With XenDesktop, you will have some kind of shared storage, and your PVDs will live on that shared storage. So: XenDesktop, PVDs are great; ViaB, not so much.

Citrix Releases VDI-in-a-Box v5.1

We just learned that Citrix has released VDI-in-a-Box (“ViaB”) v5.1. There are a number of new features in ViaB v5.1, which you can read about in the Citrix on-line documentation library, but two of these features are particularly significant:

  • Personal vDisks - This feature was introduced in the most recent release of XenDesktop, but until now was not available in ViaB. It pretty much eliminates the need to ever provision dedicated virtual desktops for anyone, because the personal vDisk can store user data, personalization information, and even user-installed applications that then get merged at logon time with the VM that’s provisioned from your master image. You can update your master provisioning image at will without affecting what’s stored in the users’ personal vdisks.
  • Virtual IP - In prior versions of ViaB, users needed to explicitly point a browser at the IP address of one of the servers in your ViaB grid. If that server failed, they needed to explicitly point a browser at a different server in the grid. That obviously creates an opportunity for user confusion. The only way around it was to have some kind of load-balancer (e.g., NetScaler) in front of your ViaB grid. But with v5.1, your grid now has a virtual IP address. That virtual IP address is initially serviced by one of the servers in the grid, but if that server fails, another server will automatically take over.

There are several other feature enhancements, including tighter integration with the NetScaler-powered CAG Enterprise, support for HDX v5.6 Feature Pack 1, support for virtual desktops with multiple virtual CPUs, etc., and you can read all about them at the documentation link provided above. But the addition of personal vDisks and a grid-wide virtual IP address take care of what were, in our opinion, the two biggest things that ViaB was lacking compared to its big brother, XenDesktop. Well played, Citrix.