
Why Desktop as a Service?

This morning, I ran across an interesting article over on techtarget.com talking about the advantages of the cloud-hosted desktop model. Among other things, it listed some of the reasons why businesses are deploying DaaS, which align quite well with what we’ve experienced:

  • IaaS - Businesses are finding that as they move their data and server applications into the cloud, the user experience can degrade, because they’re moving farther and farther away from the clients and users who access them. That’s reminiscent of our post a few months ago about the concept of “Data Gravity.” In that post, we made reference to the research by Jim Gray of Microsoft, who concluded that, compared to the cost of moving bytes around, everything else is essentially free. Our contention is that your application execution platform should be wherever your data is. If your data is in the cloud, it just makes sense to have a cloud-hosted desktop to run the applications that access that data.
  • Seasonality - Businesses whose employee count varies significantly over the course of the year may find that the pay-as-you-go model of DaaS makes more sense than building an on-site infrastructure that will handle the seasonal peak.
  • DR/BC - This can be addressed two ways: First, simply having your data and applications in a state-of-the-art data center gives you protection against localized disasters at your office location. If your cloud hosting provider offers data replication to geo-redundant data centers, that’s even better, because you’re also protected against a catastrophic failure of the data center as well. Second, you can replicate the data (and, optionally, even replicate server images) from your on-site infrastructure to a cloud storage repository, and have your hosting provider provision servers and desktops on demand in the event of a disaster - or, although this would cost a bit more, have them already provisioned so they could simply be turned on.
  • Cost - techtarget.com points out that DaaS allows businesses to gain the benefits of virtual desktops without having to acquire the in-house knowledge and skills necessary to deploy VDI themselves. While this is a true statement, it may be difficult to build a reliable ROI justification around it. We’ve found that it often is possible to see a positive ROI if you compare the cost of doing a “forklift upgrade” of servers and server software to the cost of simply moving everything to the cloud and never buying servers or server software again.

It’s worth taking a few minutes to read the entire article on techtarget.com (note - registration may be required to access some content). And, of course, it’s always nice to know we’re not the only ones who think there are some compelling advantages to cloud-hosted desktops!

What Licenses Do I Need….

Earlier this week, I had a long discussion with a client (you know who you are) about what licenses they would need for a deployment of “zero client” devices. We’ve written a lot about Microsoft and Citrix licensing, about XenDesktop and XenApp, about the Citrix trade-up, etc., but it occurred to me that it might be beneficial to pull all the licensing information together into one post instead of expecting you, gentle reader, to have to sort through multiple posts to pull it all together.

So, let’s discuss Citrix licensing first, then move on to the Microsoft licensing.

First of all, if all you want to do is to deploy VDI (Virtual Desktop Infrastructure), and you have a limited number of users, then you should probably purchase VDI-in-a-Box.

If you decide that VDI-in-a-Box is not the right fit for you, the next question you need to answer is whether to use XenApp licenses or XenDesktop licenses. Beginning with the introduction of XenDesktop v4.0, Citrix concluded, reasonably enough, that an organization deploying VDI probably wouldn’t get much leverage from a concurrent-use licensing model, because its concurrency ratio (by which I mean the ratio of total users to concurrent users) would be pretty close to 1:1. So XenDesktop v4.0 was introduced with a per-named-user or per-device license model. These licenses were roughly half the cost of the comparable XenApp concurrent-use license: XenApp Enterprise Edition, for example, carries an MSRP of $450 per concurrent user, while XenDesktop Enterprise Edition carries an MSRP of $225 per user/device.

At the same time, Citrix made the decision to include XenApp rights in the XenDesktop license. So if you buy XenApp, you get only XenApp. But if you buy XenDesktop, you get both XenDesktop and XenApp - so you can use XenApp to stream applications to your virtual desktops, or have your virtual desktops function as client devices that run published applications that execute on the XenApp servers, or simply deploy a mixture of XenDesktop and XenApp to your user community depending on what delivery method is best for a particular use case. This is what Citrix refers to as the “FlexCast” delivery model.

This created the interesting situation where, because of the difference in license cost, if your concurrency ratio was less than 2:1, you were financially better off purchasing XenDesktop licenses even if all you really wanted to run was XenApp. And, since delivering what Citrix calls “hosted shared” desktops from XenApp servers makes more efficient use of the underlying hardware and storage infrastructure, the bias should probably be toward XenApp unless there is a clear use case for why users need to connect to individual desktop OS instances rather than a shared XenApp desktop (and it isn’t just appearance, because with XenApp v6.5 on Windows Server 2008 R2 we can deliver a XenApp desktop that looks and feels like a Windows 7 desktop). But, for the sake of this discussion, let’s move on down the XenDesktop trail.

Citrix has re-introduced a concurrent-use license option for XenDesktop, which is a better choice for organizations that want to deploy both XenDesktop and XenApp but have a concurrency ratio greater than 2:1. So far, though, I haven’t seen very many use cases where that license model made sense.
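To put some rough numbers on that 2:1 break-even point, here is a quick back-of-the-envelope sketch using the MSRP figures mentioned above (list prices only, for illustration; your actual pricing, editions, and maintenance costs will differ):

```python
# Back-of-the-envelope license cost comparison using the MSRP figures above.
# Actual street pricing, editions, and maintenance costs will vary.
XENAPP_CCU_MSRP = 450       # XenApp Enterprise, per concurrent user
XENDESKTOP_UD_MSRP = 225    # XenDesktop Enterprise, per user/device

def license_costs(total_users, concurrent_users):
    """Return (XenApp concurrent cost, XenDesktop user/device cost)."""
    return concurrent_users * XENAPP_CCU_MSRP, total_users * XENDESKTOP_UD_MSRP

# Example: 500 total users, 300 concurrent (a concurrency ratio of about 1.7:1)
xa_cost, xd_cost = license_costs(500, 300)
print(f"XenApp concurrent:      ${xa_cost:,}")   # $135,000
print(f"XenDesktop user/device: ${xd_cost:,}")   # $112,500 (cheaper, and includes XenApp rights)
```

At exactly 2:1 the two models cost the same; above that ratio, the concurrent-use option starts to win on price.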

If you already have XenApp licenses, and want the ability to deliver VDI as well, you can take advantage of the Citrix trade-up program to transform your XenApp licenses into XenDesktop licenses. And if you trade up all of your XenApp licenses, you can get two XenDesktop user/device licenses for each XenApp license. So 250 XenApp licenses would become 500 XenDesktop user/device licenses. If you want more information on how the trade-up program works, and what your trade-up options are, check out the handy Citrix Trade-Up Calculator.

As of the release of XenDesktop v5.0 Feature Release 1, the license server got pretty smart in terms of how it manages those user/device licenses. This is good news for, say, a hospital, which may have devices that are used by multiple users and other users who use multiple devices. The license server can intelligently and dynamically reassign licenses between users and devices to make the most efficient use of the available licenses. For example, consider the following scenario for a brand-new environment where no licenses have yet been assigned (there’s a simplified code sketch of this logic after the list):

  • User 1 logs on from client Device 1. The license server will, by default, check out a license to User 1.
  • User 1 logs off, and User 2 logs on from the same client device. The license server, now sensing that two different users have logged on from the same device, will take the license that was assigned to User 1, and reassign it to Device 1. Any subsequent users who log in from Device 1 will not cause any action by the license server, because Device 1 is already licensed.
  • If User 1 logs on again from a different client device, the license server will again check out a license to User 1 (so, at this point, two licenses are checked out: one to Device 1 and one to User 1). Since User 1 has logged on from two different devices, the license will remain assigned to User 1 unless/until manually released by an administrator (e.g., in the case of the employee leaving the organization), or unless User 1 doesn’t log on for a period of 90 days, in which case it will be automatically released due to inactivity.
  • Likewise, since two different users have logged on from Device 1, that license will remain assigned to that device unless manually released or automatically released due to 90 days of inactivity.
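Here is a highly simplified sketch of that assignment behavior (illustrative Python only; this is not Citrix’s actual implementation, and it ignores the 90-day inactivity release and manual releases):

```python
# Simplified model of XenDesktop user/device license assignment (illustration only;
# the real license server also handles 90-day inactivity and manual releases).
class LicensePool:
    def __init__(self):
        self.user_licenses = set()      # licenses checked out to named users
        self.device_licenses = set()    # licenses checked out to devices
        self.first_user_seen = {}       # device -> first user who logged on from it

    def logon(self, user, device):
        if user in self.user_licenses or device in self.device_licenses:
            return  # this session is already covered by an existing license
        if device not in self.first_user_seen:
            # First logon from this device: check a license out to the user.
            self.first_user_seen[device] = user
            self.user_licenses.add(user)
        else:
            # A second, different user on the same device: move the license to the device.
            self.user_licenses.discard(self.first_user_seen[device])
            self.device_licenses.add(device)

    def licenses_in_use(self):
        return len(self.user_licenses) + len(self.device_licenses)

pool = LicensePool()
pool.logon("User 1", "Device 1")   # license checked out to User 1
pool.logon("User 2", "Device 1")   # license reassigned to Device 1
pool.logon("User 1", "Device 2")   # second license checked out to User 1
print(pool.licenses_in_use())      # 2
```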

So…how do you know how many licenses you really need? There is actually a formula that will tell you that. You need to know how many total users you have (let’s call that number “A”), how many shared devices you have (let’s call that “B”), and how many of your users will use only shared devices (let’s call that “C”). The formula is A - C + B. So, if you have 1,000 total users, 300 shared devices, and 600 of your users will use only shared devices, you need 1,000 - 600 + 300 = 700 total licenses.
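In code, that calculation looks like this (trivial, but handy for sanity-checking your own numbers):

```python
def licenses_needed(total_users, shared_devices, shared_only_users):
    """A - C + B: total users, minus users who only use shared devices, plus shared devices."""
    return total_users - shared_only_users + shared_devices

# The example from the text: 1,000 total users, 300 shared devices,
# 600 users who will only ever work from shared devices.
print(licenses_needed(1000, 300, 600))   # 700
```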

For more information on exactly how this works, see the Citrix Community Blog post by Christophe Catesson, which in turn links to a recorded session from Synergy 2011 that was a deep dive discussion of XenDesktop licensing.

Now for the Microsoft licensing component.

If you have users who will be executing applications on a XenApp server, you will need a Remote Desktop Services (RDS) CAL for that user, or for the client device that user is using. It is very difficult to manage a mixture of user CALs and device CALs in a Remote Desktop Services environment, so, in most cases, you’re going to be better off purchasing user CALs.

If you have users who will be attaching to a virtual desktop instance, the licensing requirements are different, depending on the client device. If the client device is a Windows PC whose Operating System is covered by Software Assurance, you do not have to purchase any additional Microsoft license to use that PC to connect to a virtual desktop. If the client device is not a Windows PC, or that copy of Windows is not covered by Software Assurance, you need a Virtual Desktop Access (VDA) license for that client device. VDA licenses are only available under the Open Value Subscription license model at present, meaning that you will continue to pay for them every year. Forever.

But wait! That’s not all! As Gabe Knuth outlines in a recent article on Techtarget.com, there is a very strange loophole in the VDA license terms. If you have a VDA license for your primary device (or if it’s covered by Software Assurance), you have what Microsoft calls “Extended Roaming Rights,” which allow you to also use your home computer to access your virtual desktop, or use your iPad when you’re at home or traveling. But, technically, it does not entitle you to bring your iPad into the office and use it there! To solve that (using the term “solve” loosely), Microsoft recently announced something called a “Companion Device License” (CDL) which allows you to use up to four other devices (in addition to the primary licensed device) to access your virtual desktop. No word yet on what the CDL will cost.

So let’s see if we can summarize what our client would need for a deployment of “zero client” devices (like, for example, the Wyse Xenith thin client).

  • You’re going to need some kind of Citrix license, either VDI-in-a-Box, XenDesktop, or XenApp.
  • Since the thin client is not a Windows PC, and therefore cannot be covered by Software Assurance, you would need to purchase a Microsoft VDA license for it.
  • If the thin client will be used only to attach to a virtual PC desktop and execute applications within that desktop OS environment, no additional Microsoft license is needed. However, if the thin client will also be used to attach to applications that are executing on a XenApp server - either directly or indirectly by having the Citrix client baked into the virtual PC desktop - you will also need a Microsoft RDS CAL.
  • You do not need an RDS CAL if you are only using XenApp to stream packaged applications to a virtual (or physical, for that matter) desktop for execution there. Because the code runs on that desktop rather than on a Remote Desktop Server, you are not actually using Remote Desktop Services.
  • If you want to institute a BYOD program, where users can bring whatever client device they wish into the office and use it to access your VDI, you’ll probably need some of the new Microsoft CDL licenses.

If I’ve overlooked anything, feel free to submit questions via comments on this post, and we’ll try to get them answered. Let the discussion begin!

Top Ten VDI Mistakes (According to Dan Feller)

Dan Feller is a Lead Architect with the Citrix Consulting group, and has written extensively about XenDesktop. We found his series on the top ten mistakes people make when implementing desktop virtualization to be quite enlightening. In case you missed it, we thought we’d share his “top ten” list here, with links to the individual posts. We would highly recommend that you take the time to read through the series in its entirety:

#10 - Not calculating user bandwidth requirements
Back in the “good old days” of MetaFrame, when we didn’t particularly care about 3D graphics, multimedia content, etc., we could get by with roughly 20 Kbps of network bandwidth per user session. That’s not going to cut it for a virtualized desktop, for a number of reasons that Dan outlines in his blog post. He provides the following estimates for the average bandwidth required both with and without the presence of a pair of Citrix Branch Repeaters (which have some secret sauce that is specifically designed to accelerate Citrix traffic) between the client device and the virtual desktop session:

Parameter | XenDesktop Bandwidth without Branch Repeater | XenDesktop Bandwidth with Branch Repeater
Office Productivity Apps | 43 Kbps | 31 Kbps
Internet | 85 Kbps | 38 Kbps
Printing | 553 - 593 Kbps | 155 - 180 Kbps
Flash Video (with HDX redirection) | 174 Kbps | 128 Kbps
Standard WMV Video (with HDX redirection) | 464 Kbps | 148 Kbps
HD WMV Video (with HDX redirection) | 1812 Kbps | 206 Kbps

NOTE: These are estimates - your mileage may vary!

One thing that should come across loud and clear from the table above is what a huge difference the Citrix Branch Repeater can make in your bandwidth utilization. And as we’ve always said: you only buy hardware once – bandwidth costs go on forever!
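If you want to turn those per-session figures into a rough site-level estimate, a quick sketch like the one below can do the arithmetic (the workload mix and user counts here are made up for illustration; substitute your own assessment data, and remember these are averages, not peaks):

```python
# Rough WAN sizing based on the per-session estimates above (Kbps, no Branch Repeater).
# The workload mix below is purely hypothetical; plug in your own numbers.
PER_SESSION_KBPS = {
    "office_apps": 43,
    "internet": 85,
    "printing": 593,        # worst case of the 553 - 593 Kbps range
    "flash_video": 174,
}

def site_bandwidth_mbps(users_by_workload):
    """Sum per-session bandwidth across all users at a site, in Mbps."""
    total_kbps = sum(PER_SESSION_KBPS[w] * n for w, n in users_by_workload.items())
    return total_kbps / 1000.0

# A hypothetical 50-user branch office:
branch = {"office_apps": 35, "internet": 10, "printing": 3, "flash_video": 2}
print(f"{site_bandwidth_mbps(branch):.1f} Mbps")   # ~4.5 Mbps, before peaks or a Branch Repeater
```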

#9 - Not considering the user profile
It should go without saying that user profiles are important. But if it’s number 9 on the list of things people most often screw up, then apparently it doesn’t. In a nutshell: If you mess up the users’ profiles, the users won’t be happy – logon/logoff performance will suffer, and settings (including personalization) will be lost. If the users aren’t happy, they will be extremely vocal about it, and your VDI deployment will fail for lack of user buy-in and support. There are some great tools available for managing user profiles, including the Citrix Profile Manager and the AppSense Environment Manager. AppSense can even maintain a consistent user experience across platforms – making sure that the user profile is the same regardless of whether the user is logged onto a Windows XP system, a Windows 7 system, or a Windows Server 2008 R2-based XenApp server.

Do yourself a favor and make sure you understand what your users’ profile requirements are, then investigate the available tools and plan accordingly.

#8 - Lack of an application virtualization strategy
How many applications are actually deployed in your organization? Do you even know? Are the versions consistent across all users? Which users use which applications? You have to understand the application landscape before you can decide how you’re going to deploy applications in your new virtualized desktop environment.

You have three basic choices on how to deliver apps:

  1. You can install every application into a single desktop image. That means that whenever an application changes, you have to change your base image, and do regression testing to make sure that the new or changed application didn’t break something else.
  2. You can create multiple desktop images with different application sets in each image, depending on the needs of your different user groups. Now if an application changes, you may have to change and do regression testing on multiple images. It’s worth noting that many organizations have been taking this approach in managing PC desktop images for years…but part of the promise of desktop virtualization is that, if done correctly, you can break out of that cycle. But to do that, you must…
  3. Remove the applications from the desktop image and deliver them some other way: either by running them on a XenApp server, or by streaming the application using either the native XenApp streaming technology or Microsoft’s App-V (or some other streaming technology of your choice).

Ultimately, you may end up with a mixed approach, where some core applications that everyone uses are installed in the desktop image, and the rest are virtualized. But, once again, it’s critical to first understand the application landscape within your organization, and then plan (and test) carefully to determine the best application delivery approach.

#7 - Improper resource allocation
Quoting Dan: “Like me, many users only consume a fraction of their total potential desktop computing power, which makes desktop virtualization extremely attractive. By sharing the resources between all users, the overall amount of required resources is reduced. However, there is a fine line between maximizing the number of users a single server can support and providing the user with a good virtual desktop computing experience.”

This post provides some great guidelines on how to optimize the environment, depending on the underlying hypervisor you’re planning to use.

#6 - Protection from Anti-Virus (as well as protection from viruses)
If you are provisioning desktops from a shared read-only image (e.g., Citrix Provisioning Services), then any virus infection will go away when the virtual PC is rebooted, because changes to the base image – including the virus – are discarded by design. But you still need AV protection, because the virus can use the interval between infection and reboot to propagate itself to other systems. The gotcha here is that the AV software itself can cause serious performance issues if it is not configured properly. Dan provides a great outline in this post for how to approach AV protection in a virtual desktop environment.

#5 - Managing the incoming storm
In most organizations, the majority of users arrive and start logging into their desktops at approximately the same time. What you don’t want is dozens, or hundreds, of virtual desktops trying to start up simultaneously, because it will hammer your virtualization environment. There are some very specific things you need to do to survive the “boot storm,” and Dan outlines them in this post.

#4 - Not optimizing the virtual desktop image
Dan provides several tips on things you should do to optimize your desktop image for the virtual environment. He also has specific sections on his blog that deal with recommended optimizations for Windows 7.

#3 - Not spending your cache wisely
Specifically, we’re talking about configuring the system cache on your Provisioning Server appropriately, depending on the OS and amount of RAM in your Provisioning Server, and the type of storage repository you’re using for your vDisk(s).

#2 - Using VDI defaults
Default settings are great for getting a small Proof of Concept up and running quickly. But as you scale up your VDI environment, there are a number of things you should do. If you ignore them, performance will suffer, which means that users will be upset, which means that your VDI project is more likely to fail.

#1 - Improper storage design
This shouldn’t be a surprise, because we’ve written about this before, and even linked to a Citrix TV video of Dan discussing this very thing as part of developing a reference architecture for an SMB (under 500 desktops) deployment. We’re talking here about how to calculate the “functional IOPS” available from a given storage system, and what that means in relation to the number of IOPS a typical user will need at boot time, logon time, working hours (which will vary depending on the users themselves), and logoff time.
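As a taste of what that calculation involves, here is one common back-of-the-napkin approach to functional IOPS, factoring in the RAID write penalty (a simplified sketch with assumed spindle counts and write ratios; Dan’s material goes much deeper):

```python
# Simplified "functional IOPS" estimate: raw spindle IOPS adjusted for the RAID
# write penalty at a given read/write mix. Numbers below are assumptions for illustration.
RAID_WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(spindles, iops_per_spindle, raid_level, write_ratio):
    raw_iops = spindles * iops_per_spindle
    penalty = RAID_WRITE_PENALTY[raid_level]
    return raw_iops / ((1 - write_ratio) + write_ratio * penalty)

# Example: 16 x 15K SAS drives (~175 IOPS each), RAID10, 80% writes
# (steady-state VDI workloads tend to be very write-heavy).
usable = functional_iops(16, 175, "RAID10", 0.8)
print(round(usable))   # ~1556 functional IOPS
# Divide by your per-user steady-state IOPS estimate to get a rough supportable user count.
```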

Just to round things out, Dan also tossed in a few “honorable mentions,” like the improper use of NIC teaming or not optimizing the NIC configuration in Provisioning Servers, trying to provision images to hardware with mismatched hardware device drivers (generally not an issue if you’re provisioning into a virtual environment), and failing to have a good business reason for launching a VDI project in the first place.

Again, this post was intended to whet your appetite by giving you enough information that you’ll want to read through Dan’s individual “top ten” posts. We would heartily recommend that you do that - you’ll probably learn a lot. (We certainly did!)

How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough that I’ve seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. Didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In all of its incarnations, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their Operating Systems to run it on and their software to manage it. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDnet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July 2008. Their northern Virginia data center was affected by a lightning strike in July 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December 2009.
  • Gmail experienced a system-wide outage in August 2008, then was down again for over 1 ½ hours in September 2009.
  • The Microsoft/Danger outage in October 2009 caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January 2010, the failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas-area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.

The Future Is Now

I recently discovered a video on “Citrix TV” that does as good a job as I’ve ever seen in presenting the big picture of desktop and application virtualization using XenApp and XenDesktop (which, as we’ve said before, includes XenApp now). The entire video is just over 17 minutes long, which is longer than most videos we’ve posted here (I prefer to keep them under 5 minutes or so), but in that 17 minutes, you’re going to see:

  • How easy it is for a user to install the Citrix Receiver
  • Self-service application delivery
  • Smooth roaming (from a PC to a MacBook)
  • Application streaming for off-line use
  • A XenDesktop virtual desktop following the user from an HP Thin Client…
    • …to an iPad…
    • …as the iPad switches to 3G operation aboard a commuter train…
    • …to a Mac in the home office…
    • …to a Windows multi-touch PC in the kitchen…
    • …to an iPhone on the golf course.
  • And a demo of XenClient to wrap things up.

I remember, a few years ago, sitting through the keynote address at a Citrix conference and watching a similar video on where the technology was headed. But this isn’t smoke and mirrors, and it isn’t a presentation of some future, yet-to-be-released technology. All of this functionality is available now, and it’s all included in a single license model. The future is here. Now.

I think you’ll find that it’s 17 minutes that are well-spent: