Tag Archives: Cloud Computing

How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?
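To make that estimation method concrete, here's a minimal back-of-the-envelope sketch in Python. Both input figures are placeholder assumptions for illustration, not Microsoft's published numbers.

```python
# Back-of-the-envelope subscriber estimate from an annual revenue run-rate.
# Both inputs are placeholder assumptions, not Microsoft's actual figures.
annual_run_rate = 2_500_000_000   # hypothetical annual run-rate, in dollars
avg_fee_per_user_month = 12.50    # hypothetical average monthly fee per user

subscribers = annual_run_rate / (avg_fee_per_user_month * 12)
print(f"~{subscribers / 1e6:.1f} million users")  # ~16.7 million with these inputs
```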

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times the number of megabytes in your storage. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records (see the short sketch after this list). If that limit is exceeded, the oldest records in the recycle bin get deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to recover the purged records using the DataLoader tool, but the window of time is very brief. Exactly how long you have is not well documented, but research indicates it’s around 24 – 48 hours.
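To put numbers on the record-limit rule above, here's a minimal Python sketch of the arithmetic, using the 25-records-per-MB multiplier from the Salesforce.com help site and the 500 MB example from the list:

```python
def recycle_bin_capacity(storage_mb: int) -> int:
    """Salesforce recycle bin limit: 25 records per MB of org storage."""
    RECORDS_PER_MB = 25
    return storage_mb * RECORDS_PER_MB

# Example from the text: 500 MB of storage
print(recycle_bin_capacity(500))  # 12500 records
```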

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
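As a rough illustration of that pain, here's a minimal Python sketch of the filter-then-reimport workflow. The file name, column name, and record IDs are all hypothetical; the actual columns depend on which Salesforce object you exported.

```python
import csv

# Hypothetical IDs of the deleted records we want back; everything else is dropped.
ids_to_restore = {"0013000000AbCdE", "0013000000FgHiJ"}

with open("accounts.csv", newline="", encoding="utf-8") as src, \
     open("accounts_to_restore.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Id"] in ids_to_restore:  # keep only the records to re-import
            writer.writerow(row)

# accounts_to_restore.csv can then be imported back into Salesforce.com.
```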

The other alternative is to use one of several third-party backup services to back up your Salesforce.com data. There are several advantages to using a third-party tool: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at VirtualQube are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact sales@virtualqube.com, or just click the “Request a Quote” button on this page.

Cloud-Based VDI vs. DaaS - Is There a Difference?

With nearly all new technologies in the IT space comes confusion over terminology. Some of the confusion is simply because the technology is new, and we’re all trying to understand how it works and how – or whether – it fits the needs of our businesses. Unfortunately, some of the confusion is often caused by technology vendors who want to find a way to label their products in a way that associates them with whatever is perceived as new, cool, innovative, cutting-edge, etc. Today, we’re seeing that happen with terms like “cloud,” “DaaS,” and “VDI.”

“VDI” stands for Virtual Desktop Infrastructure. Taken literally, it’s an infrastructure that delivers virtual desktops to users. What is a virtual desktop? It is a (usually Windows) desktop computing environment where the user interface is abstracted and delivered to a remote user over a network using some kind of remote display protocol such as ICA, RDP, or PCoIP. That desktop computing environment is most often virtualized using a platform such as VMware, Hyper-V, or XenServer, but could also be a blade PC or even an ordinary desktop PC. If the virtual desktop is delivered by a service provider (such as VirtualQube) for a monthly subscription fee, it is often referred to as “Desktop as a Service,” or “DaaS.”

There are a number of ways to deliver a virtual desktop to a user:

  • Run multiple, individual instances of a desktop operating system (e.g., Windows 7 or Windows 8) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. Citrix XenDesktop, VMware View, and Citrix VDI-in-a-Box are all products that enable this model.
  • Run multiple, individual instances of a server operating system (e.g., Server 2008 R2 or Server 2012 R2) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. In such a case, a policy pack can be applied that will make the 2008 R2 desktop look like Windows 7, and the 2012 R2 desktop look like Windows 8. In a moment we’ll discuss why you might want to do that.
  • Run multiple, individual desktops on a single, shared server operating system, using Microsoft Remote Desktop Services (with or without added functionality from products such as Citrix XenApp). This “remote session host,” to use the Microsoft term, can be a virtual server or a physical server. Once again, the desktop can be made to look like a Windows 7 or Windows 8 desktop even though it’s really a server OS.
  • Use a brokering service such as XenDesktop to allow remote users to connect to blade PCs in a data center, or even to connect to their own desktop PCs when they’re out of the office.
  • Use client-side virtualization to deliver a company-managed desktop OS instance that will run inside a virtualized “sandbox” on a client PC, such as is the case with Citrix XenClient, or the Citrix Desktop Player for Macintosh. In this case, the virtual desktop can be cached on the local device’s hard disk so it can continue to be accessed after the client device is disconnected from the network.

Although any of the above approaches could be lumped into the “VDI” category, the common usage that seems to be emerging is to use the term “VDI” to refer specifically to approaches that deliver an individual operating system instance (desktop or server) to each user. From a service provider perspective, we would characterize that as cloud-based VDI. So, to answer the question we posed in the title of this post, cloud-based VDI is one variant of DaaS, but not all DaaS is delivered using cloud-based VDI – and for a good reason.

Microsoft has chosen not to put its desktop operating systems on the Service Provider License Agreement (“SPLA”). That means there is no legal way for a service provider such as VirtualQube to provide a customer with a true Windows 7 or Windows 8 desktop and charge by the month for it. The only way that can be done is for the customer to purchase all the licenses that would be required for their own on-site VDI deployment (and we’ve written extensively about what licenses those are), and provide those licenses to the service provider, which must then provision dedicated hardware for that customer. That hardware cannot be used to provide any services to any other customer. (Anyone who tells you that there’s any other way to do this is either not telling you the truth, or is violating the Microsoft SPLA!)

Unfortunately, the requirement for dedicated hardware will, in many cases, make the solution unaffordable. Citrix recently published the results of a survey of Citrix Service Providers. They received responses from 718 service providers in 25 countries. 70% of them said that their average customer had fewer than 100 employees. 40% said their average customer had fewer than 50 employees. It is simply not cost-effective for a service provider to dedicate hardware to a customer of that size, and unlikely that it could be done at a price the customer would be willing to pay.

On the other hand, both Microsoft and Citrix have clean, easy-to-understand license models for Remote Desktop Services and XenApp, which is the primary reason why nearly all service providers, including VirtualQube, use server-hosted desktops as their primary DaaS delivery method. We all leverage the policy packs that can make a Server 2008 R2 desktop look like a Windows 7 desktop, and a 2012 R2 desktop look like a Windows 8 desktop, but you’re really not getting Windows 7 or Windows 8, and Microsoft is starting to crack down on service providers who fail to make that clear.

Unfortunately, there are still some applications out there that will not run well – or will not run at all – in a remote session hosted environment. There are a number of reasons for this:

  • Some applications check for the OS version as part of their installation routines, and simply abort the installation if you’re trying to install them on a server OS.
  • Some applications will not run on a 64-bit platform – and Server 2008 R2 and 2012 R2 are both exclusively 64-bit platforms.
  • Some applications do not follow proper programming conventions, and insist on doing things like writing temp files to a hard-coded path like C:\temp. If you have multiple users running that application on the same server via Remote Desktop Services, and each instance of the application is trying to write to the same temp file, serious issues will result. Sometimes we can use application isolation techniques to redirect the writes to a user-specific path (see the sketch after this list), but sometimes we can’t.
  • Some applications are so demanding in terms of processor and RAM requirements that anyone else trying to run applications on the same server will experience degraded performance.
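To illustrate the hard-coded-path problem (the third bullet), here's a minimal Python sketch contrasting a shared path with a per-user path. The file name is hypothetical, and real application isolation tools do this redirection at the file-system layer rather than in the application's own code:

```python
import getpass
import os
import tempfile

# Problematic: every Remote Desktop session on the server collides on one file.
bad_path = r"C:\temp\app_scratch.dat"

# Safer: a per-user directory, so concurrent sessions don't clash.
# On Windows, tempfile.gettempdir() typically resolves under the user's profile.
user_dir = os.path.join(tempfile.gettempdir(), getpass.getuser())
os.makedirs(user_dir, exist_ok=True)
good_path = os.path.join(user_dir, "app_scratch.dat")

print(bad_path)   # shared by all sessions -> contention and corruption
print(good_path)  # unique per user -> safe on a remote session host
```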

There’s not much that a service provider can do to address the first two of these issues, short of going the dedicated-hardware route (for those customers who are large enough to afford it) and provisioning true Windows 7 or Windows 8 desktops. But there is a creative solution for the third and fourth issues, and that’s to use VDI technology to provision individual instances of Server 2008 R2 or Server 2012 R2 per user. From the licensing perspective, it’s no different than supporting multiple users on a remote session host. Once the service provider has licensed a virtualization host for Windows Datacenter edition, there is no limit to the number of Windows Server instances that can be run on that host – you can keep spinning them up until you don’t like the performance anymore. And the Citrix and Microsoft user licensing is the same whether the user has his/her own private server instance, or is sharing the server OS instance with several other users via Remote Desktop Services.

On the positive side, this allows an individual user to be guaranteed a specified amount of CPU and RAM to handle those resource-intensive applications, avoids “noisy neighbor” issues where a single user impacts the performance of other users who happen to be sharing the same Remote Desktop Server, and allows support of applications that just don’t want to run in a multi-user environment. It’s even possible to give the user the ability to install his/her own applications – this may be risky in that the user could break his/her own virtual server instance, but at least the user can’t affect anyone else.

On the negative side, this is a more expensive alternative simply because it is a less efficient way to use the underlying virtualization host. Our tests indicate that we can probably support an average of 75 individual virtual instances of Server 2008 or Server 2012 for VDI on a dual-processor virtualization host with, say, 320 GB or so of RAM. We can support 200 – 300 concurrent users on the same hardware by running multiple XenApp server instances on it rather than an OS instance per user. The sketch below puts rough numbers on that difference.
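For a back-of-the-envelope feel for that efficiency gap, here's a small Python sketch using the density figures above. The monthly host cost is a made-up placeholder, not a real VirtualQube price:

```python
# Hypothetical fully loaded monthly cost of one dual-processor host (~320 GB RAM).
host_cost_per_month = 3000.00  # placeholder assumption, not a real price

vdi_users_per_host = 75    # one server-OS instance per user (from our tests)
rds_users_per_host = 250   # midpoint of the 200 - 300 XenApp figure

print(f"VDI cost/user/month: ${host_cost_per_month / vdi_users_per_host:.2f}")  # $40.00
print(f"RDS cost/user/month: ${host_cost_per_month / rds_users_per_host:.2f}")  # $12.00
# With these assumptions, per-user VDI costs roughly 3.3x the RDS equivalent.
```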

That said, we believe there are times when the positives of cloud-based VDI are worth the extra money, which is why we offer both cloud-based VDI and remote session hosted DaaS powered by Remote Desktop Services and XenApp.

Why Desktop as a Service?

This morning, I ran across an interesting article over on techtarget.com talking about the advantages of the cloud-hosted desktop model. Among other things, it listed some of the reasons why businesses are deploying DaaS, which align quite well with what we’ve experienced:

  • IaaS - Businesses are finding that as they move their data and server applications into the cloud, the user experience can degrade, because they’re moving farther and farther away from the clients and users who access them. That’s reminiscent of our post a few months ago about the concept of “Data Gravity.” In that post, we made reference to the research by Jim Gray of Microsoft, who concluded that, compared to the cost of moving bytes around, everything else is essentially free. Our contention is that your application execution platform should be wherever your data is. If your data is in the cloud, it just makes sense to have a cloud-hosted desktop to run the applications that access that data.
  • Seasonality - Businesses whose employee count varies significantly over the course of the year may find that the pay-as-you-go model of DaaS makes more sense than building an on-site infrastructure that will handle the seasonal peak.
  • DR/BC - This can be addressed two ways: First, simply having your data and applications in a state-of-the-art data center gives you protection against localized disasters at your office location. If your cloud hosting provider offers data replication to geo-redundant data centers, that’s even better, because you’re also protected against a catastrophic failure of the data center itself. Second, you can replicate the data (and, optionally, even replicate server images) from your on-site infrastructure to a cloud storage repository, and have your hosting provider provision servers and desktops on demand in the event of a disaster - or, although this would cost a bit more, have them already provisioned so they could simply be turned on.
  • Cost - techtarget.com points out that DaaS allows businesses to gain the benefits of virtual desktops without having to acquire the in-house knowledge and skills necessary to deploy VDI themselves. While this is a true statement, it may be difficult to build a reliable ROI justification around it. We’ve found that it often is possible to see a positive ROI if you compare the cost of doing a “forklift upgrade” of servers and server software to the cost of simply moving everything to the cloud and never buying servers or server software again.

It’s worth taking a few minutes to read the entire article on techtarget.com (note - registration may be required to access some content). And, of course, it’s always nice to know we’re not the only ones who think there are some compelling advantages to cloud-hosted desktops!

Windows XP - Waiting for the Other Shoe to Drop

As nearly everyone knows, Microsoft ended all support for Windows XP on April 8. To Microsoft’s credit, they chose to include Windows XP in the emergency patch that they pushed out last night for the “zero day” IE/Flash vulnerability, even though they didn’t have to, and had initially indicated that they wouldn’t. (Of course, the bad press that would have ensued had they not done so would have been brutal. Still, kudos to them for doing it. Given that so many of us criticize them when they do something wrong, it’s only fair that we recognize them when they do something right.)

But what about next time?

The fact is that if you are still running Windows XP on any PC that has access to the Internet, your business is at risk – and that risk will increase as time goes on. The IE/Flash issue should be a huge wake-up call to that effect.

Windows XP was a great operating system, and met the needs of most businesses for many, many years. However, Windows 7 and Windows 8 really are inherently more secure than Windows XP. Moreover, the realities of the software business are such that no vendor, including Microsoft, can continue to innovate and create new and better products while simultaneously supporting old products indefinitely. The “End of Life” (EOL) date for WinXP was, in fact, postponed multiple times by Microsoft, but at some point they had to establish a firm date, and April 8 was that date. The patch that was pushed out last night may be the last one we see for WinXP. When the next major vulnerability is discovered – and it’s “when,” not “if” – you may find that you’re on your own.

Moving forward, it’s clear that you need to get Windows XP out of your production environment. The only exception to this would be a system that’s isolated from the Internet and used for a specific purpose such as running a particular manufacturing program or controlling a piece of equipment. Unfortunately, a lot of the Windows XP hardware out there simply will not support Windows 7 or Windows 8 – either because it’s underpowered, or because drivers are not available for some of the hardware components. So some organizations are faced with the prospect of writing a big check that they weren’t prepared to write for new hardware if they want to get off of Windows XP altogether – and telling them that they had plenty of warning and should have seen this coming may be true, but it isn’t very helpful. Gartner estimates that between 20 and 25 percent of enterprise systems are still running XP, so we’re talking about a lot of systems that need to be dealt with.

Toby Wolpe has a pretty good article over on zdnet.com about 10 steps organizations can take to cut security risks while completing the migration to a later operating system. The most sobering one is #9 – “Plan for an XP breach,” because if you keep running XP, you will eventually be compromised…so you may as well plan now for how you’re going to react to contain the damage and bring things back to a known-good state.

One suggestion we would add to Toby’s list of 10 is to consider moving to the cloud. Many of the actions on Toby’s list are intended to lock the system down by restricting apps, removing admin rights, disabling ports and drives, etc., which may make the system safer, but will also impact usability. However, a tightly locked-down XP system might make an acceptable client device for accessing a cloud hosted desktop. Alternately, you could wipe the XP operating system and install specialized software (generally Linux-based) that essentially turns the hardware into a thin client device.

But the one thing you cannot do is nothing. In the words of Gartner fellow Neil MacDonald (quoted in Toby’s article), “we do not believe that most organizations – or their auditors – will find this level of risk acceptable.”

So You Want to Be a Hosting Provider? (Part 3)

In Part 1 of this series, we discussed the options available to aspiring hosting providers:

  1. Buy hardware and build it yourself.
  2. Rent hardware and build it yourself.
  3. Rent VMs (e.g., Amazon, Azure) and build it yourself.
  4. Partner with someone who has already built it.

We went on to address the costs and other considerations of buying or renting hardware.

Then, in Part 2, we discussed using the Amazon EC2 cloud, with cost estimates based on the pricing tool that Citrix provides as part of the Citrix Service Provider program. We stressed that Amazon has built a great platform for hosting infrastructures that serve thousands of users, provided that you’ve got the cash up front to pay for reserved instances, and that your VMs only need to run for an average of 14 hours per day.

Our approach is a little different.

First, we believe that VARs and MSPs need a platform that will do an excellent job for their smaller customers – particularly those who do not have a large staff of IT professionals, or those who are using what AMI Partners, in a study they did on behalf of Microsoft, referred to as an “Involuntary IT Manager” (IITM). These are the people who end up managing their organizations’ IT infrastructures because they have an interest in technology, or perhaps because they just happen to be better at it than anyone else in the organization, but who have other job responsibilities unrelated to IT. Often these individuals are senior managers, partners, or owners, and in nearly all cases could bring more value to the organization if they could spend 100% of their time doing what they were originally hired to do. Getting rid of on-site servers and moving data and applications to a private hosted cloud will allow these people to regain that lost productivity.

Second, we believe that most of these customers are going to need access to their cloud infrastructure on a 24/7 basis. Smaller companies tend to be headed by entrepreneurial people who don’t work traditional hours, and who tend to hire managers who also don’t work traditional hours. Turning their systems off for 10 hours per day to save on run-time costs simply isn’t going to be acceptable.

Third, we believe that the best mix of security and cost-effectiveness for most customers is to have a multi-tenant Active Directory, Exchange, and SharePoint infrastructure, but to dedicate one or more XenApp server(s) to each customer, along with a file server and whatever other application servers they may require (e.g., SQL Server, accounting server, etc.). This is done not only for security reasons, but to avoid “noisy neighbor” problems from poorly behaved applications (or users).

In VirtualQube’s multi-tenant hosting infrastructure, each customer is a separate Organizational Unit (OU) in our Active Directory. Each customer’s servers are in a separate OU, and are isolated on a customer-specific VLAN. Access from the public Internet is secured with a shared WatchGuard perimeter firewall and a Citrix NetScaler SSL/VPN appliance. Multi-tenant customers who need a permanent VPN connection to one or more office locations can have their own Internet port and their own firewall.
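As a rough illustration of that per-customer OU layout (not our actual provisioning tooling), here's a hedged Python sketch using the ldap3 library. The domain, credentials, and OU names are all hypothetical:

```python
from ldap3 import ALL, Connection, Server

# Hypothetical directory and service account - substitute real values.
server = Server("ldaps://dc01.cloud.example.com", get_info=ALL)
conn = Connection(server, user="CLOUD\\svc-provision",
                  password="not-a-real-password", auto_bind=True)

customer = "CustomerA"
base = "OU=Customers,DC=cloud,DC=example,DC=com"

# One OU per customer, with a child OU holding that customer's servers.
conn.add(f"OU={customer},{base}", "organizationalUnit")
conn.add(f"OU=Servers,OU={customer},{base}", "organizationalUnit")

conn.unbind()
```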

We also learned early on that some customers prefer not to participate in any kind of multi-tenant infrastructure, and others are prevented from doing so by security and compliance regulations. To accommodate these customers, we provision completely isolated environments with their own Domain Controllers, Exchange Servers, etc. A customer that does not participate in our multi-tenant infrastructure always gets a customer-specific firewall and NetScaler, and customer-specific Domain Controllers. At their option, they can still use our multi-tenant Exchange Server, or have their own.

Finally, we believe that many VARs and MSPs will benefit from prescriptive guidance for not just how to build a hosting infrastructure, but how to sell it. That’s why our partners have access to a document template library that covers how to do the necessary discovery to properly scope a cloud project, how to determine what cloud resources will be required and how to price out a customized private hosted cloud environment, how to position the solution to the customer, how to write the final proposal, how to handle customer data migration, and much, much more.

We believe that partnering with VirtualQube makes sense for VARs and MSPs because that’s the world we came from. Our hosting platform was built by a VAR/MSP for VARs/MSPs, and we used every bit of the experience we gained from twenty years of working with Citrix technology. That’s the VirtualQube difference.