Tag Archives: Virtualization

DataCore Releases “State of Software Defined Storage” Report

Last week, we wrote about DataCore’s recent release of v10.0 of their flagship SANsymphony-V software-defined storage product. The features and functionality in the new release were, no doubt, driven in part by DataCore’s own research, reflected in their recently released fourth annual survey of global IT professionals, which was conducted to identify current storage challenges and the forces driving demand for software-defined storage. Here are some of the highlights from that survey, which can be downloaded in its entirety from the DataCore Web site:

  • 41% of respondents said that a primary factor impeding organizations from considering different models and manufacturers of storage devices was the plethora of tools required to manage them.
  • 37% of respondents said that the difficulty of migrating between different models and generations of storage devices was also a major impediment.
  • 39% of respondents said that these issues were not a concern for them, because they were using independent storage virtualization software to pool storage devices of different models and from different manufacturers and manage them centrally.
  • Despite all the talk about the “all flash data center,” nearly two-thirds of respondents (63%) said that they currently have less than 10% of their capacity assigned to flash storage.
  • Nearly 40% of respondents said they were not planning to use flash or SSDs for server virtualization projects because of cost concerns.
  • 23% of respondents ranked performance degradation or the inability to meet performance expectations as the most serious obstacle when virtualizing server workloads; 32% ranked it as somewhat of an obstacle.
  • The highest-ranking reasons that organizations deployed storage virtualization software were improving disaster recovery and business continuity (32%) and enabling storage capacity expansion without disruption (30%).

DataCore is a pioneer and market leader in software-defined storage. Read more about DataCore and VirtualQube at www.VirtualQube.com/DataCore.

Is It Time to Upgrade Your DataCore SANsymphony-V?

A few months ago, DataCore released SANsymphony-V 10.0. If you’re running an earlier version of SANsymphony-V, there are several reasons why you might want to start planning your upgrade. There are some great new features in v10, and we’ll get to those in a moment, but you should also bear in mind that DataCore’s support policy covers the current full release (v10) and the release immediately prior to it (v9.x). Support for v8.x officially ends on December 31, 2014, and support for v7.x ended last June.

That doesn’t mean DataCore won’t help you if you have a problem with an earlier version. It does mean that their obligation is limited to “best effort” support, and does not extend to bug fixes, software updates, or root-cause analysis of issues you may run into. So, if you’re on anything earlier than v9.x, you really should talk to us about upgrading.

But even if you’re on v9.x, there are some good reasons why you may want to upgrade to 10.0:

  • Scalability has doubled from 16 to a maximum of 32 nodes.
  • Support for high-speed 40/56 GbE iSCSI, 16 Gb Fibre Channel, and iSCSI target NIC teaming.
  • Performance visualization/heat map tools to give you better insight into the behavior of flash and disk storage tiers.
  • New auto-tiering settings to optimize expensive resources like flash cards.
  • Intelligent disk rebalancing to dynamically redistribute the load across available disks within a storage tier.
  • Automated CPU load leveling and flash optimization.
  • Disk pool optimization and self-healing storage: disk contents are automatically restored across the remaining storage in the pool.
  • New self-tuning caching algorithms and optimizations for flash cards and SSDs.
  • Simple configuration wizards to rapidly set up different use cases.

And if that’s not enough, v10 now allows you to provision high-performance virtual SANs that can scale to more than 50 million IOPS and up to 32 Petabytes of capacity across a cluster of 32 servers. Not sure whether a virtual SAN can deliver the performance you need? They’ll give you a free virtual SAN for non-production evaluation use.
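
Those cluster-wide ceilings are easier to picture per node. Purely as arithmetic (and not as a performance guarantee for any particular configuration), here is what the published maximums average out to across a fully built-out 32-node cluster:

```python
# Per-node averages implied by the stated v10 ceilings.
# Simple arithmetic only -- not a performance guarantee for any configuration.
total_iops = 50_000_000    # "more than 50 million IOPS" across the cluster
total_capacity_pb = 32     # 32 Petabytes of capacity
max_nodes = 32             # maximum cluster size in v10

print(f"Average IOPS per node:     {total_iops / max_nodes:,.0f}")            # ~1,562,500
print(f"Average capacity per node: {total_capacity_pb / max_nodes:.1f} PB")   # 1.0 PB
```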


Cloud-Based VDI vs. DaaS - Is There a Difference?

With nearly all new technologies in the IT space comes confusion over terminology. Some of the confusion is simply because the technology is new, and we’re all trying to understand how it works and how – or whether – it fits the needs of our businesses. Unfortunately, some of the confusion is often caused by technology vendors who want to find a way to label their products in a way that associates them with whatever is perceived as new, cool, innovative, cutting-edge, etc. Today, we’re seeing that happen with terms like “cloud,” “DaaS,” and “VDI.”

“VDI” stands for Virtual Desktop Infrastructure. Taken literally, it’s an infrastructure that delivers virtual desktops to users. What is a virtual desktop? It is a (usually Windows) desktop computing environment where the user interface is abstracted and delivered to a remote user over a network using some kind of remote display protocol such as ICA, RDP, or PCoIP. That desktop computing environment is most often virtualized using a platform such as VMware, Hyper-V, or XenServer, but could also be a blade PC or even an ordinary desktop PC. If the virtual desktop is delivered by a service provider (such as VirtualQube) for a monthly subscription fee, it is often referred to as “Desktop as a Service,” or “DaaS.”

There are a number of ways to deliver a virtual desktop to a user (summarized in the sketch after this list):

  • Run multiple, individual instances of a desktop operating system (e.g., Windows 7 or Windows 8) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. Citrix XenDesktop, VMware View, and Citrix VDI-in-a-Box are all products that enable this model.
  • Run multiple, individual instances of a server operating system (e.g., Server 2008 R2 or 2012 R2) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. In such a case, a policy pack can be applied that will make the 2008 R2 desktop look like Windows 7, and the 2012 R2 desktop look like Windows 8. In a moment we’ll discuss why you might want to do that.
  • Run multiple, individual desktops on a single, shared server operating system, using Microsoft Remote Desktop Services (with or without added functionality from products such as Citrix XenApp). This “remote session host,” to use the Microsoft term, can be a virtual server or a physical server. Once again, the desktop can be made to look like a Windows 7 or Windows 8 desktop even though it’s really a server OS.
  • Use a brokering service such as XenDesktop to allow remote users to connect to blade PCs in a data center, or even to connect to their own desktop PCs when they’re out of the office.
  • Use client-side virtualization to deliver a company-managed desktop OS instance that will run inside a virtualized “sandbox” on a client PC, such as is the case with Citrix XenClient, or the Citrix Desktop Player for Macintosh. In this case, the virtual desktop can be cached on the local device’s hard disk so it can continue to be accessed after the client device is disconnected from the network.
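
To keep those five approaches straight, here is a minimal sketch in Python (the class and field names are our own, chosen purely for illustration) capturing the two distinctions that matter most in what follows: what operating system the user actually gets, and whether that OS instance is dedicated to one user or shared.

```python
from dataclasses import dataclass

@dataclass
class DeliveryModel:
    name: str                # delivery approach described in the list above
    guest_os: str            # what the user's desktop session actually runs on
    per_user_instance: bool  # dedicated OS instance per user?
    runs_on: str             # where the workload executes

# Summary of the five approaches above; the values paraphrase the post,
# and the structure itself is only for clarity.
MODELS = [
    DeliveryModel("Desktop-OS VDI (XenDesktop, View, VDI-in-a-Box)",
                  "Windows 7/8", True, "hypervisor host"),
    DeliveryModel("Server-OS VDI with a desktop look-and-feel",
                  "Server 2008 R2/2012 R2", True, "hypervisor host"),
    DeliveryModel("Remote session host (RDS, optionally with XenApp)",
                  "Server 2008 R2/2012 R2", False, "shared virtual or physical server"),
    DeliveryModel("Brokered physical desktop (blade PC or office PC)",
                  "Windows 7/8", True, "physical PC in the data center or office"),
    DeliveryModel("Client-side virtualization (XenClient, Desktop Player)",
                  "Windows 7/8", True, "the user's own device"),
]

for m in MODELS:
    sharing = "dedicated instance" if m.per_user_instance else "shared instance"
    print(f"{m.name}: {m.guest_os} ({sharing}) on {m.runs_on}")
```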

Although any of the above approaches could be lumped into the “VDI” category, the common usage that seems to be emerging is to use the term “VDI” to refer specifically to approaches that deliver an individual operating system instance (desktop or server) to each user. From a service provider perspective, we would characterize that as cloud-based VDI. So, to answer the question we posed in the title of this post, cloud-based VDI is one variant of DaaS, but not all DaaS is delivered using cloud-based VDI – and for a good reason.

Microsoft has chosen not to put its desktop operating systems on the Service Provider License Agreement (“SPLA”). That means there is no legal way for a service provider such as VirtualQube to provide a customer with a true Windows 7 or Windows 8 desktop and charge by the month for it. The only way that can be done is for the customer to purchase all the licenses that would be required for their own on-site VDI deployment (and we’ve written extensively about what licenses those are), and provide those licenses to the service provider, which must then provision dedicated hardware for that customer. That hardware cannot be used to provide any services to any other customer. (Anyone who tells you that there’s any other way to do this is either not telling you the truth, or is violating the Microsoft SPLA!)

Unfortunately, the requirement for dedicated hardware will, in many cases, make the solution unaffordable. Citrix recently published the results of a survey of Citrix Service Providers. They received responses from 718 service providers in 25 countries. 70% of them said that their average customer had fewer than 100 employees. 40% said their average customer had fewer than 50 employees. It is simply not cost-effective for a service provider to dedicate hardware to a customer of that size, and unlikely that it could be done at a price the customer would be willing to pay.

On the other hand, both Microsoft and Citrix have clean, easy-to-understand license models for Remote Desktop Services and XenApp, which is the primary reason why nearly all service providers, including VirtualQube, use server-hosted desktops as their primary DaaS delivery method. We all leverage the policy packs that can make a Server 2008 R2 desktop look like a Windows 7 desktop, and a 2012 R2 desktop look like a Windows 8 desktop, but you’re really not getting Windows 7 or Windows 8, and Microsoft is starting to crack down on service providers who fail to make that clear.

Unfortunately, there are still some applications out there that will not run well – or will not run at all – in a remote session hosted environment. There are a number of reasons for this:

  • Some applications check for the OS version as part of their installation routines, and simply abort the installation if you’re trying to install them on a server OS.
  • Some applications will not run on a 64-bit platform – and Server 2008 R2 and 2012 R2 are both exclusively 64-bit platforms.
  • Some applications do not follow proper programming conventions, and insist on doing things like writing temp files to a hard-coded path like C:\temp. If you have multiple users running that application on the same server via Remote Desktop Services, and each instance of the application is trying to write to the same temp file, serious issues will result. Sometimes we can use application isolation techniques to redirect the writes to a user-specific path (see the sketch after this list), but sometimes we can’t.
  • Some applications are so demanding in terms of processor and RAM requirements that anyone else trying to run applications on the same server will experience degraded performance.
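
To make the hard-coded-path problem concrete, here is a minimal sketch in Python (the file and folder names are made up for illustration) showing why a single shared temp path breaks down when several users run the same application on one remote session host, and what redirecting that path to a per-user location accomplishes:

```python
import getpass
from pathlib import Path

# The problem: the application hard-codes one shared scratch file, so every
# user session on the same remote session host writes to the same place.
HARD_CODED = Path(r"C:\temp\app_scratch.tmp")

# The idea behind redirection/isolation: give each session its own path.
# (Real application-isolation tools intercept the file I/O transparently;
# this only illustrates the before-and-after paths.)
def per_user_scratch(filename: str = "app_scratch.tmp") -> Path:
    user = getpass.getuser()
    return Path(rf"C:\Users\{user}\AppData\Local\Temp") / filename

if __name__ == "__main__":
    print("Hard-coded (shared by every session):", HARD_CODED)
    print("Redirected (unique per user):        ", per_user_scratch())
```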

There’s not much that a service provider can do to address the first two of these issues, short of going the dedicated-hardware route (for those customers who are large enough to afford it) and provisioning true Windows 7 or Windows 8 desktops. But there is a creative solution for the third and fourth issues, and that’s to use VDI technology to provision individual instances of Server 2008 R2 or Server 2012 R2 per user. From the licensing perspective, it’s no different than supporting multiple users on a remote session host. Once the service provider has licensed a virtualization host for Windows Datacenter edition, there is no limit to the number of Windows Server instances that can be run on that host – you can keep spinning them up until you don’t like the performance anymore. And the Citrix and Microsoft user licensing is the same whether the user has his/her own private server instance, or is sharing the server OS instance with several other users via Remote Desktop Services.

On the positive side, this allows an individual user to be guaranteed a specified amount of CPU and RAM to handle those resource-intensive applications, avoids “noisy neighbor” issues where a single user impacts the performance of other users who happen to be sharing the same Remote Desktop Server, and allows support of applications that just don’t want to run in a multi-user environment. It’s even possible to give the user the ability to install his/her own applications – this may be risky in that the user could break his/her own virtual server instance, but at least the user can’t affect anyone else.

On the negative side, this is a more expensive alternative simply because it is a less efficient way to use the underlying virtualization host. Our tests indicate that we can probably support an average of 75 individual virtual instances of Server 2008 or Server 2012 for VDI on a dual-processor virtualization host with, say, 320 GB or so of RAM. We can support 200 – 300 concurrent users on the same hardware by running multiple XenApp server instances on it rather than an OS instance per user.
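
To put rough numbers on that efficiency gap, here is a back-of-the-envelope sketch in Python using the figures above (the per-user RAM numbers it prints are simple averages across the host, not sizing guidance):

```python
# Back-of-the-envelope density comparison using the figures above.
host_ram_gb = 320   # dual-processor virtualization host with ~320 GB of RAM

# Server VDI: roughly 75 individual Server 2008/2012 instances per host.
vdi_instances = 75
print(f"VDI: ~{host_ram_gb / vdi_instances:.1f} GB of host RAM per user instance")

# Remote session hosts (RDS/XenApp): 200-300 concurrent users on the same host.
for users in (200, 300):
    print(f"RDS/XenApp, {users} users: ~{host_ram_gb / users:.1f} GB of host RAM per user")
```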

That said, we believe there are times when the positives of cloud-based VDI are worth the extra money, which is why we offer both cloud-based VDI and remote session hosted DaaS powered by Remote Desktop Services and XenApp.

Countdown to July 14, 2015

In case you haven’t heard, Microsoft will end support for Windows Server 2003 on July 14, 2015. A quick glance at the calendar will confirm that this is now less than a year away. So this is your friendly reminder that if you are still running 2003 servers in production, and you haven’t yet begun planning how you’re going to replace them, you darn well better start soon. Here are a few questions to get you started:

  • Are those 2003 servers already virtualized, or do you still have physical servers that will need to be retired/replaced?
  • If you have physical 2003 servers, do you have a virtualized infrastructure that you can use for their replacements? (If not, this is a great opportunity to virtualize. If so, do you have enough available capacity on your virtualization hosts? How about storage capacity on your SAN?)
  • Can the application workloads on those 2003 servers be moved to 2008 or 2012 servers? If not, what are your options for upgrading those applications to something that will run on a later server version?
  • What impact will all this have on your 2015 budget? Have you already budgeted for this? If not, do you still have time to get this into your next budget?
  • Would it make more sense from a budget perspective to move those application workloads to the cloud instead of purchasing server upgrades? (Maybe a monthly operating expense will be easier to deal with than the capital expenditure of purchasing the upgrades; the sketch after this list shows one simple way to frame that comparison.)
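
Here is a minimal sketch in Python of that last comparison. Every dollar figure is a placeholder to be replaced with your own quotes; none of them comes from Microsoft, VirtualQube, or anyone else.

```python
# Hypothetical capex-vs-opex comparison for the last question above.
# All dollar figures are placeholders -- substitute your own quotes.
upfront_upgrade_cost = 25_000   # e.g., new hardware + server licenses + migration labor
cloud_monthly_cost = 900        # e.g., hosting the same workloads in the cloud
horizon_months = 36             # how far out you want to compare

cloud_total = cloud_monthly_cost * horizon_months
breakeven_months = upfront_upgrade_cost / cloud_monthly_cost

print(f"Cloud cost over {horizon_months} months: ${cloud_total:,}")
print(f"Up-front upgrade cost: ${upfront_upgrade_cost:,}")
print(f"Break-even at roughly {breakeven_months:.0f} months of cloud service")
```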

According to Microsoft, there are more than 9 million 2003 servers still in production worldwide…and the clock is ticking. How many of the 9 million are yours?

Does “Shared Nothing” Migration Mean the Death of the SAN?

You’ve probably heard that Hyper-V in Windows Server 2012 supports what Microsoft is calling “Shared Nothing” live migration. You can see a demo in a video that was posted on a TechNet blog back in July.

Now don’t get me wrong - the ability to live migrate a running VM from one virtualization host to another across the network with no shared storage behind it is pretty cool. But if you read through the blog post, you’ll also see that it took 8 minutes and 40 seconds to migrate a 16 GB VM. (And I don’t know about you, but many of our customers have VMs that are substantially larger than that!) On the other hand, it took only 11 seconds to live migrate that same VM running on the same hardware when it was in a cluster with shared storage.
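
Those two data points also let you do some rough arithmetic on your own VMs. If you assume the copy time scales roughly linearly with VM size (a simplification; actual times depend on network bandwidth, storage speed, and how busy the VM is), larger VMs look like this:

```python
# Rough extrapolation from the TechNet demo: a 16 GB VM took 8 minutes
# 40 seconds to migrate with no shared storage, versus 11 seconds with it.
# Assumes migration time scales linearly with VM size -- a simplification.
demo_vm_gb = 16
demo_seconds = 8 * 60 + 40                               # 520 seconds
rate_gb_per_minute = demo_vm_gb / (demo_seconds / 60)    # ~1.85 GB/minute

for vm_gb in (40, 100, 250):
    minutes = vm_gb / rate_gb_per_minute
    print(f"{vm_gb} GB VM: roughly {minutes:.0f} minutes without shared storage")
```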

So I will submit that the answer to the question posed in the title of this post is “No” - clearly, having shared storage behind your virtualization hosts brings a level of resilience and agility far beyond what Shared Nothing migration brings. Still, for an SMB that has a small virtualization infrastructure with only two or three hosts and no shared storage, it’s a significant improvement over what they’ve historically had to go through to move a VM from one host to another: shutting the VM down, exporting it to a storage repository that can be accessed by the other host (e.g., an external USB or network-attached hard drive), importing it into the other host’s local storage, and then booting it back up. That process can easily take an hour or more, during which time the VM is shut down and unavailable.

So Shared Nothing migration is pretty cool, but, as Rob Waggoner writes in the TechNet post linked above, don’t throw your SANs out just yet.