Author Archives: Sid Herron

Sid Herron has been involved with Citrix technology since the days of WinFrame (the NT v3.51 product). He has held a number of sales and marketing management positions within the company, and currently serves as VirtualQube's Director of Sales and Marketing.

The Case for Office 365

Update - May 7, 2015
In the original post below, we talked about the 20,000 “item” limit in OneDrive for Business. It turns out that even our old friend and Office 365 evangelist Harry Brelsford, founder of SMB Nation, and, more recently, O365 Nation, has now run afoul of this obstacle, as he describes in his blog post from May 5.

Turns out there’s another quirk with OneDrive for Business that Harry didn’t touch on in his blog (nor did we in our original post below) - OneDrive for Business is really just a front end for a Microsoft hosted SharePoint server. “So what?” you say. Well, it turns out that there are several characters that are perfectly acceptable for you to use in a Windows file or folder name that are not acceptable in a file or folder name on a SharePoint server. (For the definitive list of what’s not acceptable, see https://support.microsoft.com/en-us/kb/905231.) And if you’re trying to sync thousands of files with your OneDrive for Business account and a few of them have illegal characters in their names, the sync operation will fail and you will get to play the “find-the-file-with-the-illegal-file-name” game, which can provide you with hours of fun…
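If you want to get ahead of that game, a quick script can flag the offending names before you ever kick off a sync. Here's a minimal sketch in Python – the character set and the folder path are assumptions for illustration only; check the KB article above for the definitive list for your version of SharePoint/OneDrive.

```python
# Minimal sketch: walk a folder tree and flag names OneDrive for Business /
# SharePoint may refuse to sync. The character set below is an assumption for
# illustration -- check KB 905231 (linked above) for the definitive list for
# your SharePoint/OneDrive version.
import os

ILLEGAL_CHARS = set('~"#%&*:<>?/\\{|}')  # assumed set; verify against the KB article

def find_problem_names(root):
    """Yield full paths whose file or folder name contains a suspect character."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if any(ch in ILLEGAL_CHARS for ch in name):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for path in find_problem_names(r"C:\Users\sid\Documents"):  # hypothetical path
        print(path)
```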

Original Post Follows
A year ago, in a blog post targeted at prospective hosting providers, we said, “…in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, backup, and repair when it breaks.”

About the same time, we wrote about the concept of “Data Gravity” - that, just as objects with physical mass exhibit inertia and attract one another in accordance with the law of gravity, large chunks of data also exhibit a kind of inertia and tend to attract other related data and the applications required to manipulate that data. This is due in part to the fact that (according to former Microsoft researcher Jim Gray) the most expensive part of computing is the cost of moving data around. It therefore makes sense that you should be running your applications wherever your data resides: if your data is in the Cloud, it can be argued that you should be running your applications there as well – especially apps that frequently have to access a shared set of back-end data.

Although these are still valid points, they do not imply that Office 365 can’t bring significant value to organizations of all sizes. There is a case to be made for Office 365, so let’s take a closer look at it:

First, Office 365 is, in most cases, the most cost-effective way to license the Office applications, especially if you have fewer than 300 users (which is the cut-off point between the “Business” and “Enterprise” O365 license plans). Consider that a volume license for Office 2013 Pro Plus without Software Assurance under the “Open Business” license plan costs roughly $500. The Office 365 Business plan – which gets you just the Office apps without the on-line services – costs $8.25/month. If you do the math, you’ll see that $500 would cover the subscription cost for five years.

But wait – that’s really not an apples-to-apples comparison, because with O365 you always have access to the latest version of Office. So we should really be comparing the O365 subscription cost to the volume license price of Office with Software Assurance, which, under the Open Business plan, is roughly $800 for the initial purchase (including two years of S.A.) and $295 every two years after that to keep the S.A. in place. Total four-year cost under Open Business: $1,095. Total four-year cost under the Office 365 Business plan: $396. Heck, even the Enterprise E3 plan (at $20/month) is only $960 over four years.
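For the skeptics who want to check our math, here's the same comparison as a quick back-of-the-envelope calculation (using the prices quoted above, which will certainly have changed by the time you read this):

```python
# Minimal sketch of the four-year cost comparison above (prices as quoted in
# this post; current pricing will differ).
YEARS = 4

open_business_initial = 800      # Office Pro Plus + 2 years of Software Assurance
open_business_renewal = 295      # S.A. renewal, every 2 years after the first term
volume_license_total = open_business_initial + open_business_renewal * ((YEARS - 2) // 2)

o365_business = 8.25 * 12 * YEARS    # Office 365 Business, $8.25/user/month
o365_e3 = 20.00 * 12 * YEARS         # Office 365 Enterprise E3, $20/user/month

print(f"Open Business w/ S.A.: ${volume_license_total:,.2f}")  # $1,095.00
print(f"O365 Business:         ${o365_business:,.2f}")         # $396.00
print(f"O365 Enterprise E3:    ${o365_e3:,.2f}")               # $960.00
```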

But (at the risk of sounding like a late-night cable TV commercial) that’s still not all! Office 365 allows each user to install the Office applications on up to five different PCs or Macs and up to five tablets and five smart phones. This is the closest Microsoft has ever come to per-user licensing for desktop applications, and in our increasingly mobile world where nearly everyone has multiple client devices, it’s an extremely attractive license model.

Second, at a price point that is still less than comparable volume licensing over a four-year period, you can also get Microsoft Hosted Exchange, Hosted SharePoint, OneDrive for Business, Hosted Lync for secure instant messaging and Web conferencing, and (depending on the plan) unlimited email archiving and eDiscovery tools such as the ability to put users and/or SharePoint document libraries on discovery hold and conduct global searches across your entire organization for relevant Exchange, Lync, and SharePoint data. This can make the value proposition even more compelling.

So what’s not to like?

Well, for one thing, email retention in Office 365 is neither easy nor intuitive. As we discussed in our recent blog series on eDiscovery, when an Outlook user empties the Deleted Items folder, or deletes a single item from it, or uses Shift+Delete on an item in another folder (which bypasses the Deleted Items folder), that item gets moved to the “Deletions” subfolder in a hidden “Recoverable Items” folder on the Exchange server. As the blog series explains, these items can still be retrieved by the user as long as they haven’t been purged. By default, they will be purged after two weeks. Microsoft’s Hosted Exchange service allows you to extend that period (the “Deleted Items Retention Period”), but only to a maximum of 30 days – whereas if you are running your own Exchange server, you can extend the period to several years.

But the same tools that allow a user to retrieve items from the Deletions subfolder will also allow a user to permanently purge items from that subfolder. And once an item is purged from the Deletions subfolder – whether explicitly by the user or by the expiration of the Deleted Items Retention Period – that item is gone forever. The only way to prevent this from happening is to put the user on Discovery Hold (assuming you’ve subscribed to a plan which allows you to put users on Discovery Hold), and, unfortunately, there is currently no way to do a bulk operation in O365 to put multiple users on Discovery Hold – you must laboriously do it one user at a time. And if you forget to do it when you create a new user, you run the risk of having that user’s email messages permanently deleted (whether accidentally or deliberately) with no ability to recover them if, Heaven forbid, you ever find yourself embroiled in an eDiscovery action.
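If you do go down the hold route, a little scripting can at least take some of the tedium out of the one-user-at-a-time limitation. The sketch below simply generates the per-user Exchange Online PowerShell commands from a plain list of users – it assumes litigation hold via the Set-Mailbox cmdlet is the mechanism available on your plan, so verify that (and review the output) before running anything it produces.

```python
# Minimal sketch: since there's no bulk "put everyone on hold" button, generate
# the one-per-user Exchange Online PowerShell commands from a simple user list.
# Assumes litigation hold via Set-Mailbox is the hold mechanism you want --
# verify that against your Office 365 plan before running anything.
users = ["aaron@example.com", "beth@example.com", "carlos@example.com"]  # hypothetical list

with open("enable_holds.ps1", "w") as script:
    for upn in users:
        script.write(f'Set-Mailbox -Identity "{upn}" -LitigationHoldEnabled $true\n')

print(f"Wrote {len(users)} Set-Mailbox commands to enable_holds.ps1")
```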

One way around this is to couple your Office 365 plan with a third-party archiving tool, such as Mimecast. Although this obviously adds expense, it also adds another layer of malware filtering, an unlimited archive that the user cannot alter, a search function that integrates gracefully into Outlook, and an email continuity function that allows you to send/receive email directly via a Mimecast Web interface if the Office 365 Hosted Exchange service is ever unavailable. You can also use a tool like eFolder’s CloudFinder to back up your entire suite of Office 365 data – documents as well as email messages.

And then there’s OneDrive. You might be able, with a whole lot of business process re-engineering, to figure out how to move all of your file storage into Office 365’s Hosted SharePoint offering. Of course, there would then be no way to access those files unless you’re on-line. Hence the explosive growth in the business-class cloud file synchronization market - where you have a local folder (or multiple local folders) that automatically synchronizes with a cloud file repository, giving you the ability to work off-line and, provided you’ve saved your files in the right folder, synchronize those files to the cloud repository the next time you connect to the Internet. Microsoft’s entry in this field is OneDrive for Business…but there is a rather serious limitation in OneDrive for Business as it exists today.

O365’s 1 TB of Cloud Storage per user sounds like more than you would ever need. But what you may not know is that there is a limit of 20,000 “items” per user (both a folder and a file within that folder are “items”). You’d be surprised at how fast you can reach that limit. For example, there are three folders on my laptop where all of my important work-related files are stored. One of those folders contains files that also need to be accessible by several other people in the organization. The aggregate storage consumed by those three folders is only about 5 GB – but there are 18,333 files and subfolders in those three folders. If I were trying to use OneDrive for Business to synchronize all those files to the Cloud, I would probably be less than six months away from exceeding the 20,000 item limit.
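If you're curious how close your own folders are to that ceiling, a few lines of script will count the “items” for you (the paths below are hypothetical – point it at whatever you plan to sync):

```python
# Minimal sketch: count "items" (files + folders) under the folders you plan to
# sync, to see how close you'd be to OneDrive for Business's per-user item limit
# (20,000 at the time this was written). Paths below are hypothetical.
import os

ITEM_LIMIT = 20_000
sync_roots = [r"C:\Work\Projects", r"C:\Work\Shared", r"C:\Work\Archive"]  # hypothetical

total = 0
for root in sync_roots:
    for dirpath, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)

print(f"{total:,} items across {len(sync_roots)} folders "
      f"({ITEM_LIMIT - total:,} left before the limit)")
```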

Could I go through those folders and delete a lot of stuff I no longer need, or archive them off to, say, a USB drive? Sure I could – and I try to do that periodically. I dare say that you probably also have a lot of files hanging around on your systems that you no longer need. But it takes time to do that grooming – and what’s the most precious resource that most of us never have enough of? Yep, time. My solution is to use Citrix ShareFile to synchronize all three of those folders to a Cloud repository. We also offer Anchor Works (now owned by eFolder) for business-class Cloud file synchronization. (And there are good reasons why you might choose one over the other, but they’re beyond the scope of this article.)

The bottom line is that, while Office 365 still may not be a complete solution that will let you move your business entirely to the cloud and get out of the business of supporting on-prem servers, it can be a valuable component of a complete solution. As with so many things in IT, there is not necessarily a single “right” way to do anything. There are multiple approaches, each with pros and cons, and the challenge is to select the right combination of services for a particular business need. We believe that part of the value we can bring to the table is to help our clients select that right combination of services – whether it be a VirtualQube hosted private cloud, a private cloud on your own premises, in your own co-lo, or in a public infrastructure such as Amazon or Azure, or a public/private hybrid cloud deployment – and to help our clients determine whether one of the Office 365 plans should be part of that solution. And if you use the Office Suite at all, the answer to that is probably “yes” - it’s just a matter of which plan to choose.

Hyperconvergence and the Advent of Software-Defined Everything (Part 2)

As cravings go, the craving for the perfect morning cup of tea in jolly old England rivals that of the most highly-caffeinated Pacific Northwest latte-addict. So, in the late 1800s, some inventive folks started thinking about what was actually required to get the working man (or woman) out of bed in the morning. An alarm clock, certainly. A lamp of some kind during the darker parts of the year (England being at roughly the same latitude as the State of Washington). And, most importantly, that morning cup of tea. A few patent filings later, the “Teasmade” was born. According to Wikipedia, they reached their peak of popularity in the 1960s and 1970s…although they are now seeing an increase in popularity again, partly as a novelty item. You can buy one on eBay for under $50.

The Teasmade, ladies and gentlemen, is an example of a converged appliance. It integrates multiple components – an alarm clock, a lamp, a teapot – into a pre-engineered solution. And, for its time, a pretty clever one, if you don’t mind waking up with a pot of boiling water inches away from your head. The Leatherman multi-tool is another example of a converged appliance. You get pliers, wire cutters, knife blades, Phillips-head and flat-head screwdrivers, a can/bottle opener, and, depending on the model, an awl, a file, a saw blade, etc., etc., all in one handy pocket-sized tool. It’s a great invention, and I keep one on my belt constantly when I’m out camping, although it would be of limited use if I had to work on my car.

How does this relate to our IT world? Well, in traditional IT, we have silos of systems and operations management. We typically have separate admin groups for storage, servers, and networking, and each group maintains the architecture and the vendor relationships, and handles purchasing and provisioning for the stuff that group is responsible for. Unfortunately, these groups do not always play nicely together, which can lead to delays in getting new services provisioned at a time when agility is increasingly important to business success.

Converged systems attempt to address this by combining two or more of these components as a pre-engineered solution…components that are chosen and engineered to work well together. One example is VCE’s Vblock system – so called because it bundles VMware virtualization software, Cisco UCS hardware, and EMC storage (hence the “VCE” name).

A “hyperconverged” system takes this concept a step further. It is a modular system from a single vendor that integrates all functions, with a management overlay that allows all the components to be managed via a “single pane of glass.” Hyperconverged systems are designed to scale by simply adding more modules, and can typically be managed by one team or, in some cases, one person.

VMware’s EVO:RAIL system, introduced in August of last year, is perhaps the first example of a truly hyperconverged system. VMware has arrangements with several hardware vendors, including Dell, HP, Fujitsu, and even SuperMicro, to build EVO:RAIL on their respective hardware. All vendors’ products include four dual-processor compute nodes with 192 GB of RAM each, one 400 GB SSD per node (used for caching), and three 1.2 TB hot-plug disk drives per node, all in a 2U rack-mount chassis with dual hot-plug redundant power supplies. The hardware is bundled with VMware’s virtualization software, as well as their virtual SAN. The concept is appealing – you plug it in, turn it on, and you’re 15 minutes away from building your first VM. EVO:RAIL can be scaled out to four appliances (today), with plans to increase the number of nodes in the future.
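To put those per-node numbers in perspective, here's the raw capacity of a single 2U appliance, using the specs quoted above (illustrative arithmetic only):

```python
# Quick arithmetic on the per-appliance numbers quoted above for a 2U EVO:RAIL
# building block (4 compute nodes per appliance).
NODES = 4
ram_gb_per_node = 192
ssd_gb_per_node = 400          # caching tier
hdd_tb_per_node = 3 * 1.2      # three 1.2 TB hot-plug drives

print(f"RAM per appliance:       {NODES * ram_gb_per_node} GB")        # 768 GB
print(f"Cache SSD per appliance: {NODES * ssd_gb_per_node} GB")        # 1600 GB
print(f"Raw HDD per appliance:   {NODES * hdd_tb_per_node:.1f} TB")    # 14.4 TB
```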

The good news is that it’s fast and simple, it has a small footprint (meaning it enables high-density computing), and it places lower demands on power and cooling. Todd Knapp, writing for searchvirtualdesktop.techtarget.com, says, “Hyperconverged infrastructure is a good fit for companies with branch locations or collocated facilities, as well as small organizations with big infrastructure requirements.”

Andy Warfield (from whom I borrowed the Teasmade example), writing in his blog at www.cohodata.com, is a bit more specific: “…converged architectures solve a very real and completely niche problem: at small scales, with fairly narrow use cases, converged architectures afford a degree of simplicity that makes a lot of sense. For example, if you have a branch office that needs to run 10 – 20 VMs and that has little or no IT support, it seems like a good idea to keep that hardware install as simple as possible. If you can do everything in a single server appliance, go for it!”

But Andy also points out some not-so-good news:

However, as soon as you move beyond this very small scale of deployment, you enter a situation where rigid convergence makes little or no sense at all. Just as you wouldn’t offer to serve tea to twelve dinner guests by brewing it on your alarm clock, the idea of scaling cookie-cutter converged appliances begs a bit of careful reflection.

If your environment is like many enterprises that I’ve worked with in the past, it has a big mix of server VMs. Some of them are incredibly demanding. Many of them are often idle. All of them consume RAM. The idea that as you scale up these VMs on a single server, that you will simultaneously exhaust memory, CPU, network, and storage capabilities at the exact same time is wishful thinking to the point of clinical delusion…what value is there in an architecture that forces you to scale out, and to replace at end of life, all of your resources in equal proportion?

Moreover, hyperconverged systems are, at the moment, pretty darned expensive. An EVO:RAIL system will cost you well over six figures, and locks you into a single vendor. Unlike most stand-alone SAN products, VMware’s virtual SAN won’t provision storage to physical servers. And EVO:RAIL is, by definition, VMware only, whereas many enterprises have a mixture of hypervisors in their environment. (According to Todd Knapp, saying “We’re a __________ shop” is just another way of saying “We’re more interested in maintaining homogeneity in the network than in taking advantage of innovations in technology.”) Not to mention the internal political problems: Which of those groups we discussed earlier is going to manage the hyperconverged infrastructure? Does it fall under servers, storage, or networking? Are you going to create a new group of admins? Consolidate the groups you have? It could get ugly.

So where does this leave us? Is convergence, or hyperconvergence, a good thing or not? The answer, as it often is in our industry, is “It depends.” In the author’s opinion, Andy Warfield is exactly right in that today’s hyperconverged appliances address fairly narrow use cases. On the other hand, the hardware platforms that have been developed to run these hyperconverged systems, such as the Fujitsu CX400, have broader applicability. Just think for a moment about the things you could do with a 2U rack-mount system that contained four dual-processor server modules with up to 256 GB of RAM each, and up to 24 hot-plug disk drives (6 per server module).

We’ve built a number of SMB virtualization infrastructures with two rack-mount virtualization hosts and two DataCore SAN nodes, each of which was a separate 2U server with its own power supplies. Now we can do it in ¼ the rack space with a fraction of the power consumption. Or how about two separate Stratus everRun fault-tolerant server pairs in a single 2U package?

Innovation is nearly always a good thing…but it’s amazing how often the best applications turn out not to be the ones the innovators had in mind.

Hyperconvergence and the Advent of Software-Defined Everything (Part 1)

The IT industry is one of those industries that is rife with “buzz words” – convergence, hyperconvergence, software-defined this and that, etc., etc. It can be a challenge for even the most dedicated IT professionals to keep up on all the new trends in technology, not to mention the new terms invented by marketeers who want you to think that the shiny new product they just announced is on the leading edge of what’s new and cool…when in fact it’s merely repackaged existing technology.

What does it really mean to have “software-defined storage” or “software-defined networking”…or even a “software-defined data center”? What’s the difference between “converged” and “hyperconverged?” And why should you care? This series of articles will suggest some answers that we hope will be helpful.

First, does “software-defined” simply mean “virtualized?”

No, not as the term is generally used. If you think about it, every piece of equipment in your data center these days has a hardware component and a software component (even if that software component is hard-coded into specialized integrated circuit chips or implemented in firmware). Virtualization is, fundamentally, the abstraction of software and functionality from the underlying hardware. Virtualization enables “software-defined,” but, as the term is generally used, “software defined” implies more than just virtualization – it implies things like policy-driven automation and a simplified management infrastructure.

An efficient IT infrastructure must be balanced properly between compute resources, storage resources, and networking resources. Most readers are familiar with the leading players in server virtualization, with the “big three” being VMware, Microsoft, and Citrix. Each has its own control plane to manage the virtualization hosts, but some cross-platform management is available. vCenter can manage Hyper-V hosts. System Center can manage vSphere and XenServer hosts. It may not be completely transparent yet, but it’s getting there.

What about storage? Enterprise storage is becoming a challenge for businesses of all sizes, due to the sheer volume of new information that is being created – according to some estimates, as much as 15 petabytes of new information world-wide every day. (That’s 15 million billion bytes.) The total amount of digital data that needs to be stored somewhere doubles roughly every two years, yet storage budgets are increasing only 1% - 5% annually. Hence the interest in being able to scale up and out using lower-cost commodity storage systems.
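To see why that gap matters, run the two growth rates out a few years – data doubling every two years versus a budget growing even at the optimistic end of that range (purely illustrative arithmetic, using the figures quoted above):

```python
# Minimal sketch of the squeeze described above: data that doubles every two
# years vs. a storage budget growing a few percent a year (growth rates are the
# ones quoted in this post, used here purely for illustration).
years = 6
data = 1.0      # relative data volume today
budget = 1.0    # relative storage budget today

for _ in range(years):
    data *= 2 ** 0.5      # doubling every 2 years is roughly 41% per year
    budget *= 1.05        # optimistic 5% annual budget growth

print(f"After {years} years: data x{data:.1f}, budget x{budget:.2f}")
# After 6 years: data x8.0, budget x1.34
```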

But the problem is often compounded by vendor lock-in. If you have invested in Vendor A’s enterprise SAN product, and now want to bring in an enterprise SAN product from Vendor B because it’s faster/better/less costly, you will probably find that they don’t talk to one another. Want to move Vendor A’s SAN into your Disaster Recovery site, use Vendor B’s SAN in production, and replicate data from one to the other? Sorry…in most cases that’s not going to work.

Part of the promise of software-defined storage is the ability to not only manage the storage hardware from one vendor via your SDS control plane, but also pull in all of the “foreign” storage you may have and manage it all transparently. DataCore, to cite just one example, allows you to do just that. Because the DataCore SAN software is running on a Windows Server platform, it’s capable of aggregating any and all storage that the underlying Windows OS can see into a single storage pool. And if you want to move your aging EMC array into your DR site, and have your shiny, new Compellent production array replicate data to the EMC array (or vice versa), just put DataCore’s SANsymphony-V in front of each of them, and let the DataCore software handle the replication. Want to bring in an all-flash array to handle the most demanding workloads? Great! Bring it in, present it to DataCore, and let DataCore’s auto-tiering feature dynamically move the most frequently-accessed blocks of data to the storage tier that offers the highest performance.
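To be clear, this is not DataCore’s actual algorithm – but if you’ve never thought about what auto-tiering does under the covers, here’s a toy sketch of the idea: track how often each block is read, keep the hottest blocks on the fastest tier, and demote the rest.

```python
# Toy illustration of the auto-tiering idea (not DataCore's actual algorithm):
# count how often each block is read, then keep the hottest blocks on the
# fastest tier and demote the rest.
from collections import Counter

FAST_TIER_BLOCKS = 4          # capacity of the flash tier, in blocks (tiny, for illustration)
access_counts = Counter()

def record_read(block_id):
    access_counts[block_id] += 1

def rebalance():
    """Return (blocks for the flash tier, blocks for the spinning-disk tier)."""
    ranked = [blk for blk, _ in access_counts.most_common()]
    return ranked[:FAST_TIER_BLOCKS], ranked[FAST_TIER_BLOCKS:]

# Simulate a skewed workload: a few blocks get most of the reads.
for blk in [1, 1, 1, 2, 2, 3, 4, 5, 1, 2, 6, 1]:
    record_read(blk)

hot, cold = rebalance()
print("flash tier:", hot)      # [1, 2, 3, 4]
print("disk tier: ", cold)     # [5, 6]
```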

What about software-defined networking? Believe it or not, in 2013 we reached the tipping point where there are now more virtual switch ports than physical ports in the world. Virtual switching technology is built into every major hypervisor. Major players in the network appliance market are making their technology available in virtual appliance form. For example, WatchGuard’s virtual firewall appliances can be deployed on both VMware and Hyper-V, and Citrix’s NetScaler VPX appliances can be deployed on VMware, Hyper-V, or XenServer. But again, “software-defined networking” implies the ability to automate changes to the network based on some kind of policy engine.
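What might that look like in practice? Here’s an entirely hypothetical sketch of a policy rule an SDN-style engine could evaluate – when a new VM with a given tag shows up, open the ports the policy names. None of this is a real controller API; the function below is just a placeholder.

```python
# Entirely hypothetical sketch of policy-driven network automation: a policy
# maps VM tags to the ports that should be opened when such a VM appears.
# apply_firewall_rule() is a placeholder, not a real controller API.
POLICIES = {
    "web":      {"allow_tcp": [80, 443]},
    "database": {"allow_tcp": [1433]},
}

def apply_firewall_rule(vm_name, port):
    print(f"allow tcp/{port} to {vm_name}")   # stand-in for a real API call

def on_vm_created(vm_name, tags):
    """Event handler a policy engine might run when a new VM is provisioned."""
    for tag in tags:
        for port in POLICIES.get(tag, {}).get("allow_tcp", []):
            apply_firewall_rule(vm_name, port)

on_vm_created("web01", ["web"])          # opens 80 and 443
on_vm_created("sql01", ["database"])     # opens 1433
```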

If you put all of these pieces together, vendor-agnostic virtualization + policy-driven automation + simplified management = software-defined data center. Does the SDDC exist today? Arguably, yes – one could certainly make the case that the VMware vCloud Automation Center, Microsoft’s Azure Pack, Citrix’s CloudStack, and the open-source OpenStack all have many of the characteristics of a software-defined data center.

Whether the SDDC makes business sense today is not as clear. Techtarget.com quotes Brad Maltz of Lumenate as saying, “It will take about three years for companies to learn about the software-designed data center concept, and about five to ten years for them to understand and implement it.” Certainly some large enterprises may have the resources – both financial and skill-related – to begin reaping the benefits of this technology sooner, but it will be a challenge for small and medium-sized enterprises to get their arms around it. That, in part, is what is driving vendors to introduce converged and hyperconverged products, and that will be the subject of Part 2 of this series.

Windows Server 2003 - Four Months and Counting

Unless you’ve been living in a cave in the mountains for the last several months, you’re probably aware that Windows Server 2003 hits End of Life on July 14, 2015 – roughly four months from now. That means Microsoft will no longer develop or release security patches or fixes for the OS. You will no longer be able to call Microsoft for support if you have a problem with your 2003 server. Yet, astoundingly, only a few weeks ago Microsoft was estimating that there were still over 8 million 2003 servers in production.

Are some of them yours? If so, consider this: As Mike Boyle pointed out in his blog last October, you’re running a server OS that was released the year Myspace (remember them?) was founded; the year the Tampa Bay Buccaneers won the Super Bowl; a year before Facebook even existed. Yes, it was that long ago.

Do you have to deal with HIPAA or PCI compliance? What would it mean to your organization if you didn’t pass your next audit? Because you probably won’t if you’re still running 2003 servers. And even if HIPAA or PCI aren’t an issue, what happens when (not if) the next big vulnerability is discovered and you have no way to patch for it?

Yes, I am trying to scare you – because this really is serious stuff, and if you don’t have a migration plan yet, you don’t have much time to assemble one. Please, let’s not allow this to become another “you can have it when you pry it from my cold dead hands” scenario like Windows XP. There really is too much at stake here. You can upgrade. You can move to the cloud. Or you can put your business at risk. It’s your call.

Seven Security Risks from Consumer-Grade File Sync Services

[The following is courtesy of Anchor - an eFolder company and a VirtualQube partner.]

Consumer-grade file sync solutions (referred to hereafter as “CGFS solutions” to conserve electrons) pose many challenges to businesses that care about control and visibility over company data. You may think that you have nothing to worry about in this area, but the odds are that if you have not provided your employees with an approved business-grade solution, you have multiple people using multiple file sync solutions that you don’t even know about. Here’s why that’s a problem:

  1. Data theft - Most of the problems with CGFS solutions emanate from a lack of oversight. Business owners are not privy to when an instance is installed, and are unable to control which employee devices can or cannot sync with a corporate PC. Use of CGFS solutions can open the door to company data being synced (without approval) across personal devices. These personal devices, which accompany employees on public transit, at coffee shops, and with friends, exponentially increase the chance of data being stolen or shared with the wrong parties.
  2. Data loss - Lacking visibility over the movement of files or file versions across end-points, CGFS solutions improperly back up (or do not back up at all) files that were modified on an employee device. If an end-point is compromised or lost, this lack of visibility can result in the inability to restore the most current version of a file…or any version for that matter.
  3. Corrupted data - In a study by CERN, silent data corruption was observed in 1 out of every 1500 files. While many businesses trust their cloud solution providers to make sure that stored data maintains its integrity year after year, most CGFS solutions don’t implement data integrity assurance systems to ensure that any bit-rot or corrupted data is replaced with a redundant copy of the original (a minimal sketch of that kind of integrity check appears after this list).
  4. Lawsuits - CGFS solutions give end-users carte blanche to permanently delete and share files. This can result in the permanent loss of critical business documents as well as the sharing of confidential information that can break privacy agreements in place with clients and third-parties.
  5. Compliance violations - Since CGFS solutions have loose (or non-existent) file retention and file access controls, you could be setting yourself up for a compliance violation. Many compliance policies require that files be held for a specific duration and only be accessed by certain people; in these cases, it is imperative to employ strict controls over how long files are kept and who can access them.
  6. Loss of accountability - Without detailed reports and alerts over system-level activity, CGFS solutions can result in loss of accountability over changes to user accounts, organizations, passwords, and other entities. If a malicious admin gains access to the system, hundreds of hours of configuration time can be undone if no alerting system is in place to notify other admins of these changes.
  7. Loss of file access - Consumer-grade solutions don’t track which users and machines touched a file and at which times. This can be a big problem if you’re trying to determine the events leading up to a file’s creation, modification, or deletion. Additionally, many solutions track only a small set of file events, which can result in a broken access trail if, for example, a file is renamed.
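As promised in item 3, here's a minimal sketch of the kind of integrity check a business-grade solution might perform behind the scenes: record a hash when a file is stored, re-verify it later, and repair from a redundant copy when the content no longer matches. The paths, manifest, and repair step are all hypothetical placeholders.

```python
# Minimal sketch of a data integrity check: record a SHA-256 hash for each file
# when it's stored, re-verify later, and restore any file whose content no
# longer matches from a redundant copy. Paths and manifest are hypothetical.
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_and_repair(primary_dir, replica_dir, manifest):
    """manifest maps relative file names to the hash recorded at store time."""
    for rel_name, expected in manifest.items():
        primary = Path(primary_dir) / rel_name
        if not primary.exists() or sha256(primary) != expected:
            replica = Path(replica_dir) / rel_name
            if replica.exists() and sha256(replica) == expected:
                shutil.copy2(replica, primary)   # repair from the known-good copy
                print(f"repaired {rel_name}")
            else:
                print(f"UNRECOVERABLE: {rel_name}")

# Example (hypothetical stores and hash):
# verify_and_repair("primary_store", "replica_store", {"report.docx": "abc123"})
```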

In short, allowing employees to utilize CGFS solutions can lead to massive data leaks and security breaches.

Many companies have formal policies prohibiting the use of personal file sync accounts for company data, or at least discourage employees from using them. But while blacklisting common CGFS solutions may curtail the security risks in the short term, employees will ultimately find ways to get around company firewalls and restrictive policies that they feel interfere with their productivity.

The best way for businesses to handle this is to deploy a company-approved application that allows IT to control the data, yet grants employees the access and functionality they feel they need to be productive.