Category Archives: Best Practices

The Case for Office 365

Update - May 7, 2015
In the original post below, we talked about the 20,000 “item” limit in OneDrive for Business. It turns out that even our old friend and Office 365 evangelist Harry Brelsford, founder of SMB Nation, and, more recently, O365 Nation, has now run afoul of this obstacle, as he describes in his blog post from May 5.

Turns out there’s another quirk with OneDrive for Business that Harry didn’t touch on in his blog (nor did we in our original post below) - OneDrive for Business is really just a front end for a Microsoft hosted SharePoint server. “So what?” you say. Well, it turns out that there are several characters that are perfectly acceptable for you to use in a Windows file or folder name that are not acceptable in a file or folder name on a SharePoint server. (For the definitive list of what’s not acceptable, see https://support.microsoft.com/en-us/kb/905231.) And if you’re trying to sync thousands of files with your OneDrive for Business account and a few of them have illegal characters in their names, the sync operation will fail and you will get to play the “find-the-file-with-the-illegal-file-name” game, which can provide you with hours of fun…
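If you do find yourself playing that game, a short script can take the tedium out of it. The character set below is based on Microsoft's published guidance in KB905231; treat it as an assumption and verify it against the KB article for your service version, since the rules have changed over time. A minimal sketch in Python:

```python
import os

# Characters SharePoint rejects in file/folder names, per KB905231.
# Assumption: verify this set against the KB article for your version.
ILLEGAL_CHARS = set('~"#%&*:<>?/\\{|}')

def find_illegal_names(root):
    """Walk a folder tree and yield full paths whose file or folder
    names contain characters SharePoint will refuse to sync."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if any(ch in ILLEGAL_CHARS for ch in name):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    # Placeholder path - point this at the folder you intend to sync.
    for bad_path in find_illegal_names(r"C:\Users\me\Documents"):
        print(bad_path)
```

Run this against the folder you intend to sync before you start, and you can rename the offenders up front instead of hunting them down after the sync fails.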

Original Post Follows
A year ago, in a blog post targeted at prospective hosting providers, we said, “…in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, backup, and repair when it breaks.”

About the same time, we wrote about the concept of “Data Gravity” - that, just as objects with physical mass exhibit inertia and attract one another in accordance with the law of gravity, large chunks of data also exhibit a kind of inertia and tend to attract other related data and the applications required to manipulate that data. This is due in part to the fact that (according to former Microsoft researcher Jim Gray) the most expensive part of computing is the cost of moving data around. It therefore makes sense that you should be running your applications wherever your data resides: if your data is in the Cloud, it can be argued that you should be running your applications there as well – especially apps that frequently have to access a shared set of back-end data.

Although these are still valid points, they do not imply that Office 365 can’t bring significant value to organizations of all sizes. There is a case to be made for Office 365, so let’s take a closer look at it:

First, Office 365 is, in most cases, the most cost-effective way to license the Office applications, especially if you have fewer than 300 users (which is the cut-off point between the “Business” and “Enterprise” O365 license plans). Consider that a volume license for Office 2013 Pro Plus without Software Assurance under the “Open Business” license plan costs roughly $500. The Office 365 Business plan – which gets you just the Office apps without the on-line services – costs $8.25/month. If you do the math, you’ll see that $500 would cover the subscription cost for five years.

But wait – that’s really not an apples-to-apples comparison, because with O365 you always have access to the latest version of Office. So we should really be comparing the O365 subscription cost to the volume license price of Office with Software Assurance, which, under the Open Business plan, is roughly $800 for the initial purchase, which includes two years of S.A., and $295 every two years after that to keep the S.A. in place. Total four-year cost under Open Business: $1,095. Total four-year cost under the Office 365 Business plan: $396. Heck, even the Enterprise E3 plan (at $20/month) is only $960 over four years.
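The arithmetic is easy to check for yourself. A quick sketch, using the list prices quoted above (your reseller pricing will vary):

```python
MONTHS = 48  # four-year comparison window

# Volume licensing (Open Business): $800 up front includes the first
# two years of Software Assurance, then $295 per 2-year S.A. renewal.
open_business = 800 + 295 * ((MONTHS // 24) - 1)

# Office 365 subscriptions, billed monthly.
o365_business = 8.25 * MONTHS
o365_e3 = 20 * MONTHS

print(open_business)   # 1095
print(o365_business)   # 396.0
print(o365_e3)         # 960
```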

But (at the risk of sounding like a late-night cable TV commercial) that’s still not all! Office 365 allows each user to install the Office applications on up to five different PCs or Macs and up to five tablets and five smart phones. This is the closest Microsoft has ever come to per-user licensing for desktop applications, and in our increasingly mobile world where nearly everyone has multiple client devices, it’s an extremely attractive license model.

Second, at a price point that is still less than comparable volume licensing over a four-year period, you can also get Microsoft Hosted Exchange, Hosted SharePoint, OneDrive for Business, Hosted Lync for secure instant messaging and Web conferencing, and (depending on the plan) unlimited email archiving and eDiscovery tools such as the ability to put users and/or SharePoint document libraries on discovery hold and conduct global searches across your entire organization for relevant Exchange, Lync, and SharePoint data. This can make the value proposition even more compelling.

So what’s not to like?

Well, for one thing, email retention in Office 365 is neither easy nor intuitive. As we discussed in our recent blog series on eDiscovery, when an Outlook user empties the Deleted Items folder, or deletes a single item from it, or uses Shift+Delete on an item in another folder (which bypasses the Deleted Items folder), that item gets moved to the “Deletions” subfolder in a hidden “Recoverable Items” folder on the Exchange server. As the blog series explains, these items can still be retrieved by the user as long as they haven’t been purged. By default, they will be purged after two weeks. Microsoft’s Hosted Exchange service allows you to extend that period (the “Deleted Items Retention Period”), but only to a maximum of 30 days – whereas if you are running your own Exchange server, you can extend the period to several years.

But the same tools that allow a user to retrieve items from the Deletions subfolder will also allow a user to permanently purge items from that subfolder. And once an item is purged from the Deletions subfolder – whether explicitly by the user or by the expiration of the Deleted Items Retention Period – that item is gone forever. The only way to prevent this from happening is to put the user on Discovery Hold (assuming you’ve subscribed to a plan which allows you to put users on Discovery Hold), and, unfortunately, there is currently no way to do a bulk operation in O365 to put multiple users on Discovery Hold – you must laboriously do it one user at a time. And if you forget to do it when you create a new user, you run the risk of having that user’s email messages permanently deleted (whether accidentally or deliberately) with no ability to recover them if, Heaven forbid, you ever find yourself embroiled in an eDiscovery action.

One way around this is to couple your Office 365 plan with a third-party archiving tool, such as Mimecast. Although this obviously adds expense, it also adds another layer of malware filtering, an unlimited archive that the user cannot alter, a search function that integrates gracefully into Outlook, and an email continuity function that allows you to send/receive email directly via a Mimecast Web interface if the Office 365 Hosted Exchange service is ever unavailable. You can also use a tool like eFolder’s CloudFinder to back up your entire suite of Office 365 data – documents as well as email messages.

And then there’s OneDrive. You might be able, with a whole lot of business process re-engineering, to figure out how to move all of your file storage into Office 365’s Hosted SharePoint offering. Of course, there would then be no way to access those files unless you’re on-line. Hence the explosive growth in the business-class cloud file synchronization market - where you have a local folder (or multiple local folders) that automatically synchronizes with a cloud file repository, giving you the ability to work off-line and, provided you’ve saved your files in the right folder, synchronize those files to the cloud repository the next time you connect to the Internet. Microsoft’s entry in this field is OneDrive for Business…but there is a rather serious limitation in OneDrive for Business as it exists today.

O365’s 1 TB of Cloud Storage per user sounds like more than you would ever need. But what you may not know is that there is a limit of 20,000 “items” per user (both a folder and a file within that folder are “items”). You’d be surprised at how fast you can reach that limit. For example, there are three folders on my laptop where all of my important work-related files are stored. One of those folders contains files that also need to be accessible by several other people in the organization. The aggregate storage consumed by those three folders is only about 5 GB – but there are 18,333 files and subfolders in those three folders. If I were trying to use OneDrive for Business to synchronize all those files to the Cloud, I would probably be less than six months away from exceeding the 20,000 item limit.
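Before committing to a sync, it's worth counting how close you already are. A minimal sketch that tallies items the way described above (every file and every folder counts as one); the path is a placeholder:

```python
import os

def count_items(root):
    """Count 'items' the way the OneDrive for Business limit does:
    every file and every folder is one item."""
    items = 0
    for dirpath, dirnames, filenames in os.walk(root):
        items += len(dirnames) + len(filenames)
    return items

if __name__ == "__main__":
    LIMIT = 20000
    # Placeholder path - point this at the folders you plan to sync.
    total = count_items(r"C:\Users\me\Documents")
    print(f"{total} items ({total / LIMIT:.0%} of the {LIMIT}-item limit)")
```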

Could I go through those folders and delete a lot of stuff I no longer need, or archive them off to, say, a USB drive? Sure I could – and I try to do that periodically. I dare say that you probably also have a lot of files hanging around on your systems that you no longer need. But it takes time to do that grooming – and what’s the most precious resource that most of us never have enough of? Yep, time. My solution is to use Citrix ShareFile to synchronize all three of those folders to a Cloud repository. We also offer Anchor Works (now owned by eFolder) for business-class Cloud file synchronization. (And there are good reasons why you might choose one over the other, but they’re beyond the scope of this article.)

The bottom line is that, while Office 365 still may not be a complete solution that will let you move your business entirely to the cloud and get out of the business of supporting on-prem servers, it can be a valuable component of a complete solution. As with so many things in IT, there is not necessarily a single “right” way to do anything. There are multiple approaches, each with pros and cons, and the challenge is to select the right combination of services for a particular business need. We believe that part of the value we can bring to the table is to help our clients select that right combination of services – whether it be a VirtualQube hosted private cloud, a private cloud on your own premise, in your own co-lo, or in a public infrastructure such as Amazon or Azure, or a public/private hybrid cloud deployment – and to help our clients determine whether one of the Office 365 plans should be part of that solution. And if you use the Office Suite at all, the answer to that is probably “yes” - it’s just a matter of which plan to choose.

Windows Server 2003 - Four Months and Counting

Unless you’ve been living in a cave in the mountains for the last several months, you’re probably aware that Windows Server 2003 hits End of Life on July 14, 2015 – roughly four months from now. That means Microsoft will no longer develop or release security patches or fixes for the OS. You will no longer be able to call Microsoft for support if you have a problem with your 2003 server. Yet, astoundingly, only a few weeks ago Microsoft was estimating that there were still over 8 million 2003 servers in production.

Are some of them yours? If so, consider this: As Mike Boyle pointed out in his blog last October, you’re running a server OS that was released the year Facebook creator Mark Zuckerberg entered college; the year Myspace (remember them?) was founded; the year the Tampa Bay Buccaneers won the Super Bowl. Yes, it was that long ago.

Do you have to deal with HIPAA or PCI compliance? What would it mean to your organization if you didn’t pass your next audit? Because you probably won’t if you’re still running 2003 servers. And even if HIPAA or PCI aren’t an issue, what happens when (not if) the next big vulnerability is discovered and you have no way to patch for it?

Yes, I am trying to scare you – because this really is serious stuff, and if you don’t have a migration plan yet, you don’t have much time to assemble one. Please, let’s not allow this to become another “you can have it when you pry it from my cold dead hands” scenario like Windows XP. There really is too much at stake here. You can upgrade. You can move to the cloud. Or you can put your business at risk. It’s your call.

Seven Security Risks from Consumer-Grade File Sync Services

[The following is courtesy of Anchor - an eFolder company and a VirtualQube partner.]

Consumer-grade file sync solutions (referred to hereafter as “CGFS solutions” to conserve electrons) pose many challenges to businesses that care about control and visibility over company data. You may think that you have nothing to worry about in this area, but the odds are that if you have not provided your employees with an approved business-grade solution, you have multiple people using multiple file sync solutions that you don’t even know about. Here’s why that’s a problem:

  1. Data theft - Most of the problems with CGFS solutions emanate from a lack of oversight. Business owners are not privy to when an instance is installed, and are unable to control which employee devices can or cannot sync with a corporate PC. Use of CGFS solutions can open the door to company data being synced (without approval) across personal devices. These personal devices, which accompany employees on public transit, at coffee shops, and with friends, exponentially increase the chance of data being stolen or shared with the wrong parties.
  2. Data loss - Lacking visibility over the movement of files or file versions across end-points, CGFS solutions may improperly back up (or fail to back up at all) files that were modified on an employee device. If an end-point is compromised or lost, this lack of visibility can result in the inability to restore the most current version of a file…or any version, for that matter.
  3. Corrupted data - In a study by CERN, silent data corruption was observed in 1 out of every 1500 files. While many businesses trust their cloud solution providers to make sure that stored data maintains its integrity year after year, most CGFS solutions don’t implement data integrity assurance systems to ensure that any bit-rot or corrupted data is replaced with a redundant copy of the original.
  4. Lawsuits - CGFS solutions give carte blanche power to end-users over the ability to permanently delete and share files. This can result in the permanent loss of critical business documents as well as the sharing of confidential information that can break privacy agreements in place with clients and third-parties.
  5. Compliance violations - Since CGFS solutions have loose (or non-existent) file retention and file access controls, you could be setting yourself up for a compliance violation. Many compliance policies require that files be held for a specific duration and only be accessed by certain people; in these cases, it is imperative to employ strict controls over how long files are kept and who can access them.
  6. Loss of accountability - Without detailed reports and alerts over system-level activity, CGFS solutions can result in loss of accountability over changes to user accounts, organizations, passwords, and other entities. If a malicious admin gains access to the system, hundreds of hours of configuration time can be undone if no alerting system is in place to notify other admins of these changes.
  7. Loss of file access - Consumer-grade solutions don’t track which users and machines touched a file and at which times. This can be a big problem if you’re trying to determine the events leading up to a file’s creation, modification, or deletion. Additionally, many solutions track and associate a small set of file events which can result in a broken access trail if a file is renamed, for example.
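The "data integrity assurance" mentioned in point 3 is not magic: at its simplest, it means recording a checksum for each file and periodically re-verifying it, so that silent corruption is detected rather than silently synced. A minimal sketch of the idea in Python (a real system would also keep a redundant copy to restore from when verification fails):

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, recorded_digest):
    """Return True if the file still matches the checksum recorded
    when it was first stored - i.e., no bit-rot has crept in."""
    return sha256_of(path) == recorded_digest
```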

Allowing employees to utilize CGFS solutions can lead to massive data leaks and security breaches.

Many companies have formal policies that prohibit, or at least discourage, employees from using their own accounts. But while blacklisting common CGFS solutions may curtail the security risks in the short term, employees will ultimately find ways to get around company firewalls and restrictive policies that they feel interfere with their productivity.

The best way for a business to handle this is to deploy a company-approved application that allows IT to control the data, yet grants employees the access and functionality they feel they need to be productive.

The Great Superfishing Expedition of 2015

In a move that will probably end up in the top ten technology blunders of the year, Lenovo decided, starting in September 2014, to pre-install Superfish VisualDiscovery software on some of their PCs. (Fortunately for most of the readers of this blog, it appears that it was primarily the consumer products that were affected, not the business products.) The “visual search” concept behind Superfish is interesting - the intent is that a user could hover over a picture in their browser, and Superfish would pop up links to shopping sites that sell the item in the picture. I could see where that would be some pretty cool functionality…if the user wanted that functionality, if the user intentionally installed the software, and if the user could easily turn the functionality on and off as desired. But that’s not what happened - and here’s why it’s a big problem.

In order to perform this function when a user has an SSL-encrypted connection to a Web site, Superfish has to insert itself into the middle of that encrypted connection. It has to intercept the data coming from the shopping site, decrypt it, and then re-encrypt it before sending it on to the browser. Security geeks have a term for this - it’s called a “man-in-the-middle attack,” and it’s not something you want to willingly allow on your PC. In order to do this, Superfish installs a self-signed trusted root certificate on the PC. That means Superfish has the same level of trust as, say, the VeriSign trusted root certificate that Microsoft bakes into your Operating System so you can safely interact with all the Web sites out there that have VeriSign certificates on them…for example, your banking institution, as most financial institutions I’ve seen use VeriSign certificates on their Web banking sites. (Are you frightened yet?)

But that’s not all. Superfish installs the same root certificate on every PC that it gets installed on. And it turns out that it’s not technically difficult to recover the private encryption key from the Superfish software. That means that an attacker could generate an SSL certificate for any Web site that would be trusted by any system that has the Superfish software installed. In other words, you could be lured to a Web site that impersonated your bank, or a favorite shopping site, and you would get no security warning from your browser. You try to authenticate, and now the bad guys have your user credentials. (How about now?)
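One quick way to spot this kind of interception is to look at who actually issued the certificate your TLS client received: on an affected machine, sites appear to be signed by Superfish rather than by a public CA. A sketch of the check in Python, operating on a certificate dict shaped like the output of `ssl.SSLSocket.getpeercert()` (the sample cert below is fabricated for illustration; in practice you would inspect the cert from a live connection):

```python
# Issuer organizations known to belong to interception software.
SUSPECT_ISSUERS = {"Superfish, Inc."}

def issuer_org(peercert):
    """Pull organizationName out of the issuer field of an
    ssl.SSLSocket.getpeercert()-style dict."""
    for rdn in peercert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return ""

def looks_intercepted(peercert):
    """True if the cert chains to a known man-in-the-middle root."""
    return issuer_org(peercert) in SUSPECT_ISSUERS

# Fabricated example of what an intercepted connection might present:
intercepted_cert = {
    "subject": ((("commonName", "www.example-bank.com"),),),
    "issuer": ((("countryName", "US"),),
               (("organizationName", "Superfish, Inc."),)),
}
print(looks_intercepted(intercepted_cert))  # True
```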

Hopefully, you’re at least frightened enough to check to see if your system was one of the ones that Lenovo shipped with Superfish pre-installed. You can find that list at http://news.lenovo.com/article_display.cfm?article_id=1929. Again, it appears that the majority of the Lenovo systems on the list were consumer models, not business models. If you are one of the unlucky ones, you can find an uninstall tool at http://support.lenovo.com/us/en/product_security/superfish_uninstall.

You should also note that security experts are divided as to whether simply running uninstall tools and deleting the root certificate are sufficient. Some have recommended a new, clean installation of Windows as the safest thing to do. Unfortunately, this may require you to purchase a new copy of Windows if you don’t have one lying around…as just re-installing from whatever recovery media may have come with your new PC will probably also re-install Superfish.

Meanwhile, Lenovo has stopped pre-installing Superfish, and is doing its best to control the damage to its brand. We wish them the best of luck with that - from what we’ve seen, they make some great products…and at least one really bad decision…

Choosing the Right IT Provider

A few months ago, we wrote about how business leaders could determine when it was time to use an outside IT vendor. (See “When Should an IT Leader Use a Vendor, Part 1” and Part 2.) Once the decision has been made to seek outside help, the logical next question is how to choose the right IT vendor. Before you begin that selection process, you need to assess your organization’s needs:

  • Do you have an in-house IT staff and just need a consultant for specialty work? Or do you need to outsource a broader spectrum of services, such as comprehensive help desk support, fixed fee monitoring and support services for your workstations and/or servers, and consulting services to help you establish future technology direction? A consultant may have different pricing approaches for different types of IT projects, while the broader spectrum of services is probably best handled via a fixed-fee monthly support contract.
  • What, exactly, are you looking for? Do you need a single project completed? Are you looking for design services, deployment services, post-deployment support, or some combination of the three? Do you want your vendor to provide a complete package consisting of hardware, software, and services, or only part of the solution? Will the project be built on premise, or do you want to go to the Cloud? IT providers frequently specialize in different aspects of the IT world, so make sure you have a talk with any company you are considering to determine if they can fulfill all of your needs, or if you will need multiple providers to achieve your end goal.

After you’ve determined your needs, you will want to identify IT providers that offer the services that you need. Some providers are very specialized, and others have broad offerings. You will want to do your due diligence by checking out the provider’s own Web site as well as supporting sites such as LinkedIn, Facebook, Twitter, etc. But don’t stop there - dig deeper and examine their credentials. Look for case studies, testimonials, and references. Ask if you can actually speak to the customers who are profiled in these case studies, testimonials, and references. If you’re looking for a comprehensive support agreement, ask to review the contract to make sure all of your needs are covered and that the proposed Service Level Agreement (“SLA”) meets your requirements. Some of the questions you’ll want to answer are:

  • How qualified is the provider’s staff? Are they certified with the vendors whose products they will be working on in your environment?
  • How big is the provider’s company? Size and reach matter - you don’t want to have a service emergency and discover that the only person who knows how to work on your systems is gone on vacation. On the other hand, if your organization is small, your business may be less important to a very large provider and you may get more attentive service from a smaller one.
  • What geographical areas does the provider cover? This is obviously important if your own organization operates in more than one area, but will also be important if you’re considering a potential move or business expansion.
  • Does the SLA include a guaranteed response time? More importantly, does that guaranteed response time meet the needs of your business? It might be nice to have a one-hour guaranteed response time, but shorter guaranteed response times are likely to be more expensive…so if your business really doesn’t need that SLA, why pay for it?
  • If you’re signing a support contract, make sure you clearly understand what services are covered, what is excluded, and what your cost is for items that are excluded from coverage.

Did we miss anything that you have found to be important? Let us know in the comments.