Category Archives: Backup And Recovery

What’s New in DataCore SANsymphony-V 10 PSP2

A few weeks ago, DataCore released Product Service Pack 2 (PSP2) for SANsymphony-V 10. As you know if you’ve followed this blog, DataCore is a leader in Software Defined Storage…in fact, DataCore was doing Software Defined Storage before it was called Software Defined Storage. PSP2 brings some great enhancements to the product in the areas of Cloud Integration, Platform Services, and Performance Management:

  • OpenStack Integration - SANsymphony-V now integrates with “Cinder,” the OpenStack block storage service, so storage can be provisioned from the OpenStack Horizon administrative interface. That means that from a single administrative interface, you can create a new VM on your OpenStack infrastructure, and provision and attach DataCore storage to that VM (there’s a short provisioning sketch just after this list). But that’s not all - remember, SANsymphony-V can manage more than just the local storage on the DataCore nodes themselves. Because SANsymphony-V runs on a Windows Server OS, it can manage any storage that you can present to that Windows Server. Trying to figure out how to integrate your legacy SAN storage with your OpenStack deployment? Put SANsymphony-V in front of it, and let DataCore talk to OpenStack and provision the storage!
  • Random Write Accelerator - A heavy transactional workload that generates a lot of random write operations can be problematic for magnetic storage, because you have to wait while the disk heads move to the right track, then wait some more while the disk rotates to bring the right block under the head. For truly random writes, the average latency is roughly the time it takes to move the heads halfway across the platters plus the time it takes the platters to make half a rotation (a quick back-of-the-envelope calculation follows this list). One of the benefits of SSDs, of course, is that you don’t have to wait for those things, because there are no spinning platters or heads to move. But SSDs are still pretty darned expensive compared to magnetic disks. When you enable random write acceleration on a volume, write requests are fulfilled immediately by simply writing the data at the current (or nearest available) head/platter position; a “garbage collection” process then goes back later, when things are not so busy, and reclaims the stale blocks of data at the old locations. This can deliver SSD-like speed from spinning magnetic disks, and the only cost is that storage consumption will be somewhat increased by the old data that the garbage collection process hasn’t gotten around to yet.
  • Flash Memory Optimizations - Improvements have been made in how reads are cached from PCIe flash cards, in order to better utilize what can be a very costly resource.
  • Deduplication and Compression - Both have been added as options that you can enable at the storage pool level.
  • Veeam Backup Integration - When you use Veeam to back up a vSphere environment, as many organizations do, the Veeam software typically triggers vSphere snapshots, which are retained for however long it takes to back up the VM in question. This adds to the load on the VMware hosts, and can slow down critical applications. With DataCore’s Veeam integration, DataCore snapshots are taken at the SAN level and used for the backup operation instead of VMware snapshots.
  • VDI Services - DataCore has added specific support for highly available, stateful VDI deployments across clustered Hyper-V server pairs.
  • Centralized console for multiple groups - If you have multiple SANsymphony-V server groups distributed among, e.g., multiple branch offices, you no longer have to explicitly connect to each group in turn to manage it. All server groups can be integrated into the same management UI, with delegated, role-based administration for multiple individuals.
  • Expanded Instrumentation - SANsymphony-V now has tighter integration with S.M.A.R.T. alerts generated by the underlying physical storage to allow disk problems to be addressed before they become serious.
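
To make the OpenStack integration above a little more concrete, here is a minimal sketch of what provisioning and attaching a DataCore-backed volume can look like from code, using the openstacksdk Python library. The cloud profile name, the volume type (“datacore-tier1”), and the server name are illustrative assumptions only - the volume type you would actually use depends on how your Cinder back end is configured.

```python
# Minimal sketch: create a Cinder volume and attach it to an instance
# using the openstacksdk "cloud" layer. The cloud profile, volume type,
# and server name below are illustrative assumptions, not DataCore defaults.
import openstack

# Connect using a cloud profile defined in clouds.yaml
conn = openstack.connect(cloud="mycloud")

# Create a 100 GB volume; "datacore-tier1" is a hypothetical volume type
# that a Cinder back end fronted by SANsymphony-V might expose.
volume = conn.create_volume(
    size=100,
    name="app-data-01",
    volume_type="datacore-tier1",
    wait=True,
)

# Attach the new volume to an existing instance
server = conn.get_server("app-server-01")
conn.attach_volume(server, volume, wait=True)

print(f"Volume {volume.id} attached to {server.name}")
```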
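
And to put some rough numbers behind the random-write discussion, here is the back-of-the-envelope latency calculation promised above. The seek time and rotational speed are typical published figures for a 7,200 RPM SATA drive, used purely for illustration.

```python
# Rough average latency for a truly random write on a spinning disk:
# approximately the average seek time plus half a platter rotation.
# The figures below are typical of a 7,200 RPM SATA drive and are
# assumptions for illustration only.

rpm = 7200                    # rotational speed
avg_seek_ms = 8.5             # typical published average seek time (ms)

half_rotation_ms = (60_000 / rpm) / 2          # ms for half a rotation
avg_random_write_ms = avg_seek_ms + half_rotation_ms

print(f"Half rotation:       {half_rotation_ms:.2f} ms")        # ~4.17 ms
print(f"Avg random write:    {avg_random_write_ms:.2f} ms")      # ~12.67 ms
print(f"Approx. random IOPS: {1000 / avg_random_write_ms:.0f}")  # ~79
```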

For more details on PSP2 and the features listed above, you can view the following 38-minute recording of a recent Webinar we presented on the subject:

Where Did My Document Go?

It is axiomatic that many of us (perhaps most of us) don’t worry about backing up our PCs until we have a hard drive crash and lose valuable information. This is typically more of a problem with personal PCs than it is with business systems, because businesses usually go to great lengths to make sure that critical data is being backed up. (You are doing that, right? RIGHT? Of course you are. And, of course, you also have a plan for getting a copy of your most critical business data out of your office to a secure off-site location for disaster recovery purposes. Enough said about that.)

So, with business systems, the biggest challenge is making sure that users are saving files to the right place, so the backup routines can back them up. If users are saving things to their “My Documents” folder, and you’re not redirecting “My Documents” to a network folder on a server, you’ve got a big potential problem brewing. Ditto if people are saving things to their Windows Desktop, which is possibly the worst place to save things that you care about keeping.

But there’s an even more fundamental thing to remember, and to communicate to our users: The best, most comprehensive backup strategy in the world won’t save you if you forget to save your work in the first place! Even in our Hosted Private Cloud environment, where we go to great lengths to back up your data and replicate it between geo-redundant data centers, there’s not much we can do if you don’t save it.

Just as many of us have learned a painful lesson about backing up our data by having lost it, many of us have also had that sinking feeling of accidentally closing a document without saving it, or having the PC shut down due to a power interruption, and realizing that we just lost hours of work.

Microsoft has built an AutoRecover option into the Office apps in an attempt to save us from ourselves. In Word, for example, go to “File / Options / Save.” That’s where you set how often your working document will be automatically saved, as well as the location of the recovery file. But be aware that AutoRecover works really well…until it doesn’t. A Google search on the string “Word autorecovery didn’t save” returned roughly 21,000 results. That doesn’t mean you shouldn’t leverage AutoRecover - you certainly should. But take a look at the Word “Help” entry on AutoRecover.

It includes this warning: “IMPORTANT The Save button is still your best friend. To be sure you don’t lose your latest work, click Save (or press Ctrl+S) often.” Bottom line: AutoRecover may save your backside at some point…or it may not. And corporate backup routines certainly won’t rescue you if you don’t save your work. So save early and often.

And if you’re a mobile user who frequently works while disconnected from the corporate network, it’s a good idea to save your files in multiple locations. Both Microsoft (OneDrive) and Google (Google Drive) will give you 15 GB of free online storage. And if it’s too much trouble to remember to manually save (or copy) your files to more than one location, there are a variety of ways - including VirtualQube’s “follow-me data” service - to set up a folder on your PC or laptop that automatically synchronizes with a folder in the cloud whenever you’re connected to the Internet. You just have to remember to save things to that folder.

You just have to remember to save things, period. Did we mention saving your work early and often? Yeah. Save early and often. It’s the best habit you can develop to protect yourself against data loss.

High Availability and Fault Tolerance Part Two

In my last post on High Availability and Fault Tolerant servers (HA/FT), we talked a little bit about redundant power, meaning you have more than one source of electricity to run your servers. But there are numerous other internal threats that can cause unplanned server outages.

After backup power, the next level of redundancy is in the servers themselves. Most server-class machines have numerous redundant components built right in, such as hard drives and power supplies. This means that right off the shelf, these systems have some level of Fault Tolerance (FT), which can keep applications and data available when a component fails. However, there are still numerous threats that can cause unplanned outages: they occur when non-redundant components fail, or when multiple components fail.

Remember that High Availability means that if a virtual or physical machine goes down, it will automatically restart and come back online. Fault Tolerance means that multiple components can fail with no loss of data and no interruption of application availability.

To take HA/FT to a higher level, we can turn to one of several products available on the market. Companies like Vision Solutions (Double-Take) provide software that allows you to create a standby server. More sophisticated products from VMware and Stratus allow you to mirror applications and data on identical servers using a concept known as lock-step. Lock-step means that applications and data are processed in real time across two hosts. With these products, multiple components or an entire server can fail and your applications continue to be available to users.

With Double-Take software from Vision Solutions, IT staff can create a primary and standby server pair that replicates all of your data to the standby server in real time. This is a sufficient solution for most small to medium enterprises. However, if the primary server fails, there is still a brief interruption in application availability while the failover to the standby server occurs. In special situations that require the highest levels of High Availability and Fault Tolerance, we turn to solutions from VMware or Stratus. These provide a scenario where multiple components can fail on multiple servers and your application will continue to run.

Determining which approach is right for you is really an economic decision based on the cost of downtime. If you can’t put a dollar value on what it costs your business per hour or per day when a critical application is unavailable, then that application probably isn’t sufficiently critical for you to spend a lot of money on an HA/FT solution. If you do know what that cost is, then, just like buying any other kind of business insurance, you can make a business decision as to how much money you can justify spending to protect against that risk of loss.

How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times the number of megabytes in your storage. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records. If that limit is exceeded, the oldest records in the recycle bin get deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to get the purged records back using the Data Loader tool, but the window of time is very brief. Specifically how long you have is not well documented, but research indicates it’s around 24 – 48 hours.

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
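
If you do go the manual-export route, a little scripting can at least take some of the pain out of isolating the records you want to put back. Here is a minimal sketch in Python that copies selected rows from an exported Accounts .CSV into a smaller file you could then re-import; the file names and record Ids are placeholders, not real values.

```python
# Minimal sketch: pull a handful of records out of a full Salesforce CSV
# export so that only those rows are re-imported. File names and record
# Ids below are placeholders for illustration.
import csv

EXPORT_FILE = "Account.csv"                  # full export from Salesforce.com
RESTORE_FILE = "accounts_to_restore.csv"     # trimmed file to re-import
WANTED_IDS = {"0015000000abcde", "0015000000fghij"}  # Ids of deleted records

with open(EXPORT_FILE, newline="", encoding="utf-8") as src, \
     open(RESTORE_FILE, "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Id"] in WANTED_IDS:          # keep only the records to restore
            writer.writerow(row)
```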

The other alternative is to use a third-party backup service, of which there are several, to back up your Salesforce.com data. A third-party tool offers several advantages: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at VirtualQube are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact sales@virtualqube.com, or just click the “Request a Quote” button on this page.

Scott’s Book Arrived!

We are pleased to announce that Scott’s books have arrived! ‘The Business Owner’s Essential Guide to I.T.’ is 217 pages packed with pertinent information.

For those of you who pre-purchased your books, thank you! Your books have already been signed and shipped; you should receive them shortly, and we hope you enjoy them as much as Scott enjoyed writing them for you.

If you haven’t purchased your copy, click here to purchase a signed copy from us; all proceeds will be donated to the WA chapter of Mothers Against Drunk Driving (MADD).