Category Archives: Storage

What’s New in DataCore SANsymphony-V 10 PSP2

A few weeks ago, DataCore released Product Service Pack 2 (PSP2) for SANsymphony-V 10. As you know if you’ve followed this blog, DataCore is a leader in Software Defined Storage…in fact, DataCore was doing Software Defined Storage before it was called Software Defined Storage. PSP2 brings some great enhancements to the product in the areas of Cloud Integration, Platform Services, and Performance Management:

  • OpenStack Integration - SANsymphony-V now integrates with Cinder, the OpenStack block storage service, so DataCore storage can be provisioned right from the OpenStack Horizon administrative interface. That means that from a single administrative interface, you can create a new VM on your OpenStack infrastructure, then provision and attach DataCore storage to that VM. But that’s not all - remember, SANsymphony-V can manage more than just the local storage on the DataCore nodes themselves. Because SANsymphony-V runs on a Windows Server OS, it can manage any storage that you can present to that Windows Server. Trying to figure out how to integrate your legacy SAN storage with your OpenStack deployment? Put SANsymphony-V in front of it, and let DataCore talk to OpenStack and provision the storage! (A short provisioning sketch appears after this list.)
  • Random Write Accelerator - A heavy transactional workload that generates a lot of random write operations can be problematic for magnetic storage, because you have to wait while the disk heads move to the right track, then wait some more while the disk rotates to bring the right block under the head. For truly random writes, the average latency is roughly the time it takes to move the heads halfway across the platters plus the time it takes the platters to make half a rotation (on a 7,200 RPM drive, the half rotation alone averages about 4.2 ms). One of the benefits of SSDs, of course, is that you don’t have to wait for any of that, because there are no spinning platters or heads to move. But SSDs are still pretty darned expensive compared to magnetic disks. When you enable random write acceleration on a volume, write requests are fulfilled immediately by writing the data at the current (or nearest available) head/platter position; a “garbage collection” process then goes back later, when things are not so busy, and reclaims the superseded (“dirty”) blocks at the old locations. This can deliver SSD-like write speed from spinning magnetic disks, and the only cost is that storage consumption is somewhat increased by the old data that the garbage collection process hasn’t gotten around to yet. (A simplified sketch of this redirect-on-write approach appears after this list.)
  • Flash Memory Optimizations - Read caching from PCIe flash cards has been improved in order to make better use of what can be a very costly resource.
  • Deduplication and Compression - Both can now be enabled as options at the storage pool level.
  • Veeam Backup Integration - When you use Veeam to back up a vSphere environment, as many organizations do, the Veeam software typically triggers vSphere snapshots, which are retained for however long it takes to back up the VM in question. This adds to the load on the VMware hosts and can slow down critical applications. With DataCore’s Veeam integration, DataCore snapshots are taken at the SAN level and used for the backup operation instead of VMware snapshots.
  • VDI Services - DataCore has added specific support for highly available, stateful VDI deployments across clustered Hyper-V server pairs.
  • Centralized console for multiple groups - If you have multiple SANsymphony-V server groups distributed among, e.g., multiple branch offices, you no longer have to connect to each group in turn to manage it. All server groups can be integrated into the same management UI, with delegated, role-based administration for multiple individuals.
  • Expanded Instrumentation - SANsymphony-V now has tighter integration with S.M.A.R.T. alerts generated by the underlying physical storage to allow disk problems to be addressed before they become serious.
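
To make the OpenStack item above a bit more concrete, here is a minimal sketch of what provisioning and attaching a Cinder volume looks like from Python using the openstacksdk client. The cloud name (“mycloud”), the volume type (“datacore”), and the server name (“app-vm”) are hypothetical placeholders, not names from the product - substitute the values from your own OpenStack deployment and Cinder configuration.

```python
import openstack

# Credentials are read from clouds.yaml; "mycloud" is a placeholder entry.
conn = openstack.connect(cloud="mycloud")

# Ask Cinder for a 100 GB volume from a (hypothetical) DataCore-backed volume type.
volume = conn.block_storage.create_volume(
    size=100,
    name="app-data",
    volume_type="datacore",
)
conn.block_storage.wait_for_status(volume, status="available")

# Attach the new volume to an existing instance via Nova.
server = conn.compute.find_server("app-vm")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```

These are essentially the same API calls Horizon makes behind the scenes when you create and attach a volume from the dashboard; the point of the PSP2 integration is that the volume can be carved out of any storage SANsymphony-V manages.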
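
The Random Write Accelerator item describes what is essentially a redirect-on-write (log-structured) technique. The short Python sketch below models the general idea - writes land at the next free position instead of seeking back to the block’s home location, and a deferred garbage-collection pass reclaims the superseded copies later. It is an illustration of the technique only, not DataCore’s actual implementation.

```python
class RedirectOnWriteStore:
    """Toy model of redirect-on-write with deferred garbage collection."""

    def __init__(self):
        self.log = []       # physical layout: data appended in arrival order
        self.index = {}     # logical block -> position of its newest copy
        self.stale = set()  # positions holding superseded ("dirty") copies

    def write(self, logical_block, data):
        # Fast path: no seek back to the block's home location;
        # just append at the current write position.
        if logical_block in self.index:
            self.stale.add(self.index[logical_block])  # old copy becomes garbage
        self.log.append((logical_block, data))
        self.index[logical_block] = len(self.log) - 1

    def read(self, logical_block):
        return self.log[self.index[logical_block]][1]

    def garbage_collect(self):
        # Run when the array is idle: drop superseded copies and compact,
        # reclaiming the extra capacity the fast write path consumed.
        compacted = []
        for pos, (block, data) in enumerate(self.log):
            if pos not in self.stale:
                self.index[block] = len(compacted)
                compacted.append((block, data))
        self.log, self.stale = compacted, set()


store = RedirectOnWriteStore()
store.write(7, b"v1")
store.write(7, b"v2")     # supersedes the first copy; no in-place update
assert store.read(7) == b"v2"
store.garbage_collect()   # reclaims the space held by b"v1"
```

The trade-off is exactly the one described above: between garbage-collection passes the store holds both the new and the old copies, so capacity consumption is temporarily higher in exchange for low write latency.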

For more details on PSP2 and the features listed above, you can view the following 38-minute recording of a recent Webinar we presented on the subject:

Seven Security Risks from Consumer-Grade File Sync Services

[The following is courtesy of Anchor - an eFolder company and a VirtualQube partner.]

Consumer-grade file sync solutions (referred to hereafter as “CGFS solutions” to conserve electrons) pose many challenges to businesses that care about control and visibility over company data. You may think that you have nothing to worry about in this area, but the odds are that if you have not provided your employees with an approved business-grade solution, you have multiple people using multiple file sync solutions that you don’t even know about. Here’s why that’s a problem:

  1. Data theft - Most of the problems with CGFS solutions stem from a lack of oversight. Business owners are not privy to when an instance is installed, and are unable to control which employee devices can or cannot sync with a corporate PC. Use of CGFS solutions can open the door to company data being synced (without approval) across personal devices. These personal devices, which accompany employees on public transit, at coffee shops, and with friends, exponentially increase the chance of data being stolen or shared with the wrong parties.
  2. Data loss - Because they lack visibility into the movement of files and file versions across endpoints, CGFS solutions improperly back up (or do not back up at all) files that were modified on an employee device. If an endpoint is compromised or lost, this lack of visibility can result in the inability to restore the most current version of a file…or any version, for that matter.
  3. Corrupted data - In a study by CERN, silent data corruption was observed in 1 out of every 1,500 files. While many businesses trust their cloud solution providers to make sure that stored data maintains its integrity year after year, most CGFS solutions don’t implement data integrity assurance systems to ensure that bit rot or corrupted data is replaced with a redundant copy of the original. (A simple example of such a check appears after this list.)
  4. Lawsuits - CGFS solutions give end users carte blanche to permanently delete and share files. This can result in the permanent loss of critical business documents, as well as the sharing of confidential information that can breach privacy agreements in place with clients and third parties.
  5. Compliance violations - Since CGFS solutions have loose (or non-existent) file retention and file access controls, you could be setting yourself up for a compliance violation. Many compliance policies require that files be held for a specific duration and only be accessed by certain people; in these cases, it is imperative to employ strict controls over how long files are kept and who can access them.
  6. Loss of accountability - Without detailed reports and alerts over system-level activity, CGFS solutions can result in loss of accountability over changes to user accounts, organizations, passwords, and other entities. If a malicious admin gains access to the system, hundreds of hours of configuration time can be undone if no alerting system is in place to notify other admins of these changes.
  7. Loss of file access history - Consumer-grade solutions don’t track which users and machines touched a file, or when. That can be a big problem if you’re trying to determine the events leading up to a file’s creation, modification, or deletion. Additionally, many solutions track only a small set of file events, which can result in a broken access trail if, for example, a file is renamed.
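
For illustration, here is a minimal sketch of the kind of integrity assurance described in item 3: record a checksum for each file when it is stored, then fall back to a redundant copy whenever the primary copy no longer matches. The file paths and layout are hypothetical, and a production system would of course verify continuously and at much larger scale.

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_and_repair(primary: Path, replica: Path, expected: str) -> bool:
    """Return True if the primary copy is intact or was repaired from the replica."""
    if sha256(primary) == expected:
        return True                       # no bit rot detected
    if sha256(replica) == expected:
        shutil.copy2(replica, primary)    # replace the corrupted primary copy
        return True
    return False                          # both copies damaged; escalate


# Hypothetical usage, where the checksum was recorded when the file was first stored:
# ok = verify_and_repair(Path("/data/report.xlsx"),
#                        Path("/replica/report.xlsx"),
#                        expected=recorded_checksum)
```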

Allowing employees to utilize CGFS solutions for company data can lead to massive data leaks and security breaches.

Many companies have formal policies against consumer file sync tools, or at least discourage employees from using personal accounts for company data. But while blacklisting common CGFS solutions may curtail the security risks in the short term, employees will ultimately find ways around company firewalls and restrictive policies that they feel interfere with their productivity.

The best way for a business to handle this is to deploy a company-approved application that allows IT to control the data, yet gives employees the access and functionality they feel they need to be productive.

DataCore Releases “State of Software Defined Storage” Report

Last week, we wrote about DataCore’s recent release of v10.0 of their flagship SANsymphony-V software-defined storage product. The features and functionality in the new release were, no doubt, driven in part by the research DataCore has done, as reflected in their recently released fourth annual survey of global IT professionals - conducted to identify current storage challenges and the forces that are driving demand for software-defined storage. Here are some of the highlights from that survey, which can be downloaded in its entirety from the DataCore Web site:

  • 41% of respondents said that a primary factor impeding organizations from considering different models and manufacturers of storage devices is the plethora of tools required to manage them.
  • 37% of respondents said that the difficulty of migrating between different models and generations of storage devices was also a major impediment.
  • 39% of respondents said that these issues were not a concern for them, because they were using independent storage virtualization software to pool storage devices of different models from different manufacturers and manage them centrally.
  • Despite all the talk about the “all-flash data center,” nearly two-thirds of respondents (63%) said that they currently have less than 10% of their capacity assigned to flash storage.
  • Nearly 40% of respondents said they were not planning to use flash or SSDs for server virtualization projects because of cost concerns.
  • 23% of respondents ranked performance degradation or the inability to meet performance expectations as the most serious obstacle when virtualizing server workloads; 32% ranked it as somewhat of an obstacle.
  • The highest-ranking reasons that organizations deployed storage virtualization software were the improvement of disaster recovery and business continuity (32%) and the ability to enable storage capacity expansion without disruption (30%).

DataCore is a pioneer and market leader in software-defined storage. Read more about DataCore and VirtualQube at www.VirtualQube.com/DataCore.

Is It Time to Upgrade Your DataCore SANsymphony-V?

A few months ago, DataCore released SANsymphony-V 10.0. If you’re running an earlier version of SANsymphony-V, there are several reasons why you might want to start planning your upgrade. There are some great new features in v10, and we’ll get to those in a moment, but you should also bear in mind that DataCore’s support policy covers the current full release (v10) and the release immediately before it (v9.x). Support for v8.x officially ends on December 31, 2014, and support for v7.x ended last June.

That doesn’t mean DataCore won’t help you if you have a problem with an earlier version. It does mean that their obligation is limited to “best effort” support, and does not extend to bug fixes, software updates, or root-cause analysis of issues you may run into. So, if you’re on anything earlier than v9.x, you really should talk to us about upgrading.

But even if you’re on v9.x, there are some good reasons why you may want to upgrade to 10.0:

  • Scalability has doubled from 16 to a maximum of 32 nodes.
  • Support for high-speed 40/56 GbE iSCSI, 16 Gb Fibre Channel, and iSCSI target NIC teaming.
  • Performance visualization/heat map tools to give you better insight into the behavior of flash and disk storage tiers.
  • New auto-tiering settings to optimize expensive resources like flash cards.
  • Intelligent disk rebalancing to dynamically redistribute the load across available disks within a storage tier.
  • Automated CPU load leveling and flash optimization.
  • Disk pool optimization and self-healing storage - disk contents are automatically restored across the remaining storage in the pool.
  • New self-tuning caching algorithms and optimizations for flash cards and SSDs.
  • Simple configuration wizards to rapidly set up different use cases.

And if that’s not enough, v10 now allows you to provision high-performance virtual SANs that can scale to more than 50 million IOPS and up to 32 petabytes of capacity across a cluster of 32 servers. Not sure whether a virtual SAN can deliver the performance you need? DataCore will give you a free virtual SAN for non-production evaluation use.

Check out this great overview of software-defined storage virtualization:

The Big Data Challenge from an SMB Perspective

Big data is a hot topic throughout the IT and business world. I’ve seen many new start-ups trying to capitalize on firms’ need to store, manage, and derive value from large databases containing both structured and unstructured data. Enormous storage capacity and even more processing power are required to play in this game, which is why the fastest-growing job descriptions now include some combination of “data” and “scientist.” Even Harvard Business Review has touted the role as “The Sexiest Job of the 21st Century.”

But in the SMB space, big data can mean something very different, with a slightly different set of challenges. Last week at the Exact Macola Evolve 2014 conference, Scott and I met a number of small business owners and managers who faced challenges with their data structures and file sizes. The president of one manufacturing company said to me, “My CFO has this spreadsheet full of pivot tables and graphs, and it’s now 100MB. We cannot e-mail it, and it takes forever to open. I don’t think I can virtualize our workstations if this is what is going on…” The short answer is that yes, that workload can be virtualized, and yes, that file can be restructured into smaller pieces so that it isn’t as much of a burden on system resources when it needs to be accessed, edited, saved, or sent around. The larger answer is that there is a way out of this predicament. By and large, this is what big data actually looks like to the SMB.

Moving large files is not a challenge exclusive to the SMB, but in the SMB the infrastructure to make those transfers easy generally isn’t there. To escape the clutches of e-mail transfers, some SMBs turn to inexpensive storage and sharing tools such as ShareFile or DropBox in order to collaborate. While these tools get the job done, some come with access hiccups while others are blatant security risks. And dividing data into smaller pieces that can be updated consistently is easier said than done, and easier planned for than retrofitted. Alternatively, SMBs can invest in higher-performance network hardware (think Ciena), but that carries a large price tag. And although this is a generalization, the percentage of businesses that are SMBs increases as you move further out of major metropolitan areas, where another complicating factor is lower-speed internet connectivity. Still, SMBs face these “big data” challenges, and figuring out how to deal with them can consume significant resources.

Over the years, VirtualQube has learned a thing or two about how to deal with these challenges. Here are some helpful tips when it comes to managing your SMB information:

  1. Use a collaboration tool to keep large files OUT of e-mail inboxes, ensuring the tool does not increase the security risks noted earlier
  2. Make reference files read-only and store them in a protected area (e.g., a protected network file server) to maintain their integrity
  3. Create separate files for each type of analysis
  4. Keep graphs in separate files, since they are graphically intensive and the program has to recalculate and re-render them on every edit or every open

In a future article, I will write about the impact of business intelligence being stored in employee inboxes instead of in the tools designed to store and harness that information. The impact is significant, and it is felt across organizations of all sizes, so stay tuned.