Category Archives: Open Source

What’s New in DataCore SANsymphony-V 10 PSP2

A few weeks ago, DataCore released Product Service Pack 2 (PSP2) for SANsymphony-V 10. As you know if you’ve followed this blog, DataCore is a leader in Software Defined Storage…in fact, DataCore was doing Software Defined Storage before it was called Software Defined Storage. PSP2 brings some great enhancements to the product in the areas of Cloud Integration, Platform Services, and Performance Management:

  • OpenStack Integration - SANsymphony-V now integrates with Cinder, the block storage service OpenStack uses to provision storage, so DataCore volumes can be managed right from the OpenStack Horizon administrative interface. That means that from a single administrative interface, you can create a new VM on your OpenStack infrastructure, then provision and attach DataCore storage to that VM (there’s a quick command-line sketch just after this list). But that’s not all - remember, SANsymphony-V can manage more than just the local storage on the DataCore nodes themselves. Because SANsymphony-V runs on a Windows Server OS, it can manage any storage that you can present to that Windows Server. Trying to figure out how to integrate your legacy SAN storage with your OpenStack deployment? Put SANsymphony-V in front of it, and let DataCore talk to OpenStack and provision the storage!
  • Random Write Accelerator - A heavy transactional workload that generates a lot of random write operations can be problematic for magnetic storage, because you have to wait while the disk heads move to the right track, then wait some more while the disk rotates to bring the right block under the head. For truly random writes, the average latency works out to the time it takes to move the heads halfway across the platters plus the time it takes the platters to make half a rotation (there’s a quick back-of-the-envelope calculation after this list). One of the benefits of SSDs, of course, is that you don’t have to wait for any of that, because there are no spinning platters or heads to move. But SSDs are still pretty darned expensive compared to magnetic disks. With random write acceleration enabled on a volume, write requests are fulfilled immediately by writing the data at the current (or nearest available) head/platter position; a “garbage collection” process then goes back later, when things are not so busy, and reclaims the “dirty” blocks of data at the old locations. This can deliver SSD-like speed from spinning magnetic disks, and the only cost is that storage consumption will be somewhat increased by the old data that the garbage collection process hasn’t gotten around to yet.
  • Flash Memory Optimizations - Improvements have been made in how the cache reads from PCIe flash cards, in order to make better use of what can be a very costly resource.
  • Deduplication and Compression have been added as options that you can enable at the storage pool level.
  • Veeam Backup Integration - When you use Veeam to back up a vSphere environment, as many organizations do, the Veeam software typically triggers vSphere snapshots, which are retained for however long it takes to back up the VM in question. This adds to the load on the VMware hosts and can slow down critical applications. With DataCore’s Veeam integration, DataCore snapshots are taken at the SAN level and used for the backup operation instead of VMware snapshots.
  • VDI Services - DataCore has added specific support for highly available, stateful VDI deployments across clustered Hyper-V server pairs.
  • Centralized console for multiple groups - If you have multiple SANsymphony-V server groups distributed among, e.g., multiple branch offices, you no longer have to explicitly connect to each group in turn to manage it. All server groups can be integrated into the same management UI, with role-based administration that can be delegated to multiple individuals.
  • Expanded Instrumentation - SANsymphony-V now has tighter integration with S.M.A.R.T. alerts generated by the underlying physical storage to allow disk problems to be addressed before they become serious.
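
To give a rough idea of what the Cinder integration enables on the OpenStack side, here’s the sort of thing an administrator could type once a DataCore-backed volume type has been set up. This is just a sketch using generic OpenStack client commands; the volume type, VM, and volume names are made up for illustration and are not DataCore-specific syntax:

    # Create a 50 GB volume from a (hypothetical) DataCore-backed volume type,
    # then attach it to an existing OpenStack instance named "web01".
    cinder create --volume-type datacore-tier1 --display-name web01-data 50
    nova volume-attach web01 <volume-id> /dev/vdb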

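And to put a number on the “half the platters plus half a rotation” point above, here’s a quick back-of-the-envelope calculation. The drive figures (7,200 RPM spindle, roughly 8.5 ms average seek) are typical assumptions for illustration, not measurements of any particular disk:

    # Rough per-write latency for a truly random write on a spinning disk.
    # Assumed figures: 7,200 RPM spindle, ~8.5 ms average seek time.
    HALF_ROTATION_MS=$(echo "scale=2; 60000 / 7200 / 2" | bc)   # ~4.16 ms
    TOTAL_MS=$(echo "scale=2; $HALF_ROTATION_MS + 8.5" | bc)    # ~12.66 ms
    echo "Average wait per random write: ${TOTAL_MS} ms (roughly 80 writes/sec)"

An SSD services the same write in well under a millisecond, which is the gap the Random Write Accelerator aims to close with spinning disks.
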
For more details on PSP2 and the features listed above, you can view the following 38-minute recording of a recent Webinar we presented on the subject:

SuperGRUB to the Rescue!

This post requires two major disclaimers:

  1. I am not an engineer. I am a relatively technical sales & marketing guy. I have my own Small Business Server-based network at home, and I know enough about Microsoft Operating Systems to be able to muddle through most of what gets thrown at me. And, although I’ve done my share of friends-and-family-tech-support, you do not want me working on your critical business systems.
  2. I am not, by any stretch of the imagination, a Linux guru. However, I’ve come to appreciate the “LAMP” (Linux/Apache/MySQL/PHP) platform for Web hosting. With apologies to my Microsoft friends, there are some things that are quite easy to do on a LAMP platform that are not easy at all on a Windows Web server. (Just try, for example, to create a file called “.htaccess” on a Windows file system.)

Some months ago, I got my hands on an old Dell PowerEdge SC420. It happened to be a twin of the system I’m running SBS on, but didn’t have quite as much RAM or as much disk space. I decided to install CentOS v5.4 on it, turn it into a LAMP server, and move the four or five Web sites I was running on my Small Business Server over to my new LAMP server instead. I even found an open source utility called “ISP Config” that is a reasonable alternative - at least for my limited needs - to the Parallels Plesk control panel that most commercial Web hosts offer.

Things went along swimmingly until last weekend, when I noticed a strange, rhythmic clicking and beeping coming from my Web server. Everything seemed to be working - Web sites were all up - I logged on and didn’t see anything odd in the system log files (aside from the fact that a number of people out there seemed to be trying to use FTP to hack my administrative password). So I decided to restart the system, on the off chance that it would clear whatever error was occurring.

Those of you who are Linux gurus probably just did a double facepalm…because, in retrospect, I should have checked the health of my disk array before shutting down. The server didn’t have a hardware RAID controller, so I had built my system with a software RAID1 array - which several sources suggest is both safer and better performing than the “fake RAID” that’s built into the motherboard. Turns out that the first disk in my array (/dev/sda for those who know the lingo) had died, and for some reason, the system wouldn’t boot from the other drive.
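
For anyone who wants to avoid my mistake, checking the health of a Linux software RAID array before a reboot takes all of two commands. The names below assume the usual defaults on a setup like mine (first md array at /dev/md0); yours may differ:

    # Quick software RAID health check before rebooting.
    cat /proc/mdstat            # a healthy two-disk mirror shows [UU]; a failed member shows (F) and [U_]
    mdadm --detail /dev/md0     # per-disk state: "active sync" is good, "faulty" or "removed" is not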

This is the point where I did a double facepalm, and muttered a few choice words under my breath. Not that it was a tragedy - all that server did was host my Web sites, and my Web site data was backed up in a couple of places. So I wouldn’t have lost any data if I had rebuilt the server…just several hours of my life that I didn’t really have to spare. So I did what any of you would have done in my place - I started searching the Web.

The first advice I found suggested that I completely remove the bad drive from the system and connect the good drive as drive “0.” Tried it, no change. The next advice I found suggested that I boot the system from the Linux CD or DVD and try the “Linux rescue” function. That sounded like a good idea, so I tried it - but when the rescue utility examined my disk, it claimed that there were no Linux partitions present, despite evidence to the contrary: fdisk -l showed two Linux partitions on the disk, one of which was marked as a boot partition. The rescue utility still couldn’t detect them, and the system still wouldn’t boot.

I finally stumbled across a reference to something called “SuperGRUB.” “GRUB,” for those of you who know as much about Linux as I did before this happened to me, is the “GNU GRand Unified Bootloader,” from the GNU Project. It’s the bootloader that CentOS uses, and it was apparently missing from the disk I was trying to boot from. But that’s precisely the problem that SuperGRUB was designed to fix!

And fix it it did! I downloaded the SuperGRUB ISO, burned it to a CD, booted my Linux server from it, navigated through a quite intuitive menu structure, told it which partition I wanted to fix, and PRESTO! My disk was now bootable, and my Web server was back (albeit running on only one disk). But that could be fixed as well. I found a new 80 GB SATA drive (which was all the space I needed) on eBay for $25, installed it, cruised a couple of Linux forums to learn how to (1) use sfdisk to copy the partition structure of my existing disk to the new disk, and (2) use mdadm to add the new disk to my RAID1 array (both sketched below), and about 15 minutes later, my array was rebuilt and my Web server was healthy again.
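
For the record, the rebuild boiled down to something like this. The disk and array names are from my setup (surviving disk /dev/sda, new disk /dev/sdb, array /dev/md0); check yours with fdisk -l and /proc/mdstat before copying anything:

    # Copy the partition layout from the surviving disk to the replacement disk,
    # then add the new partition back into the RAID1 array and watch it resync.
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm /dev/md0 --add /dev/sdb1      # repeat for each md array / partition pair
    cat /proc/mdstat                    # resync progress shows up here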

There are two takeaways from this story:

First, the Internet is a wonderful thing, with amazing resources that can help even a neophyte like me find enough information to pull my ample backside out of the fire and get my system running again.

Second, all those folks out there whom we sometimes make fun of and accuse of not having a life are actually producing some amazing stuff. I don’t know the guys behind the SuperGRUB project. They may or may not be stereotypical geeks. I don’t know how many late hours were burned, nor how many Twinkies or Diet Cokes were consumed (if any) in the production of the SuperGRUB utility. I do know that it was magical, and saved me many hours of work, and for that, I am grateful. (I’d even ship them a case of Twinkies if I knew who to send it to.) If you ever find yourself in a similar situation, it may save your, um, bacon as well.