
How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough to have seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. Didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In every incarnation, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their operating systems to run them on and their software to manage them. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDNet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July 2008. Their Northern Virginia data center was affected by a lightning strike in July 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December 2009.
  • Gmail experienced a system-wide outage in August 2008, then was down again for over 1½ hours in September 2009.
  • The Microsoft/Danger outage in October 2009 caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January 2010, failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas-area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.

DataCore Lowers Prices on SANsymphony-V

Back at the end of January, DataCore announced the availability of a new product called SANsymphony-V. This product replaces SANmelody in their product line, and is the first step in the eventual convergence of SANmelody and SANsymphony into a single product with a common user interface.

Note: In case you’re not familiar with DataCore, they make software that will turn an off-the-shelf Windows server into an iSCSI SAN node (Fibre Channel is optional) with all the bells and whistles you would expect from a modern SAN product. You can read more about them on our DataCore page.

We’ve been playing with SANsymphony-V in our engineering lab, and our technical team is impressed with both the functionality and the new user interface - but that’s another post for another day. This post is focused on the packaging and pricing of SANsymphony-V, which in many cases can come in significantly below the old SANmelody pricing.

First, we need to recap the old SANmelody pricing model. SANmelody nodes were priced according to the maximum amount of raw capacity that node could manage. The full-featured HA/DR product could be licensed for 0.5 TB, 1 TB, 2 TB, 3 TB, 4 TB, 8 TB, 16 TB, or 32 TB. So, for example, if you wanted 4 TB of mirrored storage (two 4 TB nodes in an HA pair), you would purchase two 4 TB licenses. At MSRP, including 1 year of software maintenance, this would have cost you a total of $17,496. But what if you had another 2 TB of archival data that you wanted available, but didn’t necessarily need it mirrored between your two nodes? Then you would want 4 TB in one node, and 6 TB in the other node. However, since there was no 6 TB license, you’d have to buy an 8 TB license. Now your total cost is up to $21,246.
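To make that arithmetic concrete, here is a rough sketch (in Python) of the old per-node, round-up-to-the-next-tier model. The tier prices in it are not published list prices; the 4 TB and 8 TB figures are simply back-calculated from the MSRP totals quoted above, and the function names are my own.

```python
# A rough sketch of the old per-node SANmelody licensing arithmetic.
# The tier prices are NOT published list prices: the 4 TB and 8 TB values
# are back-calculated from the MSRP totals quoted above ($17,496 for two
# 4 TB licenses; $21,246 for a 4 TB + 8 TB pair), including one year of
# software maintenance. Treat them as illustrative only.

SANMELODY_TIERS_TB = [0.5, 1, 2, 3, 4, 8, 16, 32]

ASSUMED_TIER_PRICE = {          # hypothetical, derived from the totals above
    4: 17496 / 2,               # $8,748 per 4 TB license
    8: 21246 - (17496 / 2),     # $12,498 per 8 TB license
}

def sanmelody_tier(raw_tb_needed):
    """Return the smallest capacity tier that covers a node's raw storage."""
    for tier in SANMELODY_TIERS_TB:
        if raw_tb_needed <= tier:
            return tier
    raise ValueError("SANmelody topped out at 32 TB per node")

def sanmelody_cost(per_node_capacities_tb):
    """Sum the (assumed) license prices for a list of per-node capacities."""
    return sum(ASSUMED_TIER_PRICE[sanmelody_tier(tb)] for tb in per_node_capacities_tb)

print(sanmelody_cost([4, 4]))  # mirrored 4 TB pair                  -> 17496.0
print(sanmelody_cost([4, 6]))  # 4 TB + 6 TB (rounds up to an 8 TB)  -> 21246.0
```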

SANsymphony-V introduced the concept of separate node licenses and capacity licenses. The node license is based on the maximum amount of raw storage that can exist in the storage pool to which that node belongs. The increments are:

  • “VL1” - Up to 5 TB - includes 1 TB of capacity per node (more on this in a moment)
  • “VL2” - Up to 16 TB - includes 2 TB of capacity per node
  • “VL3” - Up to 100 TB - includes 8 TB of capacity per node
  • “VL4” - Up to 256 TB - includes 40 TB of capacity per node
  • “VL5” - More than 256 TB - includes 120 TB of capacity per node

In my example above, with 4 TB of mirrored storage and 2 TB of non-mirrored storage, there is a total of 10 TB of storage in the storage pool: (4 x 2) + 2 = 10. Therefore, each node needs a “VL2” node license, since the total storage in the pool is more than 5 TB but no more than 16 TB. We also need a total of 10 TB of capacity licensing. We’ve already got 4 TB, since 2 TB of capacity were included with each node license. So we need to buy an additional six 1 TB capacity licenses. At MSRP, this would cost a total of $14,850 - substantially less than the old SANmelody price.
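For comparison, here is a minimal sketch of the new node-plus-capacity logic as I’ve described it, using the VL1–VL5 table above. The function names are my own, not DataCore’s, and it deliberately leaves pricing out; it only works out which node license a pool needs and how many additional 1 TB capacity licenses to buy.

```python
import math

# A minimal sketch of the node-plus-capacity licensing logic described
# above. The tier table mirrors the VL1-VL5 list; the function names are
# mine, not DataCore's, and pricing is intentionally left out.

# (license level, pool ceiling in TB, capacity included per node in TB)
VL_TIERS = [
    ("VL1", 5, 1),
    ("VL2", 16, 2),
    ("VL3", 100, 8),
    ("VL4", 256, 40),
    ("VL5", float("inf"), 120),  # "more than 256 TB"
]

def node_license(total_pool_tb):
    """Pick the node license level based on total raw storage in the pool."""
    for level, ceiling, included_tb in VL_TIERS:
        if total_pool_tb <= ceiling:
            return level, included_tb
    raise AssertionError("unreachable: VL5 covers everything above 256 TB")

def licenses_needed(total_pool_tb, node_count):
    """Return (node license level, number of additional 1 TB capacity licenses)."""
    level, included_per_node = node_license(total_pool_tb)
    extra_tb = max(0, total_pool_tb - included_per_node * node_count)
    return level, math.ceil(extra_tb)

# The example above: 4 TB mirrored across two nodes plus 2 TB non-mirrored
# gives a (4 x 2) + 2 = 10 TB pool.
print(licenses_needed(10, node_count=2))  # -> ('VL2', 6)
```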

The cool thing is, once we have our two VL2 nodes and our 10 TB of total capacity licensing, DataCore doesn’t care how that capacity is allocated between the nodes. We can have 5 TB of mirrored storage, we can have 4 TB in one node and 6 TB in the other, we can have 3 TB in one node and 7 TB in the other. We can divide it up any way we want to.

If we now want to add asynchronous replication to a third SAN node that’s off-site (e.g., in our DR site), that SAN node is considered a separate “pool,” so its licensing would be based on how much capacity we need at our DR site. If we only cared about replicating 4 TB to our DR site, then the DR node would only need a VL1 node license and a total of 4 TB of capacity licensing (i.e., a VL1 license + three additional 1 TB capacity licenses, since 1 TB of capacity is included with the VL1 license).
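Using the same (hypothetical) helper from the sketch above, the off-site DR node is just its own one-node pool:

```python
# The DR site from the example: a single node replicating 4 TB.
print(licenses_needed(4, node_count=1))  # -> ('VL1', 3)
```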

At this point, no new SANmelody licenses are being sold - although, if you need to, you can still upgrade an existing SANmelody license to handle more storage. If you’re an existing SANmelody customer with current software maintenance, rest assured that you will be entitled to upgrade to SANsymphony-V as a benefit of your software maintenance coverage. However, there will not be a mechanism that allows for an easy in-place upgrade until sometime in Q3. In the meantime, an upgrade from SANmelody to SANsymphony-V would entail a complete rebuild from the ground up. (Which we would be delighted to do for you if you just can’t wait for the new features.)