The guys from IBM can’t seem to catch a break. First it was a spectacular crash and burn in New Zealand, now it’s the State of Texas dropping IBM.
Thirteen days of outage for a government agency of any sort is painful enough. IBM, however, made it worse by blaming the outage and data loss on an old piece of SAN equipment it took over from Texas as part of its contract to upgrade and replace the systems in question. IBM knew the equipment was suspect and simply didn't bother to put even a temporary backup in place until it was replaced.
Self-inflicted catastrophes are the worst. It doesn't matter whose equipment it was originally; once you take on that kind of responsibility, you take the lumps when it cracks up. I'm sure Texas wasn't paying for temporary backup space for this thing, but providing it anyway would have been an easy way to look like a hero, and far cheaper than what was lost. Instead, IBM ends up as the goat, not to mention the credibility hit that comes from knowing the equipment was an extreme risk and letting it fail anyway.
How much is it worth to your data center to be the hero and not the goat?
Vern, SwiftWater Telecom