Tag Archives: IBM

Friday data center tidbits: what does Google Wave say about cloud computing? Nothing.


First up today is a piece trying to make a connection between the demise of Google Wave and cloud computing. One of the strengths of cloud computing is that it ISN’T so different that it can’t integrate with existing data center facilities. You think you’re having trouble getting people to adopt cloud computing now? Try telling them they have to do a forklift upgrade of the data center and that it’s a one-way jump.

Next up is the piece about IBM defending its performance on its data center contract with the State of Texas. This is the same bunch of dorks that allowed a storage array they KNEW was failing to go down and impact the Texas elections system just prior to the last election. Shut up, do it right, or don’t do it at all.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Wednesday data center tidbits.


Today I was reading about ServerBeach offering free body piercings with every new server order. Everyone lined up to get theirs?

Next up is the piece about Verizon and IBM teaming up for secure data storage services. First, if the storage was any good at all, why would it need IBM to monitor it every day to make sure the data’s still there? And this was the bunch of doofs that blew Texas out of the water in 2009 by failing to back up a storage array that they knew was iffy. I don’t think I’d depend on them to monitor a copy of yesterday’s National Enquirer.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services (and you don’t have to get punctured to get it)!

Vern


Tuesday data center tidbits.


Today we have the strange revelation that, while Facebook is touting the use of renewable power resources in its Oregon data center, the utility supplying it is actually mostly coal fired. Cheap? Probably. Renewable? Not unless you’ve got millions of years. Green? Not a chance. Greenwash? Sounds like it to me.

Next up is the new IBM data center exhibit at Disney. Not that I haven’t been in a few data centers that were akin to a thrill ride (or a house of horrors), but I can’t imagine this being a big attraction (now festoon those IBM cabinets with iPads and you might have an attraction!).

Vern


Cloud computing not vaporizing anytime soon.


I just read a provocative piece about why cloud computing will vaporize. I think the oracle is out to lunch on this one.

First of all, we get an apples-to-oranges comparison to a technology that couldn’t deliver on its hype. Overpriced and not solving any problem is certainly a recipe for failure, that’s for sure.

On the other hand, we find that cloud computing is pretty much delivering on everything promised. Energy saving (making it a critical component of any green data center plan), cost saving, highly flexible, easy to deploy virtual servers on, highly reliable (if done right), good for disaster recovery and avoidance: there’s a ton of benefits to cloud computing, all being proven in real action.

The next assertion is that you can’t build a sustainable business selling capacity unless you have a “distinct advantage”. I’m sure all the data center co-location, hosting, and dedicated server providers out there are surprised to know this. The infrastructure isn’t just going to appear magically, and most of the people using it are NOT going to be building their own. I don’t believe the market for cloud capacity is just going to settle out on the 900 lb gorillas of the giant corporations, any more than the traditional data center market did. There’s certainly plenty of little guys making it out there.

In general, the largest cloud computing providers are the easiest to differentiate against. Amazon, Google, Microsoft, Rackspace, IBM, and others all had repeated major self-inflicted cloud outages in 2009, with major impact to their customers and just awful customer service responses. The easiest differentiator is doing it well when the competition is screwing it up royally. It’s actually easy to remove the big boys’ differentiators, since I can easily build my own cloud with Amazon’s EC2/S3 interface (Eucalyptus) or Google’s App Engine interface (AppScale).
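That last point is worth a quick illustration: the EC2 Query API is plain HTTP, which is why projects like Eucalyptus can present the same interface from a private cloud. A minimal sketch, assuming made-up endpoint URLs and omitting request signing for brevity (the API version string is also just illustrative):

```python
from urllib.parse import urlencode

def ec2_query_url(endpoint, action, params=None):
    """Build an EC2 Query API style request URL.

    The point: the query parameters are identical no matter which
    cloud answers, so swapping providers is just swapping endpoints.
    (Real requests also carry an auth signature, omitted here.)
    """
    query = {"Action": action, "Version": "2009-11-30"}
    query.update(params or {})
    return f"{endpoint}?{urlencode(sorted(query.items()))}"

# Same call, two clouds: only the endpoint differs.
aws = ec2_query_url("https://ec2.amazonaws.com", "DescribeInstances")
# hypothetical private Eucalyptus front end
private = ec2_query_url("https://cloud.example.com:8773/services/Eucalyptus",
                        "DescribeInstances")
```

The query string after the `?` comes out identical for both, which is exactly what makes the big providers’ interfaces a weak differentiator.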

As for there being a “zillion” cloud computing providers out there, I’ll believe it when I see proof of the number. Ones who do it well will survive, ones who do it poorly will not, just the way it’s always been. I don’t believe that only the people who build the apps will survive. I do believe you have to educate people on how to apply cloud computing, and that’s what I do with our cloud.

In light of the incredible demand, I’m not giving up on our cloud anytime soon. I think the original author needs something besides goat entrails before making his next prediction.

Vern


The reliable data center, save your customers from themselves.


Tonight I’ve been reading about why running IT as a business is a train wreck. The focus of the article is corporate IT, but I think similar consequences apply to data center providers too.

The idea of the article is the current thinking that corporate IT departments are vendors, with the rest of the corporation as their customers. Of course, we’ve all heard that the customer is always right. The problem is when you have customers who don’t know what they’re doing speccing the things they want. How can you convince them they need a $1000 server when they see $200 machines in WalMart, or when everyone becomes an expert network engineer because they set up their home Internet connection? The end result is that IT departments end up as nothing but grunts, producing substandard results.

Recently, I’ve been seeing this happen with data center providers and data center and networking projects. It seems like everyone wants to just bid to an RFP without going to the trouble of explaining to the end customer that they’re asking for something that is going to put them at high risk for trouble. The examples I can think of are IBM and their massive black eye from the Texas state data center consolidation project, because they didn’t bother to back up what they knew was a risky piece of storage equipment, and Northrop Grumman’s fiasco in Virginia, resulting in 4700 hours of network outages in state offices in 6 months because they implemented an RFP that didn’t call for any network connection backups.

We’re not just sitting here taking orders for Big Macs, we’re the people with the expertise to do it right. If we accept and implement what we know is a bozo request without trying to correct it with the customer, that makes us bozos too. It doesn’t matter that we performed to contract, there’s enough stuff flying around, we’re all getting covered with it.

If you can’t change the course of things and it’s that risky, it’s better to let the customer walk than to get embroiled in a data center nightmare of angry customers and awful PR.

Give the customer the benefit of your expertise as a data center or networking professional, be prepared to educate them, and don’t let them walk off a cliff just because the “customer is always right”. You’ll end up with happy customers, successful projects, and a great reputation, and your competition will take all the nightmares off your hands.

Vern

Wednesday data center tidbits.


I was just reading about the revision of the State of Texas data center contract with IBM. What struck me about this was the statement that state departments had complained about IBM’s performance 800 times in only 2 years. This wasn’t just an isolated incident, this was about as far wrong as something could go. Yikes!

Next up is the story about making hypervisors trustworthy. I don’t think I’ve ever seen any piece of writing so totally fact free. A quick check of the product shows that it’s a configuration management tool, not a fix for hypervisors full of bandaids and security problems as the original article states. To borrow from South Park, I call shenanigans!

Vern, SwiftWater Telecom

The data center in review, top 10 bozos of the year, 2009!


My coined term from my data center analysis and commentary this year: bozosity. Bozosity is a condition brought on by an imbalance of invisible particles known as bozons. This condition causes otherwise competent and sensible people to do incomprehensibly boneheaded things.

The Winners of the 2009 Data Center Bozo Awards are:

1. Microsoft and Danger for the T-Mobile Sidekick data loss debacle. MicroDanger did not win for operating critical storage systems without backups, but for their handling of the aftermath. MicroDanger immediately announced that all data was lost; by the time they did recover most of it, significant damage had been done to T-Mobile and the Sidekick, leaving everyone involved with a reputation for incompetence.

2. Fisher Plaza for knocking out major Internet services by blowing up an antiquated, obsolete, and improperly maintained electrical system in their data center building. Aluminum bus bars are evil, k?

3. IBM for blowing Air New Zealand out of the water by performing power work during the peak load period of one of Air New Zealand’s busiest travel weekends, unnecessarily running ANZ’s mainframe from a fallible generator alone, and taking an inordinate amount of time to restore service.

4. IBM for allowing a state of Texas elections commission storage system in their care to fail because it wasn’t in the contract to back it up.

5. Google for their brilliant example of cascading failure by sequentially overloading every router feeding their Gmail service.

6. Research in Motion for seeing how many BlackBerry back end failures they could squeeze in before the end of the year.

7. Amazon, Rackspace, Google, and a number of other providers who managed to blacken the term cloud computing with multiple reliability problems, most of which were self-inflicted. Thanks a heap, guys.

8. DreamHost for giving us a shining example of how NOT to do a major data center migration.

9. The people who operate Sweden’s top level DNS domain for turning loose an untested script and taking the entire thing down. Who knew a few missing dots could be so much trouble?

10. The Perth iX data center in Western Australia for allowing a smoldering mulch bed outside the building to shut down the entire data center, because they couldn’t locate the minuscule amount of smoke that was infiltrating the building and setting off an overly sensitive detection system.

Finally, I’d like to add a “dishonorable mention” award to FedEx for turning overnight delivery of a critical part into 3 days and nearly sticking me with working in the data center overnight Christmas Eve.
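The cascading pattern in item 5, where one overloaded router drops out and its load topples the rest, is simple enough to sketch. A minimal simulation (the capacities and load are made-up numbers, purely for illustration, not Google’s actual figures):

```python
def cascade(capacities, total_load):
    """Simulate cascading overload: load is split evenly across the
    surviving routers; any router pushed past its capacity fails and
    its share is redistributed to the rest, which can fail in turn."""
    alive = dict(enumerate(capacities))
    failed = []
    while alive:
        share = total_load / len(alive)
        over = [r for r, cap in alive.items() if share > cap]
        if not over:
            break  # the survivors can carry the load
        for r in over:
            failed.append(r)
            del alive[r]
    return failed, sorted(alive)

# Five routers with headroom as a group, but one weak router fails
# first and the redistributed load takes down all the rest.
failed, alive = cascade([30, 30, 30, 30, 20], total_load=125)
```

With these numbers, the 20-unit router fails first and the remaining four each get handed 31.25 units against a capacity of 30, so the whole set goes down. The lesson is the same one from the award: capacity planning has to survive the first failure, not just the steady state.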

Looks like we survived the year but it sure wasn’t pretty.

Vern, SwiftWater Telecom

Monday data center tidbits.


First item up is the story that Virginia has had massive service failures in state departments because their $2.3B outsourcing deal with Northrop Grumman neglected to require network redundancy. What is it with providers these days just happily letting customers walk off the cliff? 12 outages and 100 hours of downtime in 5 WEEKS! It’s like IBM and Texas where IBM knew a piece of equipment was problematic and allowed it to fail without backup because backup wasn’t specified. Go the extra mile and be the hero, people!

Worst capacity planning award: eBay kills its search functionality for most of the day Saturday. Gee, people are going to do more selling coming up to Christmas, who’d have thought?

Vern, SwiftWater Telecom


Tuesday data center tidbits.


This morning I was reading about IBM working on chip level water cooling where the water actually flows through channels in the silicon. I can just see trying to diagnose leaks INSIDE the chip body (“Hello, Roto Rooter?”)

I just happened across a poll asking if anyone had had failures in their data center from corrosion and if they were using a corrosive gas monitoring system. If you have enough corrosive gas in your data center to cause equipment failures, you’d better be sending your techs in in hazmat suits.

Vern, SwiftWater Telecom


Is your data center the hero or the goat?


The guys from IBM can’t seem to catch a break. First it was a spectacular crash and burn in New Zealand, now it’s the State of Texas dropping IBM.

13 days of outage for a government agency of any sort is painful enough. IBM however made it worse by blaming the outage and data loss on an old piece of SAN equipment that they took over from Texas as part of their contract to upgrade and replace the systems in question. They knew the equipment was suspect and simply didn’t bother to put in place even a temporary backup until it was replaced.

Self-inflicted catastrophes are the worst. It doesn’t matter whose equipment it was originally; once you take that kind of responsibility for it, you take the lumps when it cracks up. I’m sure Texas wasn’t paying for temporary backup space for this thing, but providing it anyway would have been an easy way to look like a hero, and far cheaper than what they lost. Instead they end up as the goat, not to mention the credibility whack from knowing it was an extreme risk and allowing it to happen.

How much is it worth to your data center to be the hero and not the goat?

Vern, SwiftWater Telecom
