Jackass: data center edition


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I was just reading a piece about data center temperature levels, cooling, and the ASHRAE recommendations. It’s not the ASHRAE recommendations that are the problem here.

First of all, is anyone really still running servers that can’t handle the ASHRAE recommended maximum inlet temperature of 80 degrees for 20 minutes without failing the hardware? My oldest machines in service are older Rackable Systems 1U boxes with two dual-core Opterons and 2004 BIOS dates on the Tyan motherboards. These servers are not only perfectly happy at the ASHRAE maximum recommendation, they will run rock solid for months at inlet temperatures in the high 90s in free air cooling configurations.

The next thing is, a 30 degree inlet-to-outlet temperature differential is “good”? Really? My old Rackables with their highly congested 1U cases produce a 13 degree differential between inlet and outlet. If you have a server gaining 30 degrees at the outlet, you have a server with a severe problem in its internal airflow. Of course, restricting airflow will make the server fans work harder, driving up the amount of power they require.
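
If you want to keep an eye on this on your own boxes, here’s a minimal sketch of how I’d check it (Python, assuming ipmitool is installed and the BMC exposes sensors named “Inlet Temp” and “Exhaust Temp” — names vary by vendor, so adjust to match your hardware). It just flags an inlet over the 80 degree recommendation or an inlet-to-outlet rise over 30 degrees:

    #!/usr/bin/env python3
    """Rough check of inlet temperature and inlet-to-outlet differential.

    A minimal sketch, assuming ipmitool is available and the BMC reports
    sensors named "Inlet Temp" and "Exhaust Temp" (vendor-specific names
    are an assumption; adjust for your hardware).
    """

    import re
    import subprocess

    INLET_MAX_F = 80.0   # recommended maximum inlet, per the post
    DELTA_WARN_F = 30.0  # inlet-to-outlet rise suggesting restricted airflow

    def read_temps_f():
        """Return {sensor_name: temperature in F} from ipmitool sensor output."""
        out = subprocess.run(
            ["ipmitool", "sdr", "type", "temperature"],
            capture_output=True, text=True, check=True,
        ).stdout
        temps = {}
        for line in out.splitlines():
            # Typical line: "Inlet Temp | 04h | ok | 7.1 | 24 degrees C"
            m = re.match(r"^(?P<name>[^|]+)\|.*\|\s*(?P<val>[\d.]+)\s*degrees C", line)
            if m:
                celsius = float(m.group("val"))
                temps[m.group("name").strip()] = celsius * 9 / 5 + 32
        return temps

    def main():
        temps = read_temps_f()
        inlet = temps.get("Inlet Temp")     # sensor names are an assumption
        outlet = temps.get("Exhaust Temp")
        if inlet is not None and inlet > INLET_MAX_F:
            print(f"WARNING: inlet {inlet:.1f} F exceeds {INLET_MAX_F:.0f} F")
        if inlet is not None and outlet is not None and outlet - inlet > DELTA_WARN_F:
            print(f"WARNING: delta-T {outlet - inlet:.1f} F suggests restricted airflow")

    if __name__ == "__main__":
        main()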

So, the original poster pulled a truly jackass stunt:

1. He “accepted” and commissioned a new data center without properly testing the systems.

2. He ran live servers in a data center with backup power for the servers themselves but none for the cooling systems.

3. He allowed servers to run to failure in a completely defective environment. Of course, we’re never told what the actual inlet temperature was on the servers when they failed; it could have been far higher than the ASHRAE recommended maximum.

The problem here isn’t the inlet temperature recommendation; it’s a defective data center design combined with defective operations (did anyone do a maintenance plan before running that fire alarm test?).

I guess if you can’t figure out that backup power for the servers alone isn’t adequate when there’s no backup power for the cooling, then go ahead and unload lots of money keeping your data center temperature low enough to buy you time to fix a totally preventable goof.

As for the rest of us, we’ll avoid designing and operating in jackass mode.


4 responses to “Jackass: data center edition”

  1. Interesting piece.

    Many large enterprises do have a lot of old kit. NT4’s not uncommon. That said, the smart approach is surely to move to public cloud based compute power wherever possible. This will mean changing the way that applications are built, but the economics are compelling and the FUD that’s pushing private clouds will eventually evaporate – accelerated as success stories from the public cloud proliferate.

• That would be the most sensible approach, but I figured anyone who would rather unload tons of cash over-cooling the data center for fragile obsolete servers than update to something modern isn’t going to be swayed by the economics of cloud computing :).

      Vern

  2. You’ve seen your own datacenter, right? And you are passing judgement on real facilities?

• Well, let’s see. My cooling is on backup power, I have enough leeway to fix cooling issues before I’m in any danger of equipment failure, and I’ve never had a single piece of equipment of any sort fail from environmental issues. I guess that qualifies me to make the comment. By the way, do you feel qualified to pass judgement on me using fake user info and a fake email address?

      Vern
