Tag Archives: UPS

Wednesday data center tidbits: no power backup in the #datacenter?


First up today is the idea of building a data center with no power backup at all. This is about as boneheaded a thing as I’ve ever seen. Does it REALLY pay to not only duplicate but also run extra infrastructure, just to save a bit in equipment costs by letting a data center fail? What about the cost of restoring all the downed equipment? Or the damage to equipment from wild power fluctuations that a battery-backed system (such as the 48V DC power plant in our data center) would absorb? Squeaky red noses to Yahoo on this one.

Next up is a piece about improving data center airflow. What caught my eye was this, “…flowing air from the CRAC unit, through specialized floor tiles and into the server fans…”. Attempting to tout cooling efficiency with a hopelessly obsolete raised floor is an automatic FAIL.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Wednesday data center additional, Peer 1 tech injured in electrical accident


I’ve written a good deal about avoiding risky manual “maintenance” procedures on open, live electrical equipment in the data center. Now we have another good reason to avoid it: the current story of a Peer 1 tech being injured during UPS maintenance. Injury to the person working inside the equipment from a short is a pretty clear indicator that adequate arc flash protection wasn’t being used. Arc flash from a short (usually the result of a tool where it shouldn’t be) can burn flesh, burn clothes, blind eyes, or eject metal shrapnel. So now you have the perfect storm: a service outage, damaged equipment, an injured employee, and your favorite government labor safety organization (OSHA here in the US) all up in your grill and ready to kick butt, all for a risky maintenance procedure done on energized equipment.

Unless you enjoy the pain of this kind of fallout, stop messing around inside energized power equipment in the data center!

Vern, SwiftWater Telecom

Green data centers and the redundancy follies.


This morning I was reading about green data center tips from UPS. The ideas are pretty much valid, if stock, but how did they get the data center so far UN-green in the first place?

Improving air flow is always a good bet for the data center. It’s something that I think has to be an ongoing process as well. A lot of data centers that might have started out well have had so much added to them as they’ve aged that an originally efficient design can devolve into chaos. Remember, changes have to work WITH the data center, not against it.

Lighting is another low-hanging fruit, especially when the existing lighting has been in place for some time (almost 15 years in this case). Super-efficient LED lighting (integrated with a data-center-wide green DC power distribution system) can show up the old lighting for the serious energy hog it really is.

The thing that really floored me was the statement that they shut down 28 of their 65 air handlers with no impact on the data center. That means 37 units were enough, so they had been operating almost DOUBLE the cooling capacity the data center actually required. This was excused as overbuilding for redundancy. There’s a difference between prudent redundancy and crazy.

The redundancy in this data center was apparently built as 2N (where N is the capacity required to operate) instead of N+1. 2N redundancy completely duplicates the operating infrastructure. If you need 2 generators for backup, you buy 4 generators. If you need 30 air handlers, you buy 60.

N+1 redundancy recognizes that only a small part of any infrastructure system is likely to fail at once. The odds are vanishingly small that 30 air handlers will all fail at the same time. For N+1, you weigh the odds of more than 1 unit failing, the odds of another unit failing before the first failure can be repaired, and the potential impact.

If there’s a small chance of 2 air handlers failing at the same time, and the time to repair is short enough that 2 failing wouldn’t significantly affect the data center before restoration, go with 1 extra unit (N+1). Higher odds of multiple failures, longer repair times, or a larger impact may make N+2, N+3, etc. more appropriate. I can’t think of any modular system where 2N wouldn’t be a complete waste.
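You can put rough numbers on this trade-off. The sketch below (my own illustration, not figures from the article) assumes independent unit failures with a made-up 1% chance per unit of failing within one repair window, and uses the binomial distribution to ask: what are the odds that more units fail than you have spares?

```python
from math import comb

def prob_spares_exceeded(needed, spares, p_fail):
    """Chance that more than `spares` of the installed units fail within
    one repair window, leaving fewer than `needed` units running.
    Assumes independent failures, each with probability p_fail."""
    total = needed + spares
    # P(0 through `spares` failures) covers us; anything beyond is an outage risk.
    p_covered = sum(
        comb(total, k) * p_fail**k * (1 - p_fail)**(total - k)
        for k in range(spares + 1)
    )
    return 1 - p_covered

# Hypothetical: 30 air handlers needed, 1% per-unit failure chance per window.
for spares in (1, 2, 30):  # N+1, N+2, 2N
    print(f"N+{spares}: {prob_spares_exceeded(30, spares, 0.01):.2e}")
```

Even with these generous failure odds, one or two spares already drive the risk down to where repair time dominates; the 30 extra units of 2N buy essentially nothing beyond that.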

Setting aside the massive wasted capital expenditure for all the unnecessarily duplicated equipment, plus the consequences of the embodied carbon in it, the worst part by far is that they were RUNNING all of this duplicate gear. It was sitting there sucking up major amounts of power for no good reason at all.

If you’re looking at a system where absolutely no interruption can be tolerated, such as power, you might run N+2 with one spare hot and the other cold. In the case of air handlers, the minor delay in starting a spare after a failure has no impact at all. Running 30 extra air handlers just because they’re there is a terrible waste of money, not to mention carbon pollution.

Building for redundancy in the data center is definitely a good idea. Just do it sensibly and you won’t have to undo it later in the name of greening the data center.

Vern, SwiftWater Telecom