
Building out the data center the right way.


Tonight I’ve been reading an article about data center building trends. There are some very good points in it, and also some things that I think are very wrong. It also explains some things that have mystified me for some time.

Looking ahead 5 years for capacity planning isn’t a bad idea (as long as the data center is flexible enough to accommodate the changes that can happen in 5 years), but the whole decision on whether to build out data center infrastructure in advance hinges on the penalty for doing so. In short, there’s no penalty for building out passive infrastructure ahead of need and a big penalty for building out active infrastructure.

I’ve been mystified by the idea that data center PUE (power usage effectiveness) only gets good when a data center is full. Now I understand: this is based on the idea of a data center building out (and operating) 100% of its cooling infrastructure in advance. If you’re only running 20% of your 5 year forecasted server capacity but you have to run 100% of your 5 year forecasted cooling capacity because it’s a monolithic system that’s either on or off, of course your efficiency is going to stink!

The PUE numbers for that kind of arrangement would be pathetic. Of course, as you add servers against the same fixed cooling power usage, the PUE would gradually get better, but who in the world would really WANT to run something that way? (Remember last year’s story about UPS, the package carrier, cutting its data center power by switching off 30 of its 65 HVAC units!)
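To put some rough numbers on that, here’s a quick Python sketch. The loads are made up for illustration (the article didn’t give any); the point is how a fixed cooling overhead wrecks PUE at low fill:

```python
# Illustrative only: made-up numbers showing why PUE looks awful when
# 100% of the cooling plant runs against a partially built-out IT load.

def pue(it_load_kw, cooling_kw):
    """PUE = total facility power / IT equipment power."""
    return (it_load_kw + cooling_kw) / it_load_kw

full_it_load_kw = 1000.0       # hypothetical 5 year forecasted IT load
monolithic_cooling_kw = 400.0  # cooling sized for (and running at) full load

for fill in (0.2, 0.5, 1.0):
    print(f"{fill:4.0%} full: PUE = {pue(full_it_load_kw * fill, monolithic_cooling_kw):.2f}")

#  20% full: PUE = 3.00
#  50% full: PUE = 1.80
# 100% full: PUE = 1.40
```

At 20% fill you’re burning twice as much power on cooling as on actual computing, and the “improvement” as the hall fills up is nothing more than diluting a fixed waste.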

Leaving plenty of extra room for the transformer yard and the generator yard is a great idea (you can’t expand them once you get things built in around them). On the other hand, it would be silly to build out and power up a yard full of transformers that were sitting there doing nothing except chewing up power.

So, what sort of data center infrastructure can safely be built out far ahead of the actual need? The physical building is a bit of a no-brainer, as long as the active sections can be isolated from the idle ones (you don’t need to be cooling an entire hall that’s only 10% full).

Passive electrical is another good one. This means entrances, disconnects, breaker panels, distribution cabling, and transfer switches. No UPS and no DC power plants, unless you’re going to be able to shut them off and leave them off until you really need them.

Passive cooling infrastructure, such as ducts, is also safe. Take a lesson from UPS: do NOT build out double the HVAC you need and run it all!

Finally, build out the support structures for the transformer and generator yards: mounting pads, conduit, and cabling, so the equipment is all set to drop in, hook up, and go.

Don’t kill your data center’s efficiency in return for capacity 5 years from now.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern



Wednesday data center tidbits: no power backup in the #datacenter?


First up today is the idea of building a data center with no power backup at all. This is about as boneheaded a thing as I’ve ever seen. Does it REALLY pay to duplicate and run extra infrastructure elsewhere just so you can save a bit in equipment costs by letting a data center fail? What about the cost of restoring all the downed equipment? Or the damage to equipment from wild power fluctuations that a battery backed system (such as the 48V DC power plant in our data center) would absorb? Squeaky red noses to Yahoo on this one.

Next up is a piece about improving data center airflow. What caught my eye was this: “…flowing air from the CRAC unit, through specialized floor tiles and into the server fans…”. Attempting to tout cooling efficiency with a hopelessly obsolete raised floor is an automatic FAIL.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Extreme weather, #datacenter DC power, and #cloudcomputing.


Or, as the alternate title that comes to mind, “Mother Nature throws a hissy fit”. I’m going to talk in this post about how all of the above link together to affect data center and cloud computing reliability.

This year, it seems like the news is full of stories of violent weather around the country, and it only seems to be getting worse. Even areas that have traditionally been fairly stable weather-wise have been seeing massive storms with damaging winds and flooding rains. For the first time in the 6 years I’ve been in this location (not to mention most of my life in this state), we’ve had 2 major spring/early summer storms with winds in excess of hurricane force.

So, how does this relate to data center and cloud computing reliability? The last storm materialized in only 5 minutes, produced torrential downpours and 100 mph winds, and played havoc with the commercial AC power supply to the data center. I’m a great proponent of green DC power in the data center, so our power distribution is primarily DC, with good quality traditional protection on the AC feeds for the remaining equipment.

Unfortunately, the AC power excursions from the severe weather were wild enough that the classic power protection turned out to be inadequate. The cloud computing servers themselves, powered by DC as they are, survived just fine. Both the primary and backup storage systems, powered from the AC, did not.

After several days of cleaning up the mess and getting the cloud restored and back on line, there are a number of takeaways from this.

1. It’s hard to go overboard engineering your data center for extreme weather, whether it’s likely to happen or not.

2. Data center DC power is a LOT more resilient than even the best protected AC power. As a result, all required AC powered equipment is now fed from the DC power plant via inverters. This isn’t as green, of course, but it isolates the equipment much better from radical fluctuations in the data center’s AC supply.

3. In a cloud computing environment, make sure all the parts of the cloud have the same level of resiliency. There’s no point to keeping the front end alive when the back end goes down.

Finally, I’ve talked in a previous post about using DC power with a long run time battery string to shift heat load in the data center. A DC power system with a long run time is also great protection against this type of event. No matter how fast or unexpected the severe weather is, you can shut down the rectifiers in minutes, run from the batteries, and have the ultimate in isolation from AC power excursions.
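For the curious, the ride-through arithmetic is simple. Here’s a little Python sketch with hypothetical numbers (the battery capacity, load, and usable fraction are assumptions for illustration, not our plant’s actual figures):

```python
# Rough ride-through estimate for a 48VDC plant running on batteries with
# the rectifiers shut down. Real sizing also has to account for Peukert
# effect, temperature, and battery age.

def ride_through_hours(battery_ah, bus_voltage, load_watts, usable_fraction=0.8):
    """Hours of battery run time at a constant load."""
    usable_wh = battery_ah * bus_voltage * usable_fraction
    return usable_wh / load_watts

# e.g. a 2000Ah string on a 48V bus feeding a 10kW load:
print(f"{ride_through_hours(2000, 48.0, 10_000):.1f} hours")  # -> 7.7 hours
```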

Or, we could just write Mother Nature a prescription for Prozac.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Data center DC power, power backup runtime, and free air cooling: a greener shade of green.


I’ve written previously here about the green, energy saving benefits of DC power in the data center, the reliability follies of super short run time power backup, and, of course, the well recognized benefits of free air cooling. In this post, I’m going to discuss making the best green use of all three of these together in the data center.

The “classic” data center power backup system is the double conversion UPS. In this scenario, commercial AC power is rectified to DC for the backup batteries and then inverted back to AC to supply the data center equipment. This configuration actually has three points of efficiency loss: the rectifiers (AC to DC), the inverters (DC to AC), and the load power supplies (AC to DC again). The data center DC power plant does away with two of the three loss points by eliminating the DC to AC inverter and the AC to DC power supply in the load equipment.
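A quick sketch makes the difference plain. The stage efficiencies below are illustrative assumptions (real numbers vary by product and load); the point is how losses multiply through the chain:

```python
# Chained conversion efficiencies: classic double conversion UPS vs. a DC
# power plant. Stage efficiencies are illustrative assumptions, not
# measurements of any particular product.

from math import prod

double_conversion_ups = [0.95, 0.94, 0.92]  # rectifier, inverter, server AC PSU
dc_plant = [0.95]                           # rectifier only; loads take DC directly

for name, stages in [("double conversion UPS", double_conversion_ups),
                     ("48VDC plant", dc_plant)]:
    end_to_end = prod(stages)
    print(f"{name}: {end_to_end:.1%} end to end, {1 - end_to_end:.1%} lost")

# double conversion UPS: 82.2% end to end, 17.8% lost
# 48VDC plant: 95.0% end to end, 5.0% lost
```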

The second part of this equation is the backup power itself. The trend toward incredibly short run time backup power (such as flywheels with only 15 seconds of run time) is a foolish gamble that fallible generators will work perfectly every time. If there’s even a small issue that could otherwise be easily dealt with, there simply is no time to react, and the facility goes down hard.

The third part is the free air cooling. It really goes without saying that using cooler outside air for cooling is far more efficient than any type of traditional data center air cooling.

So, how do these three things tie together to make the data center even greener than any one of them separately? Many data centers use load shifting to save power costs (such as freezing water at night, when power is cheaper, to cool with during the day). My variation on that idea is a technique I call heat shifting.

My data center is equipped with an 800A 48VDC modular power plant (equipped N+1), a battery string capable of 8 hours of run time, and free air cooling. The idea is to simply pick the hottest part of the day (usually early afternoon) and remove heat load from the free air cooling by shutting down the rectifiers and running the facility from the battery string for 2 hours.

This shifts that part of the data center’s heat load to times when the free air cooling is operating more efficiently, giving the free air cooling the elbow room to support more productive equipment load. Additionally, you get the side effect of exercising the batteries regularly, avoiding the ills that can plague idle batteries, such as stratification and sulfation.
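Here’s the back-of-the-envelope arithmetic in Python. The facility load and efficiencies are assumptions for illustration, not measurements from my plant:

```python
# Rough numbers for the heat shifting scheme described above: an 800A/48VDC
# plant with an 8 hour battery string, run from batteries for the 2 hottest
# hours of the day.

plant_capacity_kw = 800 * 48 / 1000  # 38.4 kW maximum plant output
facility_load_kw = 20.0              # assumed DC load on the plant
discharge_hours = 2.0                # battery run during the hottest hours
rectifier_efficiency = 0.95          # assumed
charge_efficiency = 0.85             # assumed battery round trip efficiency

# Heat the rectifiers stop dumping into the room while they're off:
rectifier_heat_kw = facility_load_kw * (1 / rectifier_efficiency - 1)

# Energy pulled from the grid later (at cooler hours) to replace what the
# batteries delivered:
recharge_kwh = facility_load_kw * discharge_hours / charge_efficiency

print(f"Rectifier heat removed at peak: {rectifier_heat_kw:.2f} kW")
print(f"Energy shifted to cooler hours: {recharge_kwh:.1f} kWh")
print(f"Battery margin: {discharge_hours:g}h used of an 8h rated run time")
```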

As if there weren’t already enough great reasons to use green DC power, long run backup, and free air cooling in the data center, here’s another one.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Monday data center tidbits. #cloudcomputing rants, #datacenter DC power


First up is a rant about losing data in “the cloud” and how cloud computing is unreliable. I’ve got two pieces of news: banks were screwing up customers’ online accounts and transactions long before cloud computing came along, and there’s no indication (and it’s quite unlikely) that a bank is using any form of cloud computing for customer accounts and financial transactions. Blaming an obviously human error on “the cloud” and cloud computing is just silly cloudwashing.

Next up is a piece about DC power distribution in the data center. The idea of “standardizing” on 380VDC is a curious one. Presuming you’re going to float a battery string on the bus for backup, as is standard practice, and given that most batteries are built as multiples of 6 cells (12V nominal), you’d expect to want a voltage divisible by 12 (12/24/48VDC, etc.). Unless you’re planning to build your battery string from individual 2VDC cells, skip this turkey.
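To see the mismatch in numbers, here’s a trivial check (using the nominal 2V per cell and ignoring float voltage details for simplicity):

```python
# Quick check: how a DC bus voltage maps onto standard lead-acid building
# blocks (nominal 2V per cell, 6 cells per 12V monobloc).

for bus_v in (12, 24, 48, 380):
    cells = bus_v / 2      # nominal 2V per cell
    blocks = cells / 6     # 12V (6 cell) monoblocs
    note = "" if blocks == int(blocks) else "  <-- doesn't divide evenly"
    print(f"{bus_v:>3}VDC: {cells:g} cells = {blocks:g} x 12V blocks{note}")

#  12VDC: 6 cells = 1 x 12V blocks
#  24VDC: 12 cells = 2 x 12V blocks
#  48VDC: 24 cells = 4 x 12V blocks
# 380VDC: 190 cells = 31.6667 x 12V blocks  <-- doesn't divide evenly
```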

Email or call me or visit the SwiftWater Telecom web site for sensible green data center DC power engineering!

Vern


Monday data center tidbits.


The “duh” award for most obvious statement of the day comes from this article about the next steps for green IT:

“That capability could save energy because a computer that’s off, most experts agree, is more efficient than one that’s in sleep mode ..”

Um, DUH! I’m glad we had experts to reveal this.

From the same article we get duh #2: the idea that data center servers that accept high AC voltages such as 480VAC are more efficient. Until they make chips that run on 500V, the voltage STILL has to be stepped down somewhere. All you’re doing in this scenario is moving the inefficiency (and all the heat load) inside the server, the last place in the world that you want it.

If you want to get rid of the transformer penalties of AC, go DC in the data center and forget shuffling the penalty around.
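A tiny sketch of why this is a shuffle rather than a saving (the efficiency figure is an assumption; the point is where the loss turns into heat):

```python
# The step-down loss doesn't disappear with 480VAC input servers; it just
# relocates. The efficiency here is an illustrative assumption.

server_load_w = 500.0    # power the server electronics actually use
conversion_eff = 0.92    # assumed step-down/PSU efficiency either way

loss_w = server_load_w * (1 / conversion_eff - 1)

print(f"Step-down loss per server: {loss_w:.1f} W")
print("External transformer: that heat is dissipated outside the server.")
print("480VAC straight into the server: the same heat lands inside the")
print("chassis, exactly where the cooling problem already is.")
```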

Email or call me or visit the SwiftWater Telecom web site for green data center DC power services!

Vern


Lightning and the data center (GNAX goes boom).


Sounds like a bad sitcom, doesn’t it? Somehow I don’t think that’s anything close to what the folks affected by the lightning related GNAX data center outage are saying.

This is one area where the data center DC power plant really shines that doesn’t get talked about much. When the classic power protection technology of surge suppressors and lightning arrestors isn’t enough, the data center DC power plant helps isolate the server loads from bad things happening on the AC mains. Couple that with the ability of the DC power plant battery string to absorb spikes and improve power quality, and you have an optimal environment for your equipment that’s difficult to match (compare this to a modern “eco mode” AC UPS that ties your servers directly to the commercial AC power most of the time).

Add in the ability to easily replace DC power plant equipment without interrupting power to the server loads, and you can sleep easy the next time it rumbles outside.

Email or call me or visit the SwiftWater Telecom web site for green and reliable data center DC power engineering, construction, and operation.

Vern
