Tag Archives: data center dc power

Building out the data center the right way.


Tonight I’ve been reading an article about data center building trends. There are some very good points in it, and also some things that I think are very wrong. It also explains some things that have mystified me for some time.

Looking ahead 5 years for capacity planning isn’t a bad idea (as long as the data center is flexible enough to accommodate the changes that can happen in 5 years), but the whole decision on whether to build out data center infrastructure in advance hinges on the penalty for doing so. In short, there’s no penalty for building out passive infrastructure and a big penalty for building out active infrastructure.

I’ve been mystified by the idea that data center PUE (power usage effectiveness) only gets good when a data center is full. Now I understand: this is based on the idea of a data center building out (and operating) 100% of its cooling infrastructure in advance. If you’re only running 20% of your 5 year forecasted server capacity but you have to run 100% of your 5 year forecasted cooling capacity because it’s a monolithic system that’s either on or off, of course your efficiency is going to stink!

The PUE numbers for that kind of arrangement would be pathetic. Of course, as you add servers with the same amount of cooling power usage, the PUE would gradually get better, but who in the world would really WANT to run something that way? (Reference last year’s story about UPS cutting their data center power by switching off 30 out of 65 HVAC units!)
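To put rough numbers on that mismatch, here’s a minimal sketch; the load and cooling figures are assumptions picked for illustration, not measurements from any facility:

# Illustrative PUE comparison: cooling built (and run) for the full 5 year
# forecast vs. cooling brought on line in step with the actual server load.
# All figures below are assumed round numbers for illustration only.

def pue(it_kw, cooling_kw, other_kw=50):
    """PUE = total facility power / IT equipment power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

forecast_it_kw = 1000     # 5 year forecast of server load
full_cooling_kw = 400     # cooling sized for the full forecast

for fill in (0.2, 0.5, 1.0):                       # fraction of forecasted servers installed
    it_kw = forecast_it_kw * fill
    monolithic = pue(it_kw, full_cooling_kw)        # all the cooling on, all the time
    scaled = pue(it_kw, full_cooling_kw * fill)     # cooling scaled with the load
    print(f"{fill:.0%} full: monolithic cooling PUE {monolithic:.2f}, scaled cooling PUE {scaled:.2f}")

At 20% full, the monolithic arrangement lands somewhere over 3, while cooling that scales with the load stays in respectable territory the whole way up.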

Leaving plenty of extra room for the transformer yard and the generator yard is a great idea (you can’t expand them once you get things built in around them). On the other hand, it would be silly to build out and power up a yard full of transformers that were sitting there doing nothing except chewing up power.

So, what sort of data center infrastructure things can safely be built out far ahead of the actual need? The physical building is a bit of a no-brainer, as long as the active sections can be isolated from the idle ones (you don’t need to be cooling an entire hall that’s only 10% full).

Passive electrical is another good one. This means entrances, disconnects, breaker panels, distribution cabling, and transfer switches. No UPS and no DC power plants, unless you’re going to be able to shut them off and leave them off until you really need them.

Passive cooling infrastructure such as ducts. Take a lesson from UPS, do NOT build out double the HVAC you need and run it all!

Finally, build out the support structures for the transformer and generator yards. Mounting pads, conduit, cabling, so the equipment is all set to drop, hook up, and go.

Don’t kill your data center’s efficiency in return for capacity 5 years from now.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Wednesday data center tidbits: no power backup in the #datacenter?


First up today is about the idea of building a data center with no power backup at all. This is about as boneheaded a thing as I’ve ever seen. Does it REALLY pay to duplicate and run extra infrastructure elsewhere just to save a bit in equipment costs by letting a data center fail? What about the cost of restoring all the downed equipment? Or the damage to equipment from wild power fluctuations that a battery-backed system (such as our 48V DC power plant in our data center) would absorb? Squeaky red noses to Yahoo on this one.

Next up is a piece about improving data center airflow. What caught my eye was this, “…flowing air from the CRAC unit, through specialized floor tiles and into the server fans…”. Attempting to tout cooling efficiency with a hopelessly obsolete raised floor is an automatic FAIL.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Extreme weather, #datacenter DC power, and #cloudcomputing.


Or, as the alternate title that comes to mind, “Mother Nature throws a hissy fit”. I’m going to talk in this post about how all of the above link together to affect data center and cloud computing reliability.

This year, it seems like the news is full of stories of violent weather around the country, and it only seems to be getting worse. Even areas that have traditionally had fairly stable weather have been seeing massive storms with damaging winds and flooding rains. For the first time in the 6 years I’ve been in this location (not to mention most of my life in this state), we’ve had 2 major spring/early summer storms with winds in excess of hurricane force.

So, how does this relate to data center and cloud computing reliability? The last storm materialized in only 5 minutes, produced torrential downpours and 100 mph winds, and caused large amounts of havoc with the commercial AC power supply to the data center. I’m a great proponent of green DC power in the data center, so the power distribution is primarily DC, with good quality traditional protection on the AC that feeds the rest of the equipment.

Unfortunately, the AC power excursion events from the severe weather were wild enough that the classic power protection turned out to be inadequate. The cloud computing servers themselves, powered by DC as they are, survived just fine. Both the primary and backup storage systems, powered from the AC, did not.

After several days of cleaning up the mess and getting the cloud restored and back on line, there are a number of takeaways from this.

1. It’s hard to go overboard engineering your data center for extreme weather, whether it’s likely to happen or not.

2. Data center DC power is a LOT more resilient than the best protected AC power. As a result, all required AC powered equipment is now on the DC power via inverters. This isn’t as green, of course, but it isolates the equipment much better from radical power fluctuations in the data center’s AC supply.

3. In a cloud computing environment, make sure all the parts of the cloud have the same level of resiliency. There’s no point to keeping the front end alive when the back end goes down.

Finally, I’ve talked in a previous post about using DC power with a long run battery string to shift heat load in the data center. A DC power system with a long run time is also great protection against this type of event. No matter how fast or unexpected the severe weather is, you can simply shut down the rectifiers within minutes, run from the batteries, and have the ultimate in isolation from AC power excursions.

Or, we could just write Mother Nature a prescription for Prozac.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Data center DC power, power backup runtime, and free air cooling: a greener shade of green.


I’ve written previously here about the green, energy saving benefits of DC power in the data center, the reliability follies of super short run time power backup, and, of course, the well recognized benefits of free air cooling. In this post, I’m going to discuss making the best green use of all three of these together in the data center.

The “classic” data center power backup system is the double conversion UPS. In this scenario, commercial AC power is rectified to DC for the backup batteries and then inverted back to AC to supply the data center equipment. This configuration actually has three points of efficiency loss: the rectifiers for AC to DC, the inverters for DC to AC, and the load power supplies for AC to DC again. The data center DC power plant does away with 2/3 of the efficiency loss by eliminating the DC to AC inverter and the AC to DC power supply in the load equipment.
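As a rough sketch of that arithmetic (the stage efficiencies below are assumed typical figures, not measurements of any particular product, and the DC path follows this post’s accounting of which conversions disappear):

# Back-of-the-envelope conversion chain comparison. Stage efficiencies are
# assumed typical figures, purely for illustration.

rectifier_eff  = 0.95   # AC -> DC (present in both designs)
inverter_eff   = 0.94   # DC -> AC (double conversion UPS only)
server_psu_eff = 0.92   # AC -> DC power supply in the load equipment

ups_chain = rectifier_eff * inverter_eff * server_psu_eff   # three conversions in series
dc_chain  = rectifier_eff                                   # one conversion, load fed DC

print(f"Double conversion UPS chain: {ups_chain:.1%} efficient")
print(f"DC power plant chain:        {dc_chain:.1%} efficient")

The losses multiply through the chain, which is why stacking three conversions in series hurts so much more than any one of them looks on paper.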

The second part of this equation is the backup power itself. The trend to incredibly short run time backup power (such as flywheels with only 15 seconds of run time) is a foolish gamble that fallible generators are going to work perfectly every time. If there’s even a small issue that could easily be dealt with, there simply is no time and the facility is going down hard.

The third part is the free air cooling. It really goes without saying that using cooler outside air for cooling is far more efficient than any type of traditional data center air cooling.

So, how do these three things tie together to make the data center even greener than any one of them separately? Many data centers use load shifting to save on power costs (such as freezing water at night, when power is cheaper, and using it to cool with during the day). I call my variation on this technique heat shifting.

My data center is equipped with an 800A 48VDC modular power plant configured N+1, a battery string capable of 8 hours of run time, and free air cooling. The idea is simply to pick the hottest part of the day (usually early afternoon) and remove heat load from the free air cooling by shutting down the rectifiers and running the facility from the battery string for 2 hours.

This shifts that part of the data center’s heat load to times when the free air cooling is operating more efficiently, giving the free air cooling elbow room to support more productive equipment load. Additionally, you get the side effect of exercising the batteries regularly, avoiding the ills that can plague idle batteries, such as stratification and sulfation.
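For a rough feel of what actually gets shifted, here’s a minimal sketch; the plant voltage, the 2 hour shift, and the rectifier shutdown come from the description above, while the load current and rectifier efficiency are assumed figures:

# Sketch of the heat shifting idea: with the rectifiers shut down during the
# hottest 2 hours, their conversion losses (heat) move out of the peak period
# and reappear later, when the plant recharges in cooler air.
# Load current and rectifier efficiency are assumed figures for illustration.

plant_voltage   = 48      # VDC plant
load_current_a  = 300     # assumed facility load on the 800A plant
rectifier_eff   = 0.95    # assumed rectifier efficiency
shift_hours     = 2       # hours run from the battery string at peak heat

load_kw = plant_voltage * load_current_a / 1000
rectifier_heat_kw = load_kw * (1 - rectifier_eff) / rectifier_eff

print(f"DC load on the plant:          {load_kw:.1f} kW")
print(f"Rectifier heat while on AC:    {rectifier_heat_kw:.2f} kW")
print(f"Heat shifted out of the peak:  {rectifier_heat_kw * shift_hours:.2f} kWh")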

As if there weren’t already enough great reasons to use green DC power, long run backup, and free air cooling in the data center, here’s another one.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Monday data center tidbits. #cloudcomputing rants, #datacenter DC power


First up is reading a rant about losing data in “the cloud” and how cloud computing is unreliable. I’ve got two pieces of news: banks have been screwing up customers’ online accounts and transactions since long before cloud computing came along, and there’s no indication (and it’s quite unlikely) that a bank is using any form of cloud computing for customer accounts and financial transactions. Blaming an obviously human error on “the cloud” and cloud computing is just silly cloudwashing.

Next up is reading about DC power distribution in the data center. The idea of “standardizing” on 380VDC for the voltage is a curious one. Presuming you’re going to float a battery string on this for backup, as is standard practice, and given that most batteries are built as multiples of 6 cells, you’d expect to want a voltage that works out to whole 6-cell blocks (12/24/48VDC, etc). Unless you’re planning to build your battery string from individual 2VDC cells, skip this turkey.
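A quick sanity check of the arithmetic (nominal 2V lead-acid cells and standard 6-cell blocks assumed; this is a sketch, not a battery sizing tool):

# How common DC bus voltages map onto standard lead-acid batteries: cells are
# nominally 2 V, and commercial blocks come in multiples of 6 cells (12 V).

for bus_v in (12, 24, 48, 380):
    cells = bus_v // 2                 # nominal 2 V per cell
    whole_blocks = cells % 6 == 0      # buildable from standard 6-cell blocks?
    note = "multiple of 6-cell blocks" if whole_blocks else "needs individual 2 V cells"
    print(f"{bus_v:>4} VDC: {cells:>3} cells -> {note}")

A 48VDC string is 24 cells, a tidy stack of standard blocks; a 380VDC string is 190 cells, which doesn’t divide into 6-cell blocks at all.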

Email or call me or visit the SwiftWater Telecom web site for sensible green data center DC power engineering!

Vern


Monday data center tidbits.


The “duh” award for most obvious statement of the day comes from this article about the next steps for green IT:

“That capability could save energy because a computer that’s off, most experts agree, is more efficient than one that’s in sleep mode ..”

Um, DUH! I’m glad we had experts to reveal this.

From the same article, we get duh #2: the idea that data center servers that will accept high AC voltages such as 480VAC are more efficient. Until they make chips that run on 500V, the voltage STILL has to be stepped down. All you’re doing in that scenario is moving the inefficiency (and all the heat load) inside the server, the last place in the world that you want it.

If you want to get rid of the transformer penalties of AC, go DC in the data center and forget shuffling the penalty around.

Email or call me or visit the SwiftWater Telecom web site for green data center DC power services!

Vern


Lightning and the data center (GNAX goes boom).


Sounds like a bad sitcom, doesn’t it? Somehow I doubt that’s anything close to what the folks affected by the lightning-related GNAX data center outage are saying.

This is one area where the data center DC power plant really shines that doesn’t get talked about much. When the classic power protection technology of surge suppressors and lightning arrestors isn’t enough, the data center DC power plant helps to isolate the server loads from bad things that happen on the AC main power. Couple that with the ability of the DC power plant battery string to absorb spikes and improve power quality and you have an optimal environment for your equipment that’s difficult to match (compare this to a modern “eco mode” AC UPS that ties your servers directly to the commercial AC power most of the time).

Add in the ability to easily replace DC power plant equipment without interrupting power to the server loads and you can sleep easy the next time it rumbles outside.

Email or call me or visit the SwiftWater Telecom web site for green and reliable data center DC power engineering, construction, and operation.

Vern


Friday data center tidbits.


First up today is yet another story about data center AC vs DC power efficiency, with the usual sprinkling of BS. Required cable size is based on current, and current is determined by the voltage for a given load, not by whether the power is AC or DC. Higher voltage means lower current, which means smaller conductors; AC vs DC isn’t a factor. The idea that the larger cables used in lower voltage DC power are more susceptible to arcing because of their size is ridiculous. Any well designed data center DC power plant will be just as safe as any AC power system.
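To illustrate with a quick sketch (the 10 kW load and unity power factor are assumptions picked for illustration, not from the article):

# Current, and therefore conductor size, for the same load at different
# distribution voltages. Single load and unity power factor assumed.

load_kw = 10  # assumed per-rack load

for label, volts in (("480 VAC", 480), ("380 VDC", 380), ("208 VAC", 208), ("48 VDC", 48)):
    amps = load_kw * 1000 / volts
    print(f"{label:>8}: {amps:6.1f} A to deliver {load_kw} kW")

The 48VDC feed needs the biggest conductors and the 480VAC feed the smallest, and nowhere in that arithmetic does AC vs DC appear.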

Second up is the story about Horizon Data Center Solutions’ new expansion. The tidbit here is not that they’re expanding, but that they’re measuring the facility in megawatts of power, not square feet of space. Physical space in the data center is becoming a very secondary measurement; it’s all about the power now.

Email or call me or visit the SwiftWater Telecom web site for green data center DC power engineering, construction, and operation that saves money without the hype.

Vern


Data center questions and answers.


In this post, I’m going to answer a few of the various questions I’ve seen show up on the blog in the last week.

Q. Is a data center required to have an EPO (emergency power off) button?

A. No. The EPO button is part of the National Electrical Code, Article 645, covering “information technology equipment” (ITE) rooms. ITE rooms as defined by 645 have a very specific set of characteristics, and computer rooms or data center space are not required to be constructed as ITE rooms.

Basically, 645 trades laxer rules in some areas for stricter rules in others. In exchange for such things as not requiring plenum rated cable under raised floors, the ITE room gets saddled with the EPO button. It’s a bad trade-off the first time someone leans on (or maliciously slaps) the EPO on the way through the door and shuts the entire data center down. Do yourself a favor: run the cables overhead, dispense with the raised floor, and kick the EPO to the curb.

Email me to find out how green DC data center power can also keep everyone safe without the need for an EPO!

Q. Are batteries a hazard in the data center?

A. Flooded cell batteries should never be in the same space as the data center IT equipment, due to the potential for acid spills and explosive hydrogen gas generation. “Sealed” types such as AGM and VRLA are fine in close proximity to IT equipment as long as their terminals are protected from shorts. Also note that AGM types are extra safe: since they have no free liquid electrolyte in them, they won’t leak acid even if you physically crack the case.

Call or email me or visit the SwiftWater Telecom web site for green data center and cloud computing services minus the hype.

Vern


More green data center DC power myths.


I’ve just been reading about the value of DC power in the data center. In this post, I’m going to correct some of the strange myths and egregious errors that are spread about DC power.

“But the downside is that DC power can require much larger wires to carry the current, thus creating power buildup and arcing that can endanger IT equipment and staff.”

Where do I start on this one? Power does not “build up” in the large conductors of DC power systems, any more than power “builds up” at your unused AC outlet. Not using the DC power doesn’t mean it’s going to build up, overflow, and arc. Mark this one as completely ridiculous.

“.. added that moderately high-voltage DC power poses some safety concerns, where the power can build up and arc.”

Once again, DC power doesn’t “build up”. Any high voltage distribution system that’s inadequately insulated has the potential to arc, whether it’s AC or DC. I’ve seen catastrophic failures in common 240VAC and 480VAC circuits that resulted in melted bus bars, holes burned in armored cable, and total destruction of equipment (look at the Fisher Plaza fire, which destroyed 4000A aluminum bus bars).

What does affect the tendency to arc is jacking the voltage up, and that’s just the same whether it’s AC or DC; there’s no great mystery to it. There’s a tendency to want to use high voltages to reduce wire size (higher voltage = less current = smaller wire). I myself much prefer the long-time industry standard 48VDC, which is touch safe and has an extremely small arc-over distance.

“”Four-hundred volts DC may be more dangerous than 400-volts AC,” he said.”

In a properly constructed DC power system, there’s no appreciable difference in safety between AC and DC, except for some possible difference in the effects of contacting the energized conductors. This point is really moot, since you do NOT want to be coming into contact with either live 400VDC or 400VAC. Either way, someone is getting hurt.

Is there any safety hazard in the DC power plant? Yes, the battery string (exactly the same hazard that a battery-equipped AC UPS has as well). Short the unfused battery leads or drop an uninsulated tool across the bus bars and you’re going to be missing some wire or a tool, completely. Batteries can put out 9,000+ amps into a short in a fraction of a second.
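Where does a number like that come from? The prospective short circuit current of a battery string is roughly the string voltage divided by its internal resistance; here’s a minimal sketch, with the 5 milliohm resistance an assumed figure for illustration:

# Prospective short circuit current of a battery string: roughly the string
# voltage divided by its total internal resistance (cable resistance ignored).
# The 5 milliohm figure is an assumed value, not a spec from any battery.

string_voltage = 48          # VDC string
internal_resistance = 0.005  # ohms, assumed total for the string

fault_current = string_voltage / internal_resistance
print(f"Prospective fault current: {fault_current:,.0f} A")   # roughly 9,600 A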

Finally, there’s the efficiency issue. Against a conventional double conversion (or “online”) UPS, the DC power plant’s efficiency advantage is not in doubt. The UPS systems that fare better against DC are the type with “eco mode” (the old “standby” UPS). In normal operation, the eco mode UPS powers the load directly from the AC power, so it doesn’t use the power hogging inverter. Of course, this means the data center equipment has to withstand the switch-over to battery, and the efficiency looks just as bad as a double conversion UPS whenever it’s running from battery.

Call or email me or visit the SwiftWater Telecom web site today and we’ll design a green DC power plant for your data center that will be SAFE and EFFICIENT!

Vern
