Tag Archives: data center facilities

Friday data center tidbits: data centers gone wild, Facebook behind the times

First up today is the piece about the federal government “finding” 1000 more data centers than they thought they had. First of all, how does your data center inventory get so bad that you lose track of 1000 data centers? Second, how in the world do you allow the data center sprawl to get so far out of control? That’s a total of 2100 federal data centers, an average of 42 data centers for every single state. Last but not least, who in the world would think it’s a bad idea to consolidate these?

The federal government wins data center bozos of the week; the truckloads of red squeaky noses are on their way.

The next piece is about Facebook saving big by retooling its data center cooling. Really, is it big news that not mixing cold intake air with hot exhaust air is a good idea? If Facebook is pushing this as a “look how great we are” point, they’re about 5 years too late.

Finally, here’s a survey about the causes of data center downtime. Not maintaining the data center backup batteries and overloading the backup power are just plain silly, but the telling one for me is 51% accidental human error, including false activation of the EPO (emergency power off). I’ve said it before, the gains that the National Electrical Code allows the data center in exchange for the EPO are NOT worth this kind of failure. EPO=EVIL, period.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


swiftwater telecom rcs cloud computing logo


Building out the data center the right way.

Tonight I’ve been reading an article about data center building trends. There are some very good points in it and also some things that I think are very wrong. It also explains some things that have mystified me for quite a while.

Looking ahead 5 years for capacity planning isn’t a bad idea (as long as the data center stays flexible enough to accommodate the changes that can happen in 5 years), but the whole decision on whether or not to build out data center infrastructure in advance hinges on the penalty for doing so. In short, there’s no penalty for building out passive infrastructure and a big penalty for building out active infrastructure.

I’ve been mystified by the idea that data center PUE (power usage effectiveness) only gets good when a data center is full. Now I understand: this is based on the idea of a data center building out (and operating) 100% of its cooling infrastructure in advance. If you’re only running 20% of your 5 year forecasted server capacity but you have to run 100% of your 5 year forecasted cooling capacity because it’s a monolithic system that’s either on or off, of course your efficiency is going to stink!

The PUE numbers for that kind of arrangement would be pathetic. Of course, as you add servers with the same amount of cooling power usage, the PUE would gradually get better, but who in the world would really WANT to run something that way? (Reference last year’s story about UPS cutting their data center power by switching off 30 out of 65 HVAC units!)
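To put some purely hypothetical numbers on this (the loads below are illustrative, not measurements from any real facility), here’s a quick sketch of how PUE behaves when cooling is monolithic versus when it scales with the actual IT load:

```python
# PUE = total facility power / IT equipment power.
# All numbers below are made up to illustrate the argument.

def pue(it_load_kw, cooling_kw, other_overhead_kw=50):
    """PUE = total facility power / IT equipment power."""
    total = it_load_kw + cooling_kw + other_overhead_kw
    return total / it_load_kw

full_capacity_kw = 1000        # hypothetical 5-year forecast IT load
monolithic_cooling_kw = 400    # cooling that runs flat out regardless of load

for pct in (20, 50, 100):
    it = full_capacity_kw * pct / 100
    # monolithic cooling: always burning the full 400 kW
    mono = pue(it, monolithic_cooling_kw)
    # modular cooling that tracks the actual IT load instead
    mod = pue(it, monolithic_cooling_kw * pct / 100)
    print(f"{pct:3d}% full: monolithic PUE = {mono:.2f}, modular PUE = {mod:.2f}")
```

At 20% occupancy the monolithic case comes out to a PUE of 3.25 versus 1.65 for cooling that tracks the load; only at 100% full do the two converge, which is exactly why "PUE gets good when the data center is full" holds only for the monolithic design.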

Leaving plenty of extra room for the transformer yard and the generator yard is a great idea (you can’t expand them once you get things built in around them). On the other hand, it would be silly to build out and power up a yard full of transformers that were sitting there doing nothing except chewing up power.

So, what sort of data center infrastructure can safely be built out far ahead of the actual need? The physical building is a bit of a no-brainer, as long as the active sections can be isolated from the idle ones (you don’t need to be cooling an entire hall that’s only 10% full).

Passive electrical is another good one. This means entrances, disconnects, breaker panels, distribution cabling, and transfer switches. No UPS and no DC power plants, unless you’re going to be able to shut them off and leave them off until you really need them.

Passive cooling infrastructure such as ducts. Take a lesson from UPS, do NOT build out double the HVAC you need and run it all!

Finally, build out the support structures for the transformer and generator yards: mounting pads, conduit, and cabling, so the equipment is all set to drop in, hook up, and go.

Don’t kill your data center’s efficiency in return for capacity 5 years from now.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Tuesday data center tidbits: unclear on the concept, micro PUE jumps the shark

First up today is this piece about the top 10 data center annoyances. Now, I can certainly agree with a complaint about bad lighting levels and lousy housekeeping, but crabbing because you’re uncomfortable working in the hot aisle misses the point. The hot aisle works just fine for the servers and that’s why it’s there, not for your comfort, so suck it up.

Data center quote of the week, from this article on “micro PUE”:

“Inventing the term ‘Micro PUE’ seems to me to be an extreme case of riding the PUE bandwagon a little too far …”

More like a LOT too far.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Show and tell: Pictures from the #datacenter

I thought I’d do something a little different today and put up some pictures and info about our green data center and cloud computing infrastructure. Enjoy!

Data center specs:

Power: 48VDC 800A (primary), 130VDC (available), 208Y/120VAC (available)
Power density: 24 kW per cabinet
Cooling: direct free air cooling with conventional backup
Power backup: multiple battery strings with greater than 8 hour run time
Cabinet type: vertical chimney
Lighting: T8 fluorescent with electronic ballasts

An average set of cabinets:

(These are Rittal vertical chimney cabinets.)

The heart of the energy efficient 800A 48VDC power plant:

A 48VDC distribution panel:

(This is a redundant A-B dual feed panel that feeds subpanels in individual cabinets.)

A Peco II 48VDC inverter:

(The inverter allows us to run the small amount of equipment that’s only available with AC power from the same 48VDC power plant.)

Cloud computing servers:

(These are an example of the servers supporting our cloud computing operations. These are 2x AMD Opteron, 1U, 1/2 length case.)

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


The top data center operations mistake.

I was just reading about the top 10 data center operations mistakes and I noticed they completely missed by far the most important one of all.

It’s certainly true that lack of proper procedures, training to properly follow the procedures, and training to make sound decisions when the procedure isn’t up to the situation are all important to smooth data center operations. The military wisdom that no plan of battle ever survives contact with the enemy frequently comes into play. The most beautifully crafted and approved procedures don’t mean a thing when it comes to an unanticipated situation and nobody involved can make a smart decision on their own.

The biggest mistake by far, in my opinion, and the one we’ve seen the most examples of in the last several years of data center failures, is failure to analyze risks sensibly. A large percentage of these outages have been the direct result of high risk, zero gain actions.

One good example of this is the irresistible urge to mess around inside of live electrical equipment. There is little to nothing you can do in a data center that is more dangerous and higher risk than working on exposed high voltage power, both to the health and safety of the person doing the work and to the operation of the data center itself. The result of screwing this up can be spectacularly catastrophic (witness the transformer explosion at The Planet’s data center in 2008, due to a miswired added high voltage circuit).

Given the hazard, you would think that data centers would reserve this kind of work for only the most extreme need; however, many of those failures have been for the most trivial of purposes. One of the bad ones from this year involved a “maintenance” operation to manually check phase rotation in a panel (the correct A-B-C ordering of 3 phase power). Since phase rotation matters only to 3 phase electric motors, it’s a complete non-issue for data center IT equipment and far from justifiable, given the risk.

It comes down to a healthy dose of common sense. If you’re going to take all of your data center generator capacity offline to do something trivial and you only have 15 second run time flywheel UPS systems, that’s probably a BAD choice. If you know you can restore generator capacity in far less time than the run time of your UPS, that makes a lot more sense.
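That go/no-go judgment can be sketched as a simple check (the margin fraction and the times below are assumptions for illustration, not numbers from any standard): only take the generators offline if the worst-case restore time fits well inside the UPS ride-through time.

```python
# Illustrative go/no-go check for taking generator capacity offline.
# The 50% margin is an assumed safety factor, not an industry standard.

def safe_to_take_generators_offline(ups_runtime_s, restore_time_s, margin=0.5):
    """Allow the work only if the worst-case generator restore time fits
    inside the UPS run time with at least `margin` fraction held in reserve."""
    return restore_time_s <= ups_runtime_s * (1 - margin)

# 15-second flywheel UPS vs. a 5-minute worst-case restore: BAD choice.
print(safe_to_take_generators_offline(15, 300))        # False
# 8-hour battery string vs. the same 5-minute restore: reasonable.
print(safe_to_take_generators_offline(8 * 3600, 300))  # True
```

The point isn’t the specific numbers; it’s that the comparison gets made at all, with real worst-case restore times rather than best-case guesses.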

Don’t do risky things for no gain and you’ll avoid the nightmares of the most preventable data center operations mistakes. It’s just that easy.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Extreme weather and the data center.

I’ve been sitting here this evening operating the data center under extreme weather protocols due to wild electrical storms and tornado warnings. I thought I’d take a few minutes and discuss how to protect a data center during extreme weather events.

Whether you subscribe to the idea of global warming or not, it’s apparent that this has already been a bumper year for violent weather. High winds, lightning, heavy rain: none of it is very conducive to keeping the data center up and operating. Obviously, being able to shut down is the best protection (this is where cloud computing really shines, with the capability of moving services out of harm’s way), but what do you do when you can’t just shut it all down?

Here’s my weather protocol for tonight:

1. Identify critical services and the capacity needed to minimally run them. In this case, I was able to substantially reduce data center power load by shutting down redundant services and shutting down cloud computing capacity that wasn’t required to keep the critical services operating. Remember, reduced power load means extended run time on the backup power.

2. Transfer workloads to an alternate data center.

3. Reduce cooling capacity to reflect the lower data center power load (less load, more run time!). Ensure that there is no water or wind infiltration via cooling system intake vents. In my case, I change the free air cooling system to passive intake to avoid blowing in water.

4. Secure all windows and doors against high winds. If an area can’t be reasonably secured, such as an area with large, vulnerable, plate glass windows, secure inner doors to isolate the vulnerable area.

5. Reduce power equipment capacity equivalent to power load reduction. Open breakers or unjack equipment to isolate it from any damage from extreme power events, such as a close lightning hit on the AC commercial power.

6. Make sure that emergency supplies and emergency lighting are all up to par.

7. Know what to grab and take and how to secure the data center in case the situation is bad enough to require abandoning the data center.
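Step 1 is the one that buys you the most: battery run time scales roughly inversely with load, so every watt of non-critical load you shed extends your ride-through. A rough sketch, using entirely assumed numbers (real batteries need Peukert and temperature corrections, so treat this as back-of-envelope only):

```python
# Back-of-envelope battery run-time estimate. Capacity and load figures
# are assumptions for illustration, not our actual plant numbers.

def runtime_hours(battery_capacity_ah, load_amps, derate=0.9):
    """Very rough run-time estimate: usable Ah / load current.
    Ignores Peukert effect, temperature, and end-voltage cutoffs."""
    return battery_capacity_ah * derate / load_amps

capacity_ah = 4000  # assumed total 48VDC battery string capacity

full_load = runtime_hours(capacity_ah, 450)   # normal operating load
shed_load = runtime_hours(capacity_ah, 180)   # after shedding non-critical load
print(f"full load: {full_load:.1f} h, after load shed: {shed_load:.1f} h")
```

Cutting the load to 40% of normal in this sketch takes the run time from 8 hours to 20, which is the whole point of shutting down redundant and idle capacity before the storm hits rather than after the commercial power drops.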

My previous post on dealing with a data center flood applies here as well.

Follow these protocols or use them as a starting point for your own and you’ll find that your data center can make it through almost anything Mother Nature can throw at you intact.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Wednesday data center tidbits: no power backup in the #datacenter?

First up today is about the idea of building a data center with no power backup at all. This is about as boneheaded a thing as I’ve ever seen. Does it REALLY pay to duplicate and run extra infrastructure elsewhere just to save a bit in equipment costs by letting a data center fail? What about the cost of restoring all the downed equipment? Or the damage to equipment from wild power fluctuations that a battery backed system (such as the 48VDC power plant in our data center) would absorb? Squeaky red noses to Yahoo on this one.

Next up is a piece about improving data center airflow. What caught my eye was this, “…flowing air from the CRAC unit, through specialized floor tiles and into the server fans…”. Attempting to tout cooling efficiency with a hopelessly obsolete raised floor is an automatic FAIL.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.