Tag Archives: data center cooling

6 “real” ways to know it’s time to renovate your data center.


I was just reading this piece about 10 ways to tell that your data center is overdue for renovation. Great idea but, unfortunately, that piece was WAY off track, so I’m going to list my 6 ways here.

1. Cooling

You don't need a fancy, expensive airflow study to get an idea that your data center has cooling issues. A simple walk-through will make significant hot or cold spots very obvious, and significant hot or cold spots mean it's time to rework things.

2. Space

If you wait until you can't cram one more piece of gear in, as the article suggests, you're going to be in a heap of trouble. Make sure all idle equipment is removed, and set a reasonable action limit (such as 75% full) so you address the space issue BEFORE you run up against the hard limit (see the sketch at the end of this list).

3. Power

Contrary to the article, reading equipment load information is NOT a sign that your data center needs to be renovated; it's just good practice. Nuisance trips of breakers and the need to reroute power circuits from other areas of the data center are a dead giveaway that the original power plan for the data center needs a serious overhaul.

4. Strategy

Contrary to what the article would have you believe, you can't create an IT strategy by starting from the technology. First, inventory and evaluate the existing data center, identify where it's falling short of business requirements and goals, and then consider the technology to get it back on track. Every step in its proper order.

5. Performance

When it becomes apparent that the existing data center infrastructure is going to fail on any of the first four points as anticipated changes come up, it's time to retire it. Don't wait for the problems to occur and THEN fix them.

6. Organization and documentation

If touching any part of the data center is a major crisis because of overcomplicated systems and/or inaccurate, incomplete, or just plain missing documentation, that's a clear signal to get it revamped and under control before it causes a complete disaster.
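To make that action limit from point 2 concrete, here's a minimal sketch of the check I have in mind; the rack names, occupied U counts, 42U rack height, and 75% threshold are all illustrative assumptions, not data from any real facility.

```python
# Minimal sketch: flag when rack space crosses an action limit BEFORE it's full.
# All of the numbers and rack names below are illustrative assumptions.

RACK_HEIGHT_U = 42      # standard full-height rack (assumed)
ACTION_LIMIT = 0.75     # act at 75% full, not at 100%

# occupied rack units per rack (hypothetical inventory data)
racks = {
    "row1-rack1": 38,
    "row1-rack2": 29,
    "row2-rack1": 31,
}

total_u = RACK_HEIGHT_U * len(racks)
used_u = sum(racks.values())
utilization = used_u / total_u

print(f"Space utilization: {utilization:.0%} ({used_u} of {total_u} U)")
if utilization >= ACTION_LIMIT:
    print("Action limit reached -- start planning the renovation/expansion now.")
```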


Lipstick on a pig: Facebook’s data center refit.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading an article today about Facebook retrofitting a data center and all the great energy efficiency gains. Unfortunately, sometimes the best retrofit method for a data center is dynamite.

Most of the modifications mentioned have to do with airflow. Now, I'll be the first one to cheer for improving and controlling airflow to improve data center efficiency. The problem is, how BAD does your airflow situation have to be that you have to run the cold air temperature at 51 degrees F?! I thought data centers running in the 60s were out of date; 51 is just pathetic. It's obvious that there was certainly room for improvement here, but the improvement effort only got them to 67, and that's still lousy.

The big problem here comes from continued reliance on the obsolete raised-floor-as-a-plenum design. There are certainly plenty of reasons not to use raised flooring in a data center, including unproductive floor loading, expense, fire detection and suppression requirements, under-floor housekeeping, metal whisker contamination, and a whole host of airflow issues. Since the Facebook retrofit is all about the airflow, I'm going to address just the raised floor airflow issues.

If you’re really serious about balancing your data center airflow, using a raised floor as a plenum is the last thing to do. First, under floor obstructions make smooth airflow next to impossible, even if you’re totally conscientious about housekeeping. Second, there’s zip for fine control of where the air is going. Need to add just a small amount of air here? Sorry, you take multiples of full tiles or nothing. Third, pull a tile to work on underfloor facilities and you immediately unbalance the entire system. Pull a dozen tiles to put in a cable run and you now have complete chaos across the whole floor. Finally, make any changes to your equipment and you have to rebalance the whole thing.

These things are so inefficient that it isn't any wonder that a lousy design would need ridiculously cold air to make it work. 67 is certainly an improvement; now they've gotten things up to being only 5-10 years out of date.

When Facebook actually retrofits a data center all the way up to modern standards, I’ll be impressed. This operation is still a pig underneath, no matter how much lipstick you put on it.

Building out the data center the right way.


Tonight I've been reading an article about data center building trends. There are some very good points in it, and also some things that I think are very wrong. It also explains some things that have mystified me for some time.

Looking ahead 5 years for capacity planning isn't a bad idea (as long as the data center is flexible enough to accommodate the changes that can happen in 5 years), but the whole decision on whether to build out data center infrastructure in advance hinges on the penalty for doing so. In short, there's no penalty for building out passive infrastructure and a big penalty for building out active infrastructure.

I've been mystified by the idea that data center PUE (power usage effectiveness) only gets good when a data center is full. Now I understand: this is based on the idea of a data center building out (and operating) 100% of its cooling infrastructure in advance. If you're only running 20% of your 5-year forecasted server capacity but you have to run 100% of your 5-year forecasted cooling capacity because it's a monolithic system that's either on or off, of course your efficiency is going to stink!

The PUE numbers for that kind of arrangement would be pathetic. Of course, as you add servers with the same amount of cooling power usage, the PUE would gradually get better, but who in the world would really WANT to run something that way? (Reference last year’s story about UPS cutting their data center power by switching off 30 out of 65 HVAC units!)
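To put some rough numbers on that, here's a minimal sketch of the arithmetic (PUE is total facility power divided by IT equipment power); all of the kW figures below are illustrative assumptions, not measurements from any real data center.

```python
# Rough sketch of why PUE stinks at partial load when 100% of the cooling
# runs anyway. PUE = total facility power / IT equipment power.
# All figures are illustrative assumptions.

it_design_kw = 1000.0       # 5-year forecasted IT load (assumed)
cooling_full_kw = 400.0     # power to run 100% of the designed cooling (assumed)
other_overhead_kw = 50.0    # lighting, distribution losses, etc. (assumed)

def pue(it_kw, cooling_kw):
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

# Monolithic cooling: all of it runs no matter how little IT load is installed.
for fraction in (0.2, 0.5, 1.0):
    it_kw = it_design_kw * fraction
    print(f"monolithic, {fraction:.0%} loaded: PUE = {pue(it_kw, cooling_full_kw):.2f}")

# Modular cooling: cooling capacity (roughly) tracks the installed IT load.
for fraction in (0.2, 0.5, 1.0):
    it_kw = it_design_kw * fraction
    print(f"modular,    {fraction:.0%} loaded: PUE = {pue(it_kw, cooling_full_kw * fraction):.2f}")
```

With these assumed numbers, the monolithic case comes out over 3.0 at 20% load, while the modular case stays reasonably close to its full-load figure.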

Leaving plenty of extra room for the transformer yard and the generator yard is a great idea (you can’t expand them once you get things built in around them). On the other hand, it would be silly to build out and power up a yard full of transformers that were sitting there doing nothing except chewing up power.

So, what sort of data center infrastructure things can safely be built out far ahead of the actual need? The physical building is a bit of a no-brainer, as long as the active sections can be isolated from the idle (you don’t need to be cooling an entire hall that’s only 10% full).

Passive electrical is another good one. This means entrances, disconnects, breaker panels, distribution cabling, and transfer switches. No UPS and no DC power plants, unless you're going to be able to shut them off and leave them off until you really need them.

Passive cooling infrastructure such as ducts. Take a lesson from UPS: do NOT build out double the HVAC you need and run it all!

Finally, build out the support structures for the transformer and generator yards. Mounting pads, conduit, cabling, so the equipment is all set to drop, hook up, and go.

Don’t kill your data center’s efficiency in return for capacity 5 years from now.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Monday data center tidbits: generator smog and misusing PUE (again)


First up is the piece about Quincy, WA dithering about the environmental impact of data center backup generators. I've got news for someone: if you're running the generators enough to cause a blip in the health of the local area, you probably should be considering a different location, because that means the commercial power is WAY bad.

Next piece is about Digital Realty talking about PUE and efficiency. The comment of note here is that they determined the initial PUE from load bank testing. How in heck could you get a meaningful PUE from load bank testing? Sure, you're consuming power, generating heat, and removing heat, but the airflow in the data center is not even going to be close to the real IT equipment environment. The fact that the resulting number wasn't anywhere near accurate shouldn't have come as any surprise.

Finally, from the piece above, we also get the pearl of wisdom that you only get good PUE from fully loaded data centers. My own data center, currently loaded at 10% (anyone looking for data center capacity?), averages a PUE of 1.2, and with the kickoff of fall we're reusing 100% of the waste heat to heat the rest of the building.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Monday data center tidbits: #cloudcomputing liability, crippling servers to cool


First up today is the piece about cloud computing provider liability limits. The real issue here isn't who accepts responsibility for what; it's paying for the level of liability the customer wants. What I see is a combination of customers bottom feeding for rock bottom prices while expecting the provider to take on a liability commitment that could easily bankrupt it. Low cost plus ultra high expectations sounds like a nightmare.

Next up is a piece about comparing colocation to cloud computing. The key here is the potential customer’s server utilization. Without cloud computing, I think it’s very difficult to achieve high utilization of standalone server hardware.

Last up is the story about scaling back server performance to reduce heat load during cooling failures. Setting aside the issue of a data center that has repeated cooling failures and obviously not enough redundancy, I can't see this being as effective as powering down some amount of capacity.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Tuesday data center tidbits: Microsoft eurekas, #cloudcomputing thought reversal


Wow! I don’t believe it! The uber geniuses at Microsoft have just announced the “non-intuitive” discovery that painting a data center’s roof a color that doesn’t absorb heat from the sun reduces cooling requirements! Sit tight boys, your Nobel Prizes are on the way!

Yesterday, I did a post on adjusting workloads to fit cloud computing rather than just deciding they weren’t appropriate for clouding. Check out this piece for another look at the reversal of thinking that allows almost anything to be clouded efficiently.

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern

Data center DC power, power backup runtime, and free air cooling, a greener shade of green.


I’ve written previously here about the green, energy saving benefits of DC power in the data center, the reliability follies of super short run time power backup, and, of course, the well recognized benefits of free air cooling. In this post, I’m going to discuss making the best green use of all three of these together in the data center.

The "classic" data center power backup system is the double conversion UPS. In this scenario, commercial AC power is rectified to DC for the backup batteries and then inverted back to AC to supply the data center equipment. This configuration actually has three points of efficiency loss: the rectifiers (AC to DC), the inverters (DC to AC), and the load power supplies (AC to DC again). A data center DC power plant does away with two of those three loss points by eliminating the DC to AC inverter and the AC to DC power supply in the load equipment.
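For a feel of what those stacked conversion stages cost, here's a back-of-the-envelope sketch; the per-stage efficiency figures are assumptions for illustration only, so plug in your own equipment's numbers.

```python
# Back-of-the-envelope comparison of conversion losses: double conversion UPS
# vs. a DC power plant feeding DC loads directly.
# The per-stage efficiencies are illustrative assumptions.

RECTIFIER_EFF = 0.95    # AC -> DC
INVERTER_EFF = 0.95     # DC -> AC (UPS output stage)
SERVER_PSU_EFF = 0.92   # AC -> DC inside the load equipment

# Double conversion UPS path: AC -> DC -> AC -> DC
ups_chain = RECTIFIER_EFF * INVERTER_EFF * SERVER_PSU_EFF

# DC plant path: AC -> DC once, loads run straight off the plant voltage
dc_chain = RECTIFIER_EFF

print(f"double conversion chain efficiency: {ups_chain:.1%}")  # ~83%
print(f"DC plant chain efficiency:          {dc_chain:.1%}")   # 95%
```

With those assumptions, the double conversion chain delivers roughly 83% of the input power to the load versus about 95% for the DC plant.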

The second part of this equation is the backup power itself. The trend toward incredibly short run time backup power (such as flywheels with only 15 seconds of run time) is a foolish gamble that fallible generators are going to work perfectly every time. If there's even a small issue that could easily be dealt with given a few minutes, there simply is no time, and the facility is going down hard.

The third part is the free air cooling. It really goes without saying that using cooler outside air for cooling is far more efficient than any type of traditional data center air cooling.

So, how do these three things tie together to make the data center even greener than any one of them separately? Many data centers use load shifting to save on power costs (such as freezing water at night, when power is cheaper, to cool with during the day). My variation on that idea is what I call heat shifting.

My data center is equipped with an 800A 48VDC modular power plant configured N+1, a battery string capable of 8 hours of run time, and free air cooling. The idea is simple: pick the hottest part of the day (usually early afternoon) and remove heat load from the free air cooling by shutting down the rectifiers and running the facility from the battery string for 2 hours.

This shifts that part of the heat load of the data center to times when the free air cooling is operating more efficiently, allowing the free air cooling the elbow room to support more productive equipment load. Additionally, you have the side effect of exercising the batteries regularly, avoiding the ills that can plague idle batteries, such as stratification and sulfation.
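As a rough sketch of what that discharge window actually shifts, assume a 400A actual load on the 48V plant and 95% efficient rectifiers (both numbers are illustrative assumptions, not measurements); the arithmetic looks like this.

```python
# Rough estimate of the heat load shifted off the afternoon peak by running
# the facility from the battery string. The load current, plant voltage, and
# rectifier efficiency are illustrative assumptions.

PLANT_VOLTS = 48.0
LOAD_AMPS = 400.0           # actual DC load on the plant (assumed)
RECTIFIER_EFF = 0.95        # rectifier efficiency (assumed)
HOURS_ON_BATTERY = 2.0      # discharge window at the hottest part of the day

load_kw = PLANT_VOLTS * LOAD_AMPS / 1000.0

# Heat the rectifiers would have dumped into the facility during the window;
# with them shut down, that heat (and the recharge heat) lands in cooler hours.
rectifier_loss_kw = load_kw * (1.0 / RECTIFIER_EFF - 1.0)
shifted_kwh = rectifier_loss_kw * HOURS_ON_BATTERY

print(f"DC load on the plant: {load_kw:.1f} kW")
print(f"Rectifier loss heat avoided at the peak: {rectifier_loss_kw:.2f} kW")
print(f"Heat shifted out of the hottest {HOURS_ON_BATTERY:.0f} hours: {shifted_kwh:.2f} kWh")
```

The IT load heat is still there, of course; what moves is the conversion loss heat, plus the recharge, which now happens when the free air cooling has the most headroom.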

As if there weren’t already enough great reasons to use green DC power, long run backup, and free air cooling in the data center, here’s another one.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern
