Tag Archives: cooling

Monday data center tidbits.


First up on the list today is a story about the University of Illinois building a new supercomputer data center with no power backup. Apparently they consider their power grid so reliable that they don’t need any backup at all. Good luck with that one.

Next up is the story that the EPA has announced its finding that greenhouse gases endanger public health. If you’re not actively moving to get your data center greened, you’d better get your keister in gear. Oh, and if you’re still running a massive water-sucking cooling system, you’d better look for a replacement; the EPA points to serious upcoming water availability problems too.

Vern, SwiftWater Telecom

Aurora RCS virtual cloud computing

Evaluating the green data center.


This morning I was reading about evaluating data centers. What struck me about this article was how out of date some of the criteria being applied are.

The first point is the claim that a data center MUST have raised flooring and that the prospective customer must know how deep it is. If you come to one of my data center facilities, you’ll be disappointed to find that there’s no raised flooring at all. All power and data cabling is carried overhead on cable racks and dropped to the cabinets. From an aesthetic standpoint, the cable racks are not as “neat” as a raised floor, since they expose the cabling. A large installation can also look a bit like a jungle gym (I’ve seen facilities with 6 or more levels of cable rack carrying different types of service). On the other hand, eliminating the raised floor reduces the unproductive load on the floor underneath, eliminates the need for underfloor fire detection and suppression, and makes cabling far easier (think about pulling dozens of tiles).

A simple walk-through can certainly reveal a lot more about cooling problems than giving the third degree about the exact equipment, cooling technique, and capacity. The hot and cold spots from lousy airflow control are all too easy to spot just by walking through them. It doesn’t matter whether the cooling is legacy HVAC or modern green free air cooling; it matters that it’s done right and produces the expected results.

Total power and cooling capacity are irrelevant as long as the data center provider isn’t exceeding them. If there’s a known need for expansion capacity, then that discussion has to be had with the provider about reserving adequate capacity for growth. Available power and cooling capacity can change radically from one day to the next in a busy facility. The important thing is that the provider knows exactly where they stand on capacity. A provider who doesn’t have a handle on their cooling capacity is likely to blow right by it, with predictable catastrophic results.
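
To make that concrete, here’s a minimal sketch of the kind of capacity bookkeeping I’m talking about. The facility figures, per-cabinet loads, and the 10% warning threshold are all invented for illustration, not numbers from any real facility:

```python
# Hypothetical capacity bookkeeping: does the provider know where they stand?
# All figures below are invented examples, not numbers from a real facility.

FACILITY_POWER_KW = 800.0      # total usable critical power
FACILITY_COOLING_KW = 750.0    # total heat rejection capacity

provisioned_loads_kw = [4.2, 3.8, 5.0, 2.5, 6.1]   # per-cabinet committed loads
reserved_expansion_kw = 120.0                      # capacity held back for a customer's growth

committed_kw = sum(provisioned_loads_kw) + reserved_expansion_kw
power_headroom_kw = FACILITY_POWER_KW - committed_kw
cooling_headroom_kw = FACILITY_COOLING_KW - committed_kw   # every watt in is a watt of heat out

print(f"Power headroom:   {power_headroom_kw:.1f} kW")
print(f"Cooling headroom: {cooling_headroom_kw:.1f} kW")

# Assumed 10% threshold -- the point is simply to know the number before you hit it.
if min(power_headroom_kw, cooling_headroom_kw) < 0.10 * FACILITY_POWER_KW:
    print("Warning: under 10% headroom -- stop provisioning and review capacity.")
```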

Insistence on the facility’s generators being internal to the building is questionable. While this increases security, it also places flammable fuel inside the building, which is not the best place for it. In my experience, most large facilities and most conversion facilities will have their generators in a secured outside area. Expectations of seeing a UPS in every cabinet, or even massive whole-facility UPS cabinets, may be disappointed by modern green DC power plants.

These are just a few of my takes on the discussion. It’s important to remember that data center design is taking radical leaps right now with the implementation of energy-saving green techniques. If you go in expecting to see yesterday’s equipment in the data center, you’ll get yesterday’s data center.

Vern, SwiftWater Telecom

server co-location services

The questions good data centers want you to ask.


I’ve been reading tonight about questions data centers don’t want you to ask. I’m going to expand on some of the questions and generalizations made about data center space in that article.

The article makes certain assumptions about mixed-use and conversion buildings as data center housing. If you filter your choice of data centers on those general criteria, you’re liable to lose both ways: missing a great opportunity in a perfectly fine building, or ending up in a substandard conversion building. I’ve seen fairly modern conversion buildings with horrifically low floor ratings and quite old multi-use buildings with floor load ratings approaching 1000 lbs per sq ft (considering that most people are looking for around 250 lbs per sq ft, that’s fantastic).
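
For a sense of why the floor load rating matters, here’s a rough back-of-envelope check. The cabinet weight, footprint, and aisle allowance below are assumptions for illustration, not measurements from any particular facility:

```python
# Back-of-envelope floor load check; every number here is an illustrative assumption.

cabinet_weight_lbs = 2000.0     # a heavily loaded 42U cabinet, equipment plus frame
footprint_sqft = 2.0 * 3.5      # roughly a 24" x 42" cabinet footprint
aisle_share_sqft = 8.0          # share of aisle space attributable to the cabinet

load_on_footprint = cabinet_weight_lbs / footprint_sqft
load_spread_out = cabinet_weight_lbs / (footprint_sqft + aisle_share_sqft)

print(f"Load over the cabinet footprint alone: {load_on_footprint:.0f} lbs per sq ft")
print(f"Load spread over footprint plus aisle: {load_spread_out:.0f} lbs per sq ft")

# Against a 250 lbs per sq ft floor, the footprint-only figure is already over the line;
# a rating near 1000 lbs per sq ft leaves comfortable margin either way.
```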

The second point is not to make assumptions about the quality of the electrical infrastructure based on its age. Relatively modern buildings with a mostly aluminum electrical infrastructure should be avoided, in my opinion. Aging aluminum bus bars are a disaster waiting to happen (just look at the results of the Fisher Plaza data center fire, caused by aluminum bus bars), and unmaintained aluminum cables mean power quality problems from loose connections everywhere. On the other hand, older all-copper systems rarely ever experience the issues that aluminum does, since copper connections simply don’t loosen, nor does copper oxidize the way aluminum does. The conductor material and capacity of the system are far more important than its age.

The idea that conversion buildings are somehow more often in areas prone to flooding baffles me. Over the last few years we’ve seen purpose-built data centers flood (Vodafone in Turkey) as well as multi-use buildings in large cities. Anyone who converts a building in a flood-prone area, sets up shop in a multi-use building in a flood-prone area, or builds a dedicated data center building in a flood-prone area is exhibiting exactly the same questionable judgment. No one type of building is more prone to flooding than another.

Locating in any sort of area with a high crime problem is another issue of questionable judgment. On the positive side, data center buildings located in industrial areas typically have access to huge amounts of power, heavy transportation facilities (such as rail), and no zoning issues that would limit most data center activities.

So, how do you pick a good facility? Pick one with solid, well-maintained, up-to-code electrical infrastructure (and ditch the known disaster magnet that is aluminum). Pick one with generous floor load capacity. Pick one that can adapt to meet your power and cooling requirements. Pick one that isn’t on a flood plain.

And forget prejudging based on multi-use, conversion, or dedicated. Any of the three could meet your needs (or not, as the case may be). There’s no shortcut, but you’ll be happier in the end.

Vern, SwiftWater Telecom

data center facilities engineering

Tuesday data center tidbits.


This morning I was reading about IBM working on chip-level water cooling, where the water actually flows through channels in the silicon. I can just see trying to diagnose leaks INSIDE the chip body (“Hello, Roto Rooter?”).

I just happened across a poll asking whether anyone had had failures in their data center from corrosion and whether they were using a corrosive gas monitoring system. If you have enough corrosive gas in your data center to cause equipment failures, you’d better be sending your techs in wearing hazmat suits.

Vern, SwiftWater Telecom

data center facility engineering

Oversimplifying the data center ….


Today I happened to wander across this article on simplifying for a happier data center. It’s a nice sentiment, but I think it misses what simplifying really means.

Once you get past the new-age happy babble, the gist of the article is to simplify by refusing “features”. While it’s sometimes true that badly implemented features lead to headaches and buggy results, rejecting new features that could make life easier in the name of simplicity is a non-starter.

So, what is the secret to simplifying the data center? Simple: identify which items are most likely to be a headache and eliminate them or use alternatives. A highly complex system that never ever gives a problem is not the problem. What do I classify under this heading? Any mechanical infrastructure (chillers, legacy HVAC, generators, etc.), and any electrical infrastructure that has the potential to fail catastrophically, or where a catastrophic failure can turn the loss of a single redundant system into a single point of failure for the entire facility (consider the recent rash of transformer explosions and fires in data centers).

So, how do we simplify these things? Try augmenting legacy cooling with free air cooling (or moving to free air cooling completely). This will dramatically reduce the chances of a service-impacting cooling failure, as well as reducing maintenance requirements to virtually nil. For generators, consider alternative green power sources or long-runtime static backup systems. My rule of thumb is that any time you introduce an internal combustion engine into the picture, reliability goes down and headaches go up (just ask IBM and Air New Zealand).
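
As a rough illustration of what “long-runtime static backup” means in practice, here’s a simple runtime estimate. The string capacity, plant voltage, usable fraction, and load are assumed figures; a real plant gets sized from the manufacturer’s discharge data:

```python
# Rough runtime estimate for a static (battery) backup plant.
# All figures are assumptions for illustration -- a real plant is sized from
# manufacturer discharge tables, and capacity drops at higher discharge rates.

battery_capacity_ah = 2000.0    # total battery string capacity
plant_voltage = 48.0            # typical telecom-style DC plant voltage
usable_fraction = 0.8           # don't plan on draining the strings flat
critical_load_kw = 15.0         # DC load the plant has to carry

usable_kwh = battery_capacity_ah * plant_voltage * usable_fraction / 1000.0
runtime_hours = usable_kwh / critical_load_kw

print(f"Usable stored energy: {usable_kwh:.1f} kWh")
print(f"Estimated runtime at {critical_load_kw:.0f} kW: {runtime_hours:.1f} hours")
```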

As I’ve written in previous posts, keep oil-filled transformers outside and away from the facility. If you’re not operating your own local generation, they’re hard to avoid, so know in advance that they ARE going to be a headache sooner or later and act to minimize it.

Of course, from the internal data center standpoint, good network design that doesn’t add a lot of superfluous equipment, plus the most efficient use of servers possible, never hurts either. Most of this is just good engineering for maximum reliability of the data center systems.

There you go, data center simplicity in a nutshell, and you didn’t even have to climb to a mountaintop guru to get it!

Vern, SwiftWater Telecom

data center facility engineering

It’s all in the (data center) location …


I was reading Site Selection for a Data Center and thought I’d offer a list of my own, tailored for green considerations. I’m going to assume that the site will be a renovated facility and not new construction (it’s a lot easier to apply green techniques when you can design them in from the ground up).

First, infrastructure. What type of power is available? Will the existing power supply the data center equipment without requiring power-wasting transformers? Is there adequate power available, or can it be expanded without heroic measures? Is there easy access to outside air for free air cooling? Is the building oriented to take advantage of prevailing winds to move air for free air cooling? Can power equipment be located close to the data center equipment (eliminating power-wasting long runs of cable)?

Second, the building itself. How much outside window space is there? Are the windows low-E type? Low-E windows allow plenty of natural light while blocking heat that would otherwise need to be removed by the data center cooling system. If the windows aren’t low-E, less window space and high-efficiency lighting are better than having to remove the heat.
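
To put a rough number on that, here’s a quick solar gain comparison. The window area, peak irradiance, and solar heat gain coefficients are generic assumed values, just to show the scale of the difference:

```python
# Rough solar heat gain comparison for window glazing (assumed figures throughout).

window_area_sqft = 400.0
peak_solar_w_per_sqft = 70.0    # very rough peak solar irradiance on the glass
shgc_ordinary = 0.7             # solar heat gain coefficient, ordinary glazing
shgc_low_e = 0.3                # solar heat gain coefficient, low-E glazing

gain_ordinary_kw = window_area_sqft * peak_solar_w_per_sqft * shgc_ordinary / 1000.0
gain_low_e_kw = window_area_sqft * peak_solar_w_per_sqft * shgc_low_e / 1000.0

print(f"Peak solar gain, ordinary glass: {gain_ordinary_kw:.1f} kW of heat to remove")
print(f"Peak solar gain, low-E glass:    {gain_low_e_kw:.1f} kW of heat to remove")
# Every kW of solar gain is another kW the cooling system has to reject.
```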

What is the building roof material? What color is it? Black tar or rubber roofs will absorb large amounts of heat and transfer it inside where it has to be removed. Is the roof equipped with structures that enhance free air cooling, such as a clerestory monitor? Is the roof able to support green structures, such as a solar array (generate green power and shield your black roof from solar heating!)? Does the building layout lend itself to using data center waste heat to warm non-data center areas during cold weather?

Finally, there is the outside environment. Is the building surrounded by greenspace? Greenspace helps keep the outside air clean, reducing dust intrusion, as well as keeping the outside environment cooler, helping free air cooling to work as efficiently as possible. Is the building in an area where pollution from other sources may intrude? As an example, one of my former locations was 100 feet away from a busy stop light. Every year, large amounts of soot from idling diesel trucks would have to be washed off the building.

This isn’t an exhaustive list of site considerations of course, but I believe these things are all important to consider to pick the greenest data center site possible!

Vern, SwiftWater Telecom

data center, web hosting, Internet engineering

Greening the data center: Out with the old …


This evening I’ve been reading a blog article about The Planet running tower cases in their data centers. I can’t see for the life of me how this makes sense.

It’s certainly true that tower case setups offer flexibility. There’s space for pretty much whatever add-in cards you could want and plenty of room for a ridiculous number of drives. And there’s no doubt that there’s more room for airflow inside, so it’s easier to get the heat out of them with lousy cooling.

On the other hand, with 1TB drives common and inexpensive, is there really a need for a dozen drive bays? Especially since the trend is well away from massive amounts of server-attached storage in the data center? Not to mention the amount of power poorly utilized drives waste. Not very green at all.

Since even the most compact 1U server configurations can be had with almost everything desirable in the way of ports, controllers, and video, is there really a need for large numbers of expansion slots anymore? It seems to me that most of the upgrades that would go into such a system would be absolutely useless in a server (who needs a gamer video card in a co-located server?).

What is a fair point is that there’s a massive difference in the space consumed by towers versus rackmount cases. The Planet seems to think that’s good, since low density means less heat and less power. Unfortunately, it also means less revenue for the facility. Operating inefficiently because we don’t want to bother with a good cooling design is a lousy tradeoff.
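
Here’s a rough comparison of what that density difference looks like; the per-position floor area and server counts are assumptions for the sake of illustration:

```python
# Space comparison: tower cases vs. 1U rackmount servers (assumed figures).

floor_sqft_per_position = 16.0   # cabinet or shelf footprint plus its aisle share

towers_per_position = 8          # tower cases stacked on open shelving
rackmount_per_position = 38      # a 42U cabinet of 1U servers, leaving room for switches

tower_density = towers_per_position / floor_sqft_per_position
rackmount_density = rackmount_per_position / floor_sqft_per_position

print(f"Towers:    {tower_density:.2f} servers per sq ft of floor")
print(f"Rackmount: {rackmount_density:.2f} servers per sq ft of floor")
print(f"Rackmount fits about {rackmount_per_position / towers_per_position:.1f}x the servers "
      "in the same floor space, and earns that much more revenue from it.")
```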

The biggest nail in the coffin for towers in the data center is this: how do you control cooling airflow? With towers on open racks, it would be virtually impossible to separate the hot exhaust air from the cold intake air. It’s the nightmare of anyone who cares in the slightest about greening data centers.

There was some economic justification for doing this 10-15 years ago. It’s 2009; time to relegate long-obsolete data center design to ancient history.

Vern, SwiftWater Telecom

data center, web hosting, Internet engineering