Tag Archives: Uptime Institute

The green data center: no trade-off required for reliability.


Today I was reading about the “trade off between reliability and efficiency” in the data center. The idea that you have to give up one to get the other is far from the truth.

Part of what drives this misconception is obsolete classification systems, such as the Uptime Institute’s tiers (I’ve written before about my problems with that particular classification system). In the example given in the article, the data center operator in question had to maintain a 1:1 hot standby server for every operating server to achieve that particular tier rating, as if reliability couldn’t be achieved by anything less than an exact duplicate of every piece of gear in the data center. Of course, the 2N approach ignores an ugly possibility: what if you waste all that money and energy running 1:1 hot standbys, the primary fails, and then the secondary immediately fails too?

Of course, the Uptime Institute’s response to this is to announce yet ANOTHER data center efficiency metric.

This also spotlights the weakness of using PUE as anything but an internal engineering aid. It sounds great to advertise a 1.8 PUE but, since PUE makes no reference to the amount of work being accomplished, you can post that number while wasting half the energy consumed on equipment producing no useful work. The cost of operating this way, plus the likely upcoming carbon penalties, will melt your credit card.
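To see why, work the arithmetic: PUE is just total facility power divided by IT equipment power, so a rack full of idle 2N hot standbys counts as perfectly good “IT load.” Here’s a quick back-of-the-envelope illustration (the numbers are hypothetical, picked only to show the math):

```python
# Back-of-the-envelope PUE illustration (hypothetical numbers).
# PUE = total facility power / IT equipment power -- it says nothing
# about how much useful work that IT power is actually producing.

def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

it_load_kw = 100.0     # total IT equipment draw
overhead_kw = 80.0     # cooling, power conversion, lighting, etc.

# Scenario 1: half the IT load is idle 2N hot standbys (useful fraction 0.5).
# Scenario 2: the same IT load, all of it doing useful work (useful fraction 1.0).
for useful_fraction in (0.5, 1.0):
    p = pue(it_load_kw + overhead_kw, it_load_kw)
    useful_kw = it_load_kw * useful_fraction
    print(f"PUE = {p:.1f}, useful IT work = {useful_kw:.0f} kW, "
          f"{(it_load_kw + overhead_kw) / useful_kw:.1f} kW consumed per useful kW")

# Both scenarios report the same flattering 1.8 PUE, but the 2N case
# burns twice the energy for every unit of useful work.
```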

So, how do you combine green data center techniques with high reliability? Here’s my recipe for it:

Add a green modular DC power plant with enough modules to provide n+1 (or n+2 or n+3) capacity. Split the AC power feeds for the modules between 2 or more AC sources.

Add 2 parallel battery strings.

Add in 1 cloud computing cluster, such as the excellent Xen Cloud Platform we use. Provision enough cloud hosts for n+1.

Feed each cloud host from multiple DC power feeds.

Add in redundant storage servers.

Add in a load balancing system capable of automatically restarting virtual machines if a cloud host fails.

Add in good site design practices.

(Note: this is exactly the way we run.)

The fact is that nothing short of a smoking hole disaster is likely to interrupt the service provided by this configuration for longer than the time required to automatically restart a virtual machine. If the time to restart is an issue, specify a backup virtual machine on a different host in the cloud. Protect against a possible data-center-wide disaster with a duplicate configuration on a cloud in a second data center.
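If you’re wondering what the automatic restart piece looks like, here’s a rough sketch of the idea using the Xen Cloud Platform’s XenAPI Python bindings. It’s illustrative only, not a drop-in production tool: the pool master address, credentials, and the “autorestart” tag are placeholders, and in practice XCP’s built-in HA or a full load balancing layer handles the failure detection.

```python
# Illustrative sketch: restart tagged VMs that are down (e.g. because the
# cloud host they lived on failed) somewhere else in an XCP pool.
# The pool address, credentials, and "autorestart" tag are placeholders.
import XenAPI

session = XenAPI.Session("https://pool-master.example.net")
session.xenapi.login_with_password("root", "password")

try:
    for vm_ref, vm in session.xenapi.VM.get_all_records().items():
        # Skip templates and the control domain on each host.
        if vm["is_a_template"] or vm["is_control_domain"]:
            continue
        # Only touch VMs explicitly tagged for automatic restart.
        if "autorestart" not in vm["tags"]:
            continue
        if vm["power_state"] == "Halted":
            # Let xapi pick a surviving host with capacity.
            session.xenapi.VM.start(vm_ref, False, False)
            print("restarted", vm["name_label"])
finally:
    session.xenapi.session.logout()
```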

It’s really not that hard to achieve high reliability with green data center techniques (even if Microsoft, Amazon, Google, and Rackspace make it look like it is). Deep-six the antiquated thinking and your wallet and the planet will thank you for it.

High reliability services available!

Vern


Greening the data center: low-hanging fruit.


I’ve just been perusing an article on data center energy savings today. In the real world, some of the difficulty levels of its suggestions need to be re-evaluated.

If your equipment isn’t already set up in a hot aisle/cold aisle configuration, this is an important thing to do. Unfortunately, it can’t qualify as easy. First, this type of change is probably going to require reversing every other row of equipment to establish a back-to-back, front-to-front configuration. That means downtime for equipment and moving heavy cabinets around, or unloading, unbolting, and repositioning open racks. Also factor in recabling and possible repowering. Not a throw-and-go project at all.

Second, the cooling system is almost certainly going to have to be rearranged to take proper advantage of the new configuration. Raised floor tile moves and/or ductwork changes are the order of business. Once again, not something that should be undertaken lightly.

Third, figure on needing to install some sort of hot aisle containment to really get the benefit of the change. Flexible curtains are fairly easy to implement; rigid containment structures are going to take much more time.

On the idea of raising the chiller and air temperatures, any failure brought on by higher temperatures is not going to be immediate or even short term (unless the temperature is raised dramatically). You’ll never get anywhere if you wait long enough to see whether a one degree change shortens the life span of your servers. Stay within the ASHRAE recommended temperatures and skip babying the equipment through one degree increments.

So what are the easy things to do to help your data center get greener? Replace T12 fluorescent bulbs with T8 ones, operate the data center at current ASHRAE recommended temperatures (or higher if you determine it’s appropriate for the equipment), power down zombie servers and networking equipment that are no longer being used, and consolidate services to better utilize only the servers really needed.
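For a rough sense of scale on the lighting swap (nominal 4-foot lamp wattages only, ignoring ballast losses, and assuming the lights run around the clock):

```python
# Rough lighting-swap savings estimate. Nominal 4-foot lamp wattages,
# ballast losses ignored, lights assumed on 24x7 -- adjust for your site.
T12_WATTS = 40
T8_WATTS = 32
HOURS_PER_YEAR = 24 * 365

def annual_kwh_saved(lamp_count):
    return lamp_count * (T12_WATTS - T8_WATTS) * HOURS_PER_YEAR / 1000.0

print(annual_kwh_saved(1))    # ~70 kWh per lamp per year
print(annual_kwh_saved(50))   # ~3500 kWh per year for a 50-lamp floor
```

Small numbers per lamp, but they add up across a lit floor, and it’s a change you can make without touching a single server.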

Oh, and forget concentrating on PUE. The article said it all: “Just remember: while upgrading to more energy-efficient units and applying virtualization techniques will cut your overall energy consumption, it will not reduce your PUE and perhaps will even raise it.” Totally useless as a goal in itself. Arrange servers and workloads efficiently, use cooling efficiently and size it right for the heat load of the servers, and the efficiency of the data center will fall into place.

There now, that wasn’t so hard, was it?

Vern, SwiftWater Telecom

data center facility engineering

Greening the data center: Dry your tiers …


Today’s commentary comes from reading “Digital Realty, Uptime Debate Tier System.” I believe the tier designation system is not only irrelevant now but is actually holding us back.

I don’t believe there has ever been anything as misused as the Uptime Institute’s tier designation system for data centers. What started as an easy shorthand for comparing data center reliability has been routinely stretched totally out of shape as a tool of the sales and marketing wars, with the sole aim of being able to claim a higher tier than the competition.

One of the major problems with this is: how do you classify a data center that implements most of the characteristics of one tier but also implements features of the next tier above? This is where you find data centers classifying themselves as things like “Tier 2.5”. That looks better, marketing-wise, than plain “Tier 2”, but what does it really mean? Who the heck knows!

By far the worst problem is that the tier definitions no longer fit many of the technologies and design philosophies of today. Design to the tier definitions and you’ll get an obsolete data center right from the start. It’s pretty much a sure bet that a data center built with current, state-of-the-art technologies and techniques would far outperform such a dinosaur, despite looking bad by the tier definitions. Things just aren’t apples to apples anymore.

So, what is the answer to this? There is no shortcut to evaluating and comparing data center reliability. With the explosion of green techniques and technologies, there are far too many variables to prune things down to a tiny set of outdated and overly broad characteristics. There simply is no substitute for researching and considering whether a data center meets the customer’s actual needs and how it will meet those needs in real life compared to the alternatives.

It’s time to relegate the tier system to history and move ahead.

Vern, SwiftWater Telecom
data center, web hosting, Internet engineering