Tag Archives: data center efficiency

Finally, proof positive that PUE is garbage.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading a piece about Microsoft removing fans from their data center servers and how that had a negative effect on their PUE numbers. I’ve written on this blog before about the problems with PUE; now we have proof that it needs to be put out of its misery.

In a nutshell, PUE is the ratio of the total power consumed by the data center to the power consumed by its IT equipment. A PUE of 1.0 would indicate a data center where all the power is being consumed by the IT equipment. A PUE greater than 1.0 indicates a data center where a certain amount of power is being consumed by something other than IT equipment, the biggest chunk of which is usually cooling.

The problem I’ve written about before with PUE is its failure to take into account the actual work being accomplished by the IT equipment in the data center. Throw in a pile of extra servers simply turned on and idling, not doing anything useful, and you’ve just gamed your PUE into looking better.
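To make that concrete, here’s a minimal sketch with made-up numbers showing how padding the IT load with idle servers drags PUE toward 1.0 without accomplishing one bit more work:

```python
# Hypothetical figures for illustration only, not from any real facility.

def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Baseline: 500 kW of IT gear, 500 kW of cooling and other overhead.
it_kw = 500.0
overhead_kw = 500.0
print(pue(it_kw + overhead_kw, it_kw))            # 1000/500 = 2.00

# "Game" the metric: rack up 50 kW of idle servers doing no useful work.
# They only add about 20 kW of extra cooling load, so the ratio improves
# even though the facility now burns 70 kW more for the same output.
idle_it_kw = 50.0
extra_cooling_kw = 20.0
print(pue(it_kw + overhead_kw + idle_it_kw + extra_cooling_kw,
          it_kw + idle_it_kw))                    # 1070/550 ≈ 1.95, "better"
```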

The problem shown here is even more damning. Microsoft determined that data center energy consumption could be reduced by removing the individual cooling fans from its servers and increasing the size of the data center cooling system. Since the increase in power for the data center cooling systems is less than the power required for the individual server fans, the data center accomplishes the same amount of work for less total energy consumption, an efficiency win in anyone’s book.

The side effect of this is that, even though the total energy consumption for the data center is reduced, transferring the energy usage from the fans (part of the IT equipment number) to the cooling (part of the non-IT equipment number) makes the PUE for the data center look WORSE.
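Here’s the same arithmetic applied to the fan scenario, again with invented before-and-after numbers: total energy drops, the work stays the same, and PUE still comes out looking worse:

```python
# Hypothetical before/after figures for a fan-removal retrofit.

def pue(total_kw, it_kw):
    return total_kw / it_kw

# Before: the individual server fans count as IT load.
it_before = 500.0        # includes 25 kW of server fans
cooling_before = 350.0
print(pue(it_before + cooling_before, it_before))   # 850/500 = 1.70

# After: fans removed, central cooling grows by only 15 kW.
it_after = 475.0         # fan load gone from the IT number
cooling_after = 365.0
print(pue(it_after + cooling_after, it_after))      # 840/475 ≈ 1.77

# Total facility draw fell from 850 kW to 840 kW for the same work,
# yet the PUE "efficiency" metric got worse.
```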

Gaming the metric simply made it inaccurate, which was bad enough. Any efficiency metric that shows a net gain in data center efficiency (same amount of work accomplished for less energy consumed) as a NEGATIVE is hopelessly broken. This also has the side effect of making a mockery of the EPA’s Energy Star for data centers, since that award is based directly on the data center’s PUE.

Misleading, inaccurate, and now totally wrong, this sucker needs to go where all the other bad ideas go to die.


A green networking idea for cloud computing and data center.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

After commenting on the piece questioning whether cloud computing really saves energy, I came up with an idea that might simplify the whole issue of the network’s energy cost. Yup, it’s another metric! 🙂

The issue here is, how do you account for the energy cost of transferring data across the Internet to a cloud computing provider? After all, consider the massive number of networking devices involved in the Internet, and the fact that Internet routing protocols are geared towards bandwidth efficiency rather than energy efficiency.

So, why not create an energy-based routing protocol? For the sake of discussion, I’ll call this OGPF (Open Greenest Path First). Unlike OSPF (Open Shortest Path First), which looks for the shortest way from point A to point B using weighted “length” metrics, OGPF could use an energy metric for every device in the path, possibly including how the device is powered.
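As a rough sketch of the idea, an OGPF-style selection could be a plain Dijkstra computation run over per-hop energy metrics instead of link lengths. The topology, the joules-per-megabit figures, and the discount for renewably powered devices below are all invented for illustration:

```python
import heapq

# Hypothetical per-hop energy cost in joules per megabit forwarded,
# discounted for devices running on renewable power.
def energy_metric(joules_per_mbit, renewable=False):
    return joules_per_mbit * (0.5 if renewable else 1.0)

# graph[node] = list of (neighbor, energy cost) pairs -- an invented topology.
graph = {
    "A": [("B", energy_metric(8.0)), ("C", energy_metric(5.0, renewable=True))],
    "B": [("D", energy_metric(4.0))],
    "C": [("D", energy_metric(6.0)), ("B", energy_metric(2.0, renewable=True))],
    "D": [],
}

def greenest_path(graph, src, dst):
    """Dijkstra's algorithm over energy costs instead of link lengths."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, hop_cost in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue, (cost + hop_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Picks A -> C -> B -> D: more hops than the "shortest" path, but less energy.
print(greenest_path(graph, "A", "D"))
```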

Now you have a network that’s automatically as green as possible, and you have a way to at least get a measurable idea of what’s going on across a network-wide link.

6 “real” ways to know it’s time to renovate your data center.


I was just reading this piece about 10 ways to tell that your data center is overdue for renovation. Great idea, but unfortunately that piece was WAY off track, so I’m going to list my 6 ways here.

1. Cooling

You don’t need a fancy, expensive airflow study to get an idea that your data center has cooling issues. A simple walk-through will make significant hot or cold spots very obvious, and significant hot or cold spots mean it’s time to rework things.

2. Space

If you wait until you can’t cram one more piece of gear in, as the article suggests, you’re going to be in a heap of trouble. Make sure all idle equipment is removed and set a reasonable action limit (such as 75% full) to address the space issue BEFORE you run up against the limit (a trivial utilization check like the sketch after this list will do).

3. Power

Contrary to the article, reading equipment load information is NOT a sign that your data center needs to be renovated; it’s just good practice. Nuisance trips of breakers and the need to reroute power circuits from other areas of the data center are a dead giveaway that the original power plan for the data center needs a serious overhaul.

4. Strategy

Contrary to what the article would have you believe, you can’t build an IT strategy by starting with the technology. First, inventory and evaluate the existing data center, identify where it’s falling short of meeting business requirements and goals, and then consider the technology to get it back on track. Every step in its proper order.

5. Performance

When it becomes apparent that the existing data center infrastructure is going to fail on any of the first four points as anticipated changes come up, it’s time to retire it. Don’t wait for the problems to occur and THEN fix them.

6. Organization and documentation

If touching any part of the data center is a major crisis because of overly complicated systems and/or inaccurate, incomplete, or just plain missing documentation, it’s a clear signal to get it revamped and under control before it causes a complete disaster.
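Here’s the trivial capacity check mentioned in point 2; the rack inventory and the 42U rack size are made up for illustration.

```python
# Hypothetical rack inventory: rack name -> (used U, total U).
racks = {
    "row1-rack1": (38, 42),
    "row1-rack2": (30, 42),
    "row2-rack1": (33, 42),
}

ACTION_LIMIT = 0.75  # start planning at 75% full, per point 2 above

used = sum(u for u, _ in racks.values())
total = sum(t for _, t in racks.values())
utilization = used / total

print(f"Space utilization: {utilization:.0%}")
if utilization >= ACTION_LIMIT:
    print("Over the action limit -- start the expansion/renovation plan now.")
```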

Thursday data center tidbits: gold plated SSD, more crazy data center metrics


Vern Burke, SwiftWater Telecom
Biddeford, Maine

First up today is the piece about LSI releasing a 300 GB SSD for almost $12,000. The speed is certainly impressive, but is it really worth the price premium between a sub-$100 traditional hard drive and a $12,000 SSD of the same capacity?

The next piece is about the new data center metrics from The Green Grid, CUE (carbon usage effectiveness) and WUE (water usage effectiveness). Instead of taking the opportunity to create some truly useful and accurate metrics, they’ve created two that suffer from exactly the same faults as PUE (power usage effectiveness), namely that they do not take into account the actual work being produced and are ridiculously easy to game. I’ll give The Green Grid credit for swinging, but these are the second and third whiffs in a row.
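As I understand The Green Grid’s definitions, CUE divides the carbon emissions attributable to the data center’s total energy by IT equipment energy, and WUE divides annual site water usage by IT equipment energy. A quick sketch with invented numbers shows why they inherit PUE’s blind spot: all three divide by IT energy, and none of the numerators or denominators says anything about useful work.

```python
# Invented annual figures for illustration only.
it_energy_kwh = 4_000_000       # IT equipment energy
total_energy_kwh = 7_000_000    # whole facility energy
co2_kg = 3_500_000              # emissions attributable to facility energy
water_liters = 9_000_000        # annual site water usage

pue = total_energy_kwh / it_energy_kwh   # dimensionless, 1.75 here
cue = co2_kg / it_energy_kwh             # kg CO2 per IT kWh
wue = water_liters / it_energy_kwh       # liters per IT kWh

# Same trick works on all three: add idle IT load and every
# "effectiveness" number improves while no additional work gets done.
print(pue, cue, wue)
```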

Finally, we come to more “what if” cloud computing security hysteria. From the article comes the quote “What if the service you’re using merges with another company or goes bankrupt?”. Um, exactly the same thing happens if your co-location provider merges or goes bankrupt. This is NOT a cloud computing specific issue. The “users are uneasy about it” line is NOT valid evidence that there’s anything wrong at all with cloud computing security.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Lipstick on a pig: Facebook’s data center refit.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading an article today about Facebook retrofitting a data center and all the great energy efficiency gains. Unfortunately, sometimes the best retrofit method for a data center is dynamite.

Most of the modifications mentioned have to do with airflow. Now, I’ll be the first one to cheer for improving and controlling airflow to boost data center efficiency. The problem is, how BAD does your airflow situation have to be to have to run the cold air temperature at 51 degrees F?! I thought data centers running in the 60s were out of date; 51 is just pathetic. It’s obvious that there was certainly room for improvement here, but the improvement effort only got them to 67 degrees, and that’s still lousy.

The big problem here comes from continued reliance on the obsolete raised-floor-as-a-plenum design. There are plenty more reasons not to use raised flooring in a data center, including unproductive floor loading, expense, fire detection and suppression requirements, under-floor housekeeping, metal whisker contamination, and a whole host of airflow issues. Since the Facebook retrofit is all about the airflow, I’m going to address just the raised floor airflow issues.

If you’re really serious about balancing your data center airflow, using a raised floor as a plenum is the last thing to do. First, under floor obstructions make smooth airflow next to impossible, even if you’re totally conscientious about housekeeping. Second, there’s zip for fine control of where the air is going. Need to add just a small amount of air here? Sorry, you take multiples of full tiles or nothing. Third, pull a tile to work on underfloor facilities and you immediately unbalance the entire system. Pull a dozen tiles to put in a cable run and you now have complete chaos across the whole floor. Finally, make any changes to your equipment and you have to rebalance the whole thing.

These things are so inefficient that it isn’t any wonder a lousy design would need ridiculously cold air to make it work. 67 is certainly an improvement; now they’ve gotten things up to being only 5-10 years out of date.

When Facebook actually retrofits a data center all the way up to modern standards, I’ll be impressed. This operation is still a pig underneath, no matter how much lipstick you put on it.

Monday data center tidbits: solar powering the data center, cleaning up after yourself virtually


Vern Burke, SwiftWater Telecom
Biddeford, Maine

First up today is a piece about solar power efficiency and whether it is practical to run a data center from 100% solar (it’s worth noting that all of the “100% solar” data centers I’ve seen are very tiny). It’s going to take a long time before photovoltaic solar power is efficient enough to run a large load like a data center, but any load you can shed is a benefit to the bottom line, not to mention shading the data center roof to cut down on cooling costs.
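For a sense of scale, here’s a back-of-envelope sketch; every figure in it (the 1 MW IT load, the PUE, the panel output, the capacity factor) is an assumption for illustration, not data from any real site.

```python
# Rough, assumed figures -- adjust for your own site and climate.
it_load_kw = 1000.0          # a modest 1 MW IT load
pue = 1.5                    # assumed facility overhead
facility_load_kw = it_load_kw * pue

panel_peak_w_per_m2 = 200.0  # ~20% efficient panels at ~1000 W/m2 peak sun
capacity_factor = 0.18       # average output vs. peak, varies by location

# Array area needed just to match the *average* load, ignoring the
# storage you'd need to ride through nights and cloudy days.
area_m2 = facility_load_kw * 1000 / (panel_peak_w_per_m2 * capacity_factor)
print(f"{area_m2:,.0f} m2 (~{area_m2 / 4047:,.1f} acres) of panels")
```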

Next up is the piece about virtual machine ghosts in the data center. There’s no substitute for paying attention and keeping control of what you’re doing; this isn’t rocket science. These people would have to do exactly the same thing with physical servers, so why is it such an awful strain to do it with virtual ones?

Email or call me or visit the SwiftWater Telecom web site for green data center services today.


Friday data center tidbits: data centers gone wild, Facebook behind the times


First up today is the piece about the federal government “finding” 1000 more data centers than they thought they had. First of all, how does your data center inventory get so bad that you lose track of 1000 data centers? Second, how in the world do you allow the data center sprawl to get so far out of control? That’s a total of 2100 federal data centers, an average of 42 data centers for every single state. Last but not least, who in the world would think it’s a bad idea to consolidate these?

The federal government: data center bozos of the week. The truckloads of red squeaky noses are on their way.

The next piece is about Facebook saving big by retooling its data center cooling. Really, is it big news that not mixing cold intake air with hot exhaust air is a good idea? If Facebook is pushing this as a “look how great we are” point, they’re about 5 years too late.

Finally, here’s a survey about the causes of data center downtime. Not maintaining the data center backup batteries and overloading the backup power are just plain silly, but the telling one for me is 51% accidental human error, including false activation of the EPO (emergency power off). I’ve said it before: the gains that the National Electrical Code allows the data center in exchange for the EPO are NOT worth this kind of failure. EPO = EVIL, period.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern
