Category Archives: data center general

Finally, proof positive that PUE is garbage.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading a piece about Microsoft removing the fans from their data center servers and how that has had a negative effect on their PUE numbers. I’ve written on this blog before about the problems with PUE; now we have proof that it needs to be put out of its misery.

In a nutshell, PUE is the ratio of the total power consumed by the data center to the power consumed by its IT equipment. A PUE of 1.0 would indicate a data center where all the power is being consumed by the IT equipment. A PUE greater than 1.0 indicates a data center where a certain amount of power is being consumed by something other than IT equipment, the biggest chunk of which is cooling.

The problem I’ve written about before with PUE is its failure to take into account the actual work being accomplished by the IT equipment in the data center. Throw in a pile of extra servers that are simply turned on and idling, not doing anything useful, and you’ve gamed your PUE into looking better.

The problem shown here is even more damning. Microsoft determined that data center energy consumption could be reduced by removing the individual cooling fans from its servers and increasing the size of the data center cooling system. Since the increase in power for the data center cooling systems is less than the power required for the individual server fans, the data center accomplishes the same amount of work for less total energy consumption, an efficiency win in anyone’s book.

The side effect of this is that, even though the total energy consumption for the data center is reduced, transferring the energy usage from the fans (part of the IT equipment number) to the cooling (part of the non-IT equipment number) makes the PUE for the data center look WORSE.
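To make the arithmetic concrete, here’s a minimal Python sketch using made-up example figures (the wattages are hypothetical, not Microsoft’s actual numbers) showing how shifting fan load from the IT side to the facility cooling side can cut total energy while making PUE worse:

def pue(it_power_kw, total_facility_kw):
    # PUE = total facility power / IT equipment power
    return total_facility_kw / it_power_kw

# Hypothetical example figures, not Microsoft's actual numbers.
# Before: the servers include their own fans (counted as IT load).
it_before = 1000.0        # kW of servers, internal fans included
cooling_before = 400.0    # kW of facility cooling
total_before = it_before + cooling_before

# After: fans removed from the servers (IT load drops by 50 kW) and the
# facility cooling plant grows, but by less than the fan power saved.
it_after = 950.0          # kW of servers without internal fans
cooling_after = 430.0     # kW of larger facility cooling plant
total_after = it_after + cooling_after

print(total_before, pue(it_before, total_before))   # 1400.0 kW, PUE 1.40
print(total_after, pue(it_after, total_after))      # 1380.0 kW, PUE ~1.45
# Total energy drops (1400 -> 1380 kW) yet PUE gets worse (1.40 -> ~1.45),
# exactly the perverse result described above.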

Gaming the metric simply made it inaccurate, which was bad enough. Any efficiency metric that shows a net gain in data center efficiency (same amount of work accomplished for less energy consumed) as a NEGATIVE is hopelessly broken. This also has the side effect of making a mockery of the EPA’s Energy Star for data centers, since that award is based directly on the data center’s PUE.

Misleading, inaccurate, and now totally wrong, this sucker needs to go where all the other bad ideas go to die.

SwiftWater Telecom Green Eco Cabinet Filler Panels, insulated, lightweight, inexpensive

How to improve data center efficiency with Green Eco Filler Panels.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

Are you killing your data center’s efficiency by leaving empty spaces in your cabinets open? Allowing the cooling air for your servers to mix with the hot exhaust air wastes energy, pollutes the environment, and hurts your pocketbook!

The SwiftWater Telecom Green Eco Filler Panel eliminates the shortcomings of traditional filler panels. Now there’s no reason to leave your data center server cabinets leaking air and leaking money!

Lightweight: Traditional metal filler panels are heavy. Green Eco Filler Panels are feather light. Less wasted weight in the server cabinet, less weight (and cost!) to ship, easier to handle.

Easy to install: Traditional filler panels require 4 screws or more for every filler panel. Green Eco Filler Panels only require 2 screws for each upper, lower, or splice bracket. Fillers up to 22U only require 6 screws!

Easy to use: Traditional filler panels require unscrewing to remove them. Green Eco Filler Panels can be easily snapped in and out of the brackets in seconds with no tools required at all!

Superior air seal: Traditional filler panels require a gap between them to allow for adjustment, leaving air leaks. Green Eco Filler Panels allow for adjustment in the rack without sacrificing the air seal!

Flexible sizing: Traditional filler panels are commonly available only in 1U increments. Many pieces of rackmount equipment use 1/2U sizes, leaving gaps the traditional panels can’t fill. Green Eco Filler Panels are available in 1/2U sizes!

Insulation: Traditional filler panels have no insulating value at all. Green Eco Filler Panels will actually insulate the cold air from the hot!

Lower cost: Traditional filler panels are expensive to buy and ship. Green Eco Filler Panels are inexpensive and their light weight makes shipping a breeze!

It’s time to toss out those leaky, inefficient traditional filler panels and help your bottom line by filling those empty holes in your server cabinets with Green Eco Filler Panels!

Email me for more information, call 207-399-7108, or visit the Green Eco Filler Panels page to order!

Gartner and cloud computing, take 2.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I just got through reading a post by Lydia Leong about the “impurity” of cloud computing, defending the Gartner “Magic Quadrant” for “cloud computing IAAS and web hosting”. Where in the world do I start on this?

It’s certainly true that the same customers that want cloud computing services may want classic data center services as well, such as co-location or dedicated physical servers. It’s also probably true that providers that offer a broader range of both classic data center services and cloud computing services may be stronger as a business because of the flexibility offered by having a broader portfolio of services available.

The problem is, what we have here is an attempt to mix a wild collection of things together as “one market”. How can you lump a provider’s suitability to host a web site together with its suitability to take on the outsourcing of an entire enterprise data center? I was wrong in my post yesterday about the MQ: it’s not clueless, it’s schizophrenic from attempting to combine too many incompatible requirements as “one market”. This gives you the odd result of penalizing the 900 pound gorilla of the cloud computing market.

The other problem is that the disparate elements that appear to be blended into this mess simply aren’t reflected in the title. Outsourcing an entire enterprise data center isn’t covered by the title, and requiring dedicated private servers for non-web-hosting purposes isn’t covered either. These may be things that customers who want cloud computing or web hosting services also want, but they can’t all be welded together into one big Frankenstein monster under that title. Putting a correct title on this that reflects what it really contains won’t cure the schizophrenia, but it would be more honest.

This thing is a mess, no matter how you slice it.

Gartner: clueless in the cloud.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading about the uproar over the Gartner “Magic Quadrant” for cloud computing infrastructure. I think they need some remedial education before they pass themselves off as cloud computing pundits.

Defining “cloud computing” precisely is something that people have been squabbling over for quite some time now. This is because most of the people squabbling can’t separate the core characteristics that truly define cloud computing from all the little details that define its many flavors.

Even though people tend to disagree on what cloud computing really is, it’s pretty clear what it is NOT. It isn’t “classic” data center services. This is what had me shaking my head over Gartner declaring weakness for Amazon because it doesn’t offer co-location or dedicated private physical servers.

My own operation, SwiftWater Telecom, started as a classic data center provider and now offers both classic data center services such as co-location AND cloud computing services; the combination gives us more flexibility to meet customers’ needs. On the other hand, the classic side of the coin isn’t a weakness or a strength of the cloud computing side. It just means I have a wider range of tools to satisfy more customer needs.

After previously having gone around with Lydia Leong of Gartner about a harebrained suggestion to chop public clouds into private ones (“Cloud computing: another ridiculous proposal.”), I can only conclude that they have just enough of a handle on cloud computing to be dangerous.

Trying to mix a requirement for traditional data center offerings into the equation when the subject is supposed to be “cloud computing infrastructure” is the most clueless thing I’ve seen in quite a while.

What you REALLY should be asking your cloud computing provider.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve been reading quite a bit of back and forth over cloud computing provider Service Level Agreements and endless checklists that purport to be everything a customer should ask a potential cloud computing provider. There’s a lot of aimless noise and sniping back and forth that is missing a very critical point.

Let me start off by saying that, as an approach to ensuring service uptime, classic percentage-based SLAs are worthless. The SLA tells you nothing about whether you can expect your cloud computing provider to keep your service running or not, only the penalty if they fail. Your goal as a cloud computing customer isn’t to extract the maximum compensation for the failure of your cloud service, it’s to ensure that your cloud service doesn’t fail in the first place!

Failures of data center hardware are an inevitable fact of life, even with the most conscientious engineering and operation. The data center failures of the past year have shown that even the big data centers fall well short of the “most conscientious engineering and operation” goal. Given this fact of life, here are the things you should REALLY be asking your cloud computing provider.

1. Do they have the ability to automatically restart workloads on the remaining running physical cloud capacity if part of the capacity fails?

This is something that has stood out like a sore thumb with a lot of the big cloud providers. Failures of underlying physical hardware on Amazon’s AWS service kill the workloads that were running on that hardware, even though the vast majority of capacity is still up and running in the cloud. If your cloud provider can’t automatically restart your workload in case of failure, run away.

In the SwiftWater Telecom cloud (using the Xen Cloud Control System) failed workloads are restarted automatically on remaining cloud capacity in minutes, not hours.
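Purely as a generic sketch (this is not the actual XCCS logic, and list_hosts, is_alive, assigned_vms, pick_host_with_capacity, and start_vm are hypothetical helpers), the idea amounts to a monitoring loop along these lines:

import time

def ha_restart_loop(cloud, poll_seconds=30):
    # Watch the physical hosts; when one dies, restart its workloads on
    # whatever capacity is still running. All helper calls are hypothetical.
    while True:
        for host in cloud.list_hosts():
            if host.is_alive():
                continue
            # Host is down: reschedule every VM that was assigned to it.
            for vm in host.assigned_vms():
                target = cloud.pick_host_with_capacity(vm, exclude=[host])
                if target is None:
                    cloud.alert("No spare capacity to restart " + vm.name)
                    continue
                cloud.start_vm(vm, on=target)
        time.sleep(poll_seconds)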

2. Do they offer a way to ensure that redundant virtual servers never end up on the same physical server?

It doesn’t do much good to have redundant virtual servers in the cloud if they all die when one single physical host dies.

In the SwiftWater Telecom cloud, we offer what we call the Ultra Server. The Ultra Server combines redundant virtual servers with the “unfriend” feature of XCCS. Unfriending ensures that the redundant servers never end up on the same physical server.

This means that nothing but a “smoking hole disaster” would ever cause a complete outage of an Ultra Server. Combine that with the automatic virtual server restoration of XCCS and the option to set up the Ultra Server between our separate data centers, and you have a cloud powered virtual server that is the next best thing to indestructible.
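Purely as an illustration of the concept (this is not the XCCS “unfriend” implementation; the host and VM structures here are hypothetical), an anti-affinity placement check can be sketched like this:

def place_vm(vm, hosts, unfriends):
    # Pick a host for vm, skipping any host already running a VM that vm
    # has been "unfriended" from. Illustrative sketch only.
    conflicting = unfriends.get(vm.name, set())
    for host in sorted(hosts, key=lambda h: h.free_capacity, reverse=True):
        if any(running.name in conflicting for running in host.vms):
            continue  # its redundant partner already lives here, keep them apart
        if host.free_capacity >= vm.size:
            return host
    raise RuntimeError("No host satisfies anti-affinity for " + vm.name)

# Example: web-a and web-b are a redundant pair that must never share a host:
# unfriends = {"web-a": {"web-b"}, "web-b": {"web-a"}}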

Stop concentrating on the penalties for failure; ask the right questions to find the cloud computing provider who has the right tools to keep your service from failing in the first place.

Jackass: data center edition


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I was just reading a piece about data center temperature levels, cooling, and the ASHRAE recommendations. It’s not the ASHRAE recommendations that are the problem here.

First of all, is anyone really still running servers that can’t handle the ASHRAE recommended maximum inlet temperature of 80 degrees for 20 minutes without failing the hardware? My oldest machines in service are older Rackable Systems 2x dual-core Opteron 1U boxes with 2004 BIOS dates on the Tyan motherboards. These servers are not only perfectly happy at the ASHRAE maximum recommendation, they will run rock solid for months at inlet temperatures in the high 90s in free air cooling configurations.

The next thing is, a 30 degree inlet-to-outlet temperature differential is “good”? Really? My old Rackables with their highly congested 1U cases produce a 13 degree differential between inlet and outlet. If you have a server gaining 30 degrees at the outlet, you have a server with a severe problem with its internal airflow. Of course, restricting airflow will make the server fans work harder, driving up the amount of power they require.
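A quick sanity check on that differential: the inlet-to-outlet rise follows directly from a server’s power draw and airflow (for air near sea level, the rise in degrees F is roughly 3.16 times watts divided by CFM). Here’s a small Python sketch with hypothetical numbers showing that a 30 degree rise implies badly choked airflow rather than a cooling shortfall:

def outlet_rise_f(watts, cfm):
    # Approximate inlet-to-outlet temperature rise for air cooling, using the
    # standard sensible-heat relation for air near sea level:
    # delta_T (F) ~= 3.16 * watts / CFM
    return 3.16 * watts / cfm

# Hypothetical 1U server drawing 400 W:
print(outlet_rise_f(400, 100))  # ~12.6 F rise with healthy airflow (~100 CFM)
print(outlet_rise_f(400, 42))   # ~30 F rise only if airflow is choked to ~42 CFM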

So, the original poster pulled a truly jackass stunt:

1. He “accepted” and commissioned a new data center without properly testing the systems.

2. He ran live servers in a data center with no power backup for the cooling systems, only the servers themselves.

3. He allowed servers to run to failure in a completely defective environment. Of course, we’re never told what the actual inlet temperature was on the servers when they failed; it could have been far higher than the ASHRAE recommended maximum.

The problem here isn’t the inlet temperature recommendation, it’s a defective data center design combined with defective operations (did anyone do a maintenance plan before running that fire alarm test?).

I guess if you can’t figure out that backup power for the servers, without backup power for the cooling, isn’t adequate to run anything, then you’ll have to spend lots of money running your data center temperature low enough to buy yourself time to fix a totally preventable goof.

As for the rest of us, we’ll avoid designing and operating in jackass mode.

A green networking idea for cloud computing and data center.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

After commenting on the piece questioning whether cloud computing really saves energy, I came up with an idea that might simplify the whole energy cost of the network issue. Yup, it’s another metric! 🙂

The issue here is, how do you account for the energy cost of transferring data across the Internet to a cloud computing provider? After all, consider the massive number of networking devices that make up the Internet, and the fact that Internet routing protocols are geared towards bandwidth efficiency rather than energy efficiency.

So, why not create an energy based routing protocol? For the sake of discussion, I’ll call this OGPF (Open Greenest Path First). Unlike OSPF (Open Shortest Path First) which looks for the shortest way from point A to point B using weighted “length” metrics, OGPF could use an energy metric for every device in the path, possibly including how the device is powered.
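To make the idea concrete, here’s a minimal Python sketch of how route selection might work under such a protocol: plain Dijkstra shortest-path, but weighted by a per-device energy cost instead of a length metric (the topology and energy figures are made up for illustration):

import heapq

def greenest_path(graph, src, dst):
    # Dijkstra over per-hop energy costs instead of link "length".
    # graph: {node: [(neighbor, energy_cost), ...]}, costs in hypothetical units.
    best = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return cost, list(reversed(path))
        if cost > best.get(node, float("inf")):
            continue
        for neighbor, energy in graph[node]:
            new_cost = cost + energy
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    return float("inf"), []

# Made-up topology: the path through B is fewer hops but burns more energy.
net = {
    "A": [("B", 5.0), ("C", 2.0)],
    "B": [("Z", 5.0)],
    "C": [("D", 2.0)],
    "D": [("Z", 2.0)],
    "Z": [],
}
print(greenest_path(net, "A", "Z"))  # (6.0, ['A', 'C', 'D', 'Z'])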

Now you have a network that’s automatically as green as possible and you have a way to at least get a measurable idea of what’s going on across a network wide link.

Will cloud computing actually save energy? Overcomplicating it.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I just read a piece discussing the debate over whether cloud computing actually saves energy or not. This piece shows that you can get silly about trying to quantify the unquantifiable.

The argument is really very simple here. The idea is that the energy cost of the networking equipment required to move data back and forth to a cloud computing provider offsets the energy gain from an end customer dumping an inefficient local data center in favor of cloud computing. If the original claim of cloud computing energy savings is oversimplified, so is the argument against it.

First, any claim about the energy cost of the network would require a detailed analysis of how much energy is required to move a bit of data from point A to point B. Good luck trying to account for every single piece of power consuming electronic equipment in any path through the Internet. Good luck trying to account for every single piece of power consuming electronic equipment in every POSSIBLE path through the Internet that the data could take. Remember, you not only have to account for the networking equipment but also the communications carrier’s facilities that provide the link.

Second, it’s easier to make this claim when you have a local data center that’s only supporting local users. Add in remote users or add in a public facing website and now you’re generating cross Internet traffic from nearly anywhere. Take the first issue and multiply it thousands of times.

In reality, the energy cost of moving data is pretty low compared to the 95% under-utilization rate commonly found in small local data centers. Wasting that much energy leaves a LOT of room to absorb the energy needed to ship bits across the network.
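As a back-of-the-envelope illustration (every figure here is a hypothetical assumption, not a measurement), the waste from a mostly idle local server dwarfs any plausible per-gigabyte network transfer energy:

# All numbers below are made-up illustrative assumptions, not measurements.
server_watts = 300.0          # small local server, mostly idle
utilization = 0.05            # i.e. 95% under-utilized
hours_per_year = 8760

wasted_kwh = server_watts * (1 - utilization) * hours_per_year / 1000.0

network_kwh_per_gb = 0.05     # assumed energy to push one GB across the Internet
traffic_gb_per_year = 2000.0  # assumed workload traffic

network_kwh = network_kwh_per_gb * traffic_gb_per_year

print(wasted_kwh)   # ~2,497 kWh/year of idle-server waste
print(network_kwh)  # ~100 kWh/year for the network transfer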

Add to this the massive economies of scale involved in cloud computing, and I’d be hard pressed to see a situation where the energy cost of the network outweighed the energy savings of cloud computing.

Cloud computing: another ridiculous proposal.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I was just reading a piece about what keeps companies off of public cloud computing services. How much do you have to bastardize a cloud before it isn’t cloud computing anymore?

This issue is the same old cloud computing security red herring. People will share network bandwidth and storage space just fine, but there’s just something sinister about sharing the physical compute power. What you end up with is a lot of vague “well, it might happen!” with no specifics and no examples of where it has happened, despite massive use of public cloud computing services for some time now.

So what’s the fix for this? If someone orders a single virtual machine, dedicate an entire physical server to them so they can be all warm and fuzzy that nobody is close enough to do something malicious to them through those unspecified hypervisor security holes.

Does anyone else see anything wrong with this idea? Sure, you can probably make a cloud automatically carve out a physical chunk of itself for one customer. Unfortunately, when you do that, it’s not “cloud computing” anymore. At the very least, you lose most of the characteristics that make it “cloud” and turn it into a standalone virtualization server or, heaven forbid, effectively an old fashioned standalone dedicated server. It just simply isn’t cloud anymore.

Do this and now you lose the utilization and energy efficiency that make cloud computing a much more cost effective proposition. Is the customer with one virtual machine going to be willing to pay the cost of a dedicated server for the privilege of segregating a physical cloud host for their private use or are they going to expect cloud pricing because you’re still calling it “cloud”? If they’re paying dedicated price for a single virtual machine and their own private physical server, why not just do a separate dedicated server?

The one advantage this idea has is automation. It would certainly be easier to dedicate and undedicate a physical server in a cloud environment. On the other hand, this would likely be an operations nightmare, trying to keep enough excess cloud servers available to accommodate losing an entire physical server every time someone provisions a single VM. Of course, keeping that much excess capacity running also kills the heck out of your efficiency and, by extension, your costs.

Nice try but this idea is a swing and a whiff.