Jackass: data center edition


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I was just reading a piece about data center temperature levels, cooling, and the ASHRAE recommendations. It’s not the ASHRAE recommendations that are the problem here.

First of all, is anyone really still running servers that can’t handle the ASHRAE recommended maximum inlet temperature of 80 degrees for 20 minutes without failing the hardware? My oldest machines in service are Rackable Systems 2x dual core Opteron 1U boxes with 2004 BIOS dates on the Tyan motherboards. These servers are not only perfectly happy at the ASHRAE maximum recommendation, they will run rock solid for months at inlet temperatures in the high 90s in free air cooling configurations.

The next thing is, a 30 degree inlet to outlet temperature differential is “good”? Really? My old Rackables, with their highly congested 1U cases, produce a 13 degree differential between inlet and outlet. If you have a server gaining 30 degrees at the outlet, you have a server with a severe internal airflow problem. Of course, restricted airflow makes the server fans work harder, driving up the amount of power they draw.
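For anyone who wants to keep an eye on this, a trivial check of the inlet-to-outlet differential is all it takes. Here’s a minimal sketch; the server names, sensor readings, and the 30 degree flag threshold are all just example values.

```python
# Minimal sketch: flag servers whose inlet-to-outlet temperature differential
# suggests an internal airflow problem. All readings here are made-up examples.
AIRFLOW_FLAG_DELTA_F = 30.0  # differential (degrees F) worth investigating

readings = {
    # server: (inlet temp F, outlet temp F)
    "rackable-01": (78.0, 91.0),   # ~13 degree rise, healthy airflow
    "mystery-1u":  (75.0, 107.0),  # ~32 degree rise, something is blocked
}

for server, (inlet, outlet) in readings.items():
    delta = outlet - inlet
    if delta >= AIRFLOW_FLAG_DELTA_F:
        print(f"{server}: {delta:.0f}F rise - check for blocked or failed airflow")
    else:
        print(f"{server}: {delta:.0f}F rise - normal")
```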

So, the original poster pulled a truly jackass stunt:

1. He “accepted” and commissioned a new data center without properly testing the systems.

2. He ran live servers in a data center with backup power for the servers only and none for the cooling systems.

3. He allowed servers to run to failure in a completely defective environment. Of course, we’re never told what the actual inlet temperature was on the servers when they failed; it could have been far higher than the ASHRAE recommended maximum.

The problem here isn’t the inlet temperature recommendation; it’s a defective data center design combined with defective operations (did anyone do a maintenance plan before running that fire alarm test?).

I guess if you can’t figure out that backup power for the servers alone isn’t adequate without backup power for the cooling, then you get to spend lots of money running your data center temperature low enough to buy time to fix a totally preventable goof.

As for the rest of us, we’ll avoid designing and operating in jackass mode.

A green networking idea for cloud computing and the data center.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

After commenting on the piece questioning whether cloud computing really saves energy, I came up with an idea that might simplify the whole network energy cost issue. Yup, it’s another metric! 🙂

The issue here is, how do you account for the energy cost of transferring data across the Internet to a cloud computing provider? After all, consider the massive number of networking devices involved in the Internet and the fact that Internet routing protocols are geared toward bandwidth efficiency rather than energy efficiency.

So, why not create an energy based routing protocol? For the sake of discussion, I’ll call this OGPF (Open Greenest Path First). Unlike OSPF (Open Shortest Path First), which looks for the shortest way from point A to point B using weighted “length” metrics, OGPF could use an energy metric for every device in the path, possibly including how the device is powered.

Now you have a network that’s automatically as green as possible, and you have a way to at least get a measurable idea of what’s going on across a network-wide link.
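To make the idea concrete, here’s a minimal sketch of what an OGPF-style path selection could look like. It’s just ordinary shortest-path routing with the link weight swapped for an assumed energy metric; the device names, the joules-per-megabit figures, and the renewable-power discount are all invented for illustration.

```python
# Sketch of the hypothetical "OGPF" idea: plain Dijkstra, but the edge weight
# is an assumed energy metric (joules per megabit through each hop) instead of
# hop count or link length. All numbers below are made up for illustration.
import heapq

# graph[node] = list of (neighbor, joules_per_megabit, renewable_powered)
graph = {
    "A": [("B", 0.5, False), ("C", 1.0, True)],
    "B": [("D", 0.6, False)],
    "C": [("D", 0.5, True)],
    "D": [],
}

RENEWABLE_DISCOUNT = 0.5  # assumed: count renewably powered hops at half cost

def ogpf_path(src, dst):
    """Return (total energy cost, path) for the 'greenest' route."""
    queue = [(0.0, src, [src])]
    best = {src: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        for nbr, joules, renewable in graph[node]:
            step = joules * (RENEWABLE_DISCOUNT if renewable else 1.0)
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return float("inf"), []

print(ogpf_path("A", "D"))  # -> (0.75, ['A', 'C', 'D']) with these numbers
```

Under these made-up numbers, the path that’s cheapest in raw joules (A-B-D) loses to the path through the renewably powered devices (A-C-D), which is exactly the kind of trade an energy metric is supposed to make.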

Will cloud computing actually save energy? Overcomplicating it.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I just read a piece discussing the debate over whether cloud computing actually saves energy. This piece shows just how silly you can get trying to quantify the unquantifiable.

The argument is really very simple here. The idea is that the energy cost of the networking equipment required to move data back and forth to a cloud computing provider offsets the energy gain from an end customer dumping an inefficient local data center in favor of cloud computing. If the original claim of cloud computing energy savings is oversimplified, so is the argument against it.

First, any claim about the energy cost of the network would require a detailed analysis of how much energy is required to move a bit of data from point A to point B. Good luck trying to account for every single piece of power consuming electronic equipment in any path through the Internet. Good luck trying to account for every single piece of power consuming electronic equipment in every POSSIBLE path through the Internet that the data could take. Remember, you not only have to account for the networking equipment but also the communications carrier’s facilities that provide the link.

Second, it’s easier to make this claim when you have a local data center that’s only supporting local users. Add in remote users or add in a public facing website and now you’re generating cross Internet traffic from nearly anywhere. Take the first issue and multiply it thousands of times.

In reality, the energy cost of moving data is pretty low compared to the 95% underutilization rate commonly found in small local data centers. Wasting that much energy leaves a LOT of room to absorb the energy needed to ship bits across the network.
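A quick back-of-envelope comparison shows the shape of the argument. Every number in the sketch below is an assumption picked for illustration (the server count, the power draw, and especially the per-gigabyte network figure, for which published estimates vary widely), not a measurement.

```python
# Back-of-envelope sketch of the argument above. Every number is assumed.
servers = 20                  # small local data center
watts_per_server = 300.0      # assumed average draw per server
utilization = 0.05            # the ~95% underutilization figure above
hours_per_month = 730

# Energy the local servers burn vs. energy doing useful work.
# Crude: treats wasted energy as proportional to the idle fraction.
total_kwh = servers * watts_per_server * hours_per_month / 1000.0
wasted_kwh = total_kwh * (1.0 - utilization)

# Assumed network transfer cost, in kWh per GB moved end to end
kwh_per_gb = 0.06             # hypothetical figure, not a measurement
monthly_transfer_gb = 2000.0
network_kwh = kwh_per_gb * monthly_transfer_gb

print(f"wasted locally: {wasted_kwh:,.0f} kWh/month")
print(f"network cost:   {network_kwh:,.0f} kWh/month")
```

Even with generous assumptions on the network side, the energy burned by mostly idle local servers comes out more than an order of magnitude larger under these numbers.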

Add to this the massive economies of scale involved in cloud computing and I’d be hard pressed to see a situation where the energy cost of the network outweighed the energy saving of cloud computing.

Cloud computing: another ridiculous proposal.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I was just reading a piece about what keeps companies off of public cloud computing services. How much do you have to bastardize a cloud before it isn’t cloud computing anymore?

This issue is the same old cloud computing security red herring. People will share network bandwidth and storage space just fine, but there’s just something sinister about sharing the physical compute power. What you end up with is a lot of vague “well, it might happen!” with no specifics and no examples of where it has actually happened, despite massive use of public cloud computing services for some time now.

So what’s the fix for this? If someone orders a single virtual machine, dedicate an entire physical server to them so they can be all warm and fuzzy that nobody is close enough to do something malicious to them through those unspecified hypervisor security holes.

Does anyone else see anything wrong with this idea? Sure, you can probably make a cloud automatically carve out a physical chunk of itself for one customer. Unfortunately, when you do that, it’s not “cloud computing” anymore. At the very least, you lose most of the characteristics that make it “cloud” and turn it into a standalone virtualization server or, heaven forbid, effectively an old-fashioned standalone dedicated server.

Do this and now you lose the utilization and energy efficiency that make cloud computing a much more cost effective proposition. Is the customer with one virtual machine going to be willing to pay the cost of a dedicated server for the privilege of segregating a physical cloud host for their private use, or are they going to expect cloud pricing because you’re still calling it “cloud”? If they’re paying a dedicated-server price for a single virtual machine on their own private physical server, why not just do a separate dedicated server?

The sole advantage this idea has is automation. It would certainly be easier to dedicate and undedicate a physical server in a cloud environment. On the other hand, this would likely be an operations nightmare, trying to keep enough excess cloud servers available to accommodate losing an entire physical server every time a single VM is provisioned. Of course, keeping that much excess capacity running also kills the heck out of your efficiency and, by extension, your costs.

Nice try but this idea is a swing and a whiff.

Thursday data center tidbits: arguing against virtualization, NASA data goes free range


Vern Burke, SwiftWater Telecom
Biddeford, ME

First up today is a question I ran across asking if there are any circumstances that argue against server virtualization in the data center. The only thing I can think of is a requirement for specialized hardware. If your applications require specialized hardware, keep them on their own server; otherwise, get virtualizing!

The next piece is about NASA discovering that they’ve been selling excess PCs without properly cleaning sensitive data off them. And everyone is in a panic over cloud computing being a tremendous security threat? The biggest security threat is leaving your data in the care of people who have no focus on the security of it.

Data center doof of the week goes to Tumblr for managing to kill a database cluster AND kill their network for a day for the sake of scheduled maintenance. Squeaky red noses are en route!

The net neutrality red herring: Anti net neutrality, Comcast and Level 3


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I’ve been watching the debate over the Comcast vs Level 3 peering dispute. I’m not a big fan of Comcast by any means, but any attempt to paint this as a net neutrality fight is ridiculous.

The net neutrality issue is really very simple. Large ISPs thought it was a good idea to charge content providers for priority paths through their network to reach their subscribers. Pay the “toll” and your traffic speeds right along smoothly and your users love you. Don’t pay the “toll” and you take your chances with a cow path instead of a nice smooth 10 lane superhighway. This kind of extortion is bad in so many ways that I can’t begin to count them.

Peering, on the other hand, is simply an agreement between two providers that they will exchange traffic between them directly, rather than going through the public Internet backbone. This can save both providers money by removing traffic from their expensive Internet backbone links, and it speeds up the traffic flow between the two dramatically. No charge peering works only if the peering arrangement is of equal benefit to both sides. This means that the traffic flows roughly evenly in both directions.

My understanding of this issue is that Comcast and Level 3 have had a mutual peering arrangement for some time now. Now Level 3 is demanding special peering access deep into Comcast’s network for the sole purpose of speeding their traffic to the end user. This is an arrangement that solely benefits Level 3 but they insist on Comcast paying the entire cost.

It’s quite clear that this is, in no way, a net neutrality issue. If anything, this is an ANTI net neutrality issue. Instead of the ISP wanting the provider to pay for premium access through the ISP’s network, the provider is demanding that the ISP give them free premium access deep into the ISP’s network and claiming that they are somehow entitled to it.

Level 3 needs to man up and either come up with an arrangement that benefits everyone involved or pay the fair cost of the special access that benefits nobody but them. Level 3 can throw this at the net neutrality wall all they like, but it doesn’t stick.

6 “real” ways to know it’s time to renovate your data center.


I was just reading this piece about 10 ways to tell that your data center is overdue for renovation. Great idea but, unfortunately, that piece was WAY off track, so I’m going to list my 6 ways here.

1. Cooling

You don’t need a fancy, expensive airflow study to tell that your data center has cooling issues. A simple walk-through will make significant hot or cold spots very obvious, and significant hot or cold spots mean it’s time to rework things.

2. Space

If you wait until you can’t cram one more piece of gear in, as the article suggests, you’re going to be in a heap of trouble. Make sure all idle equipment is removed and set a reasonable action limit (such as 75% full) to address the space issue BEFORE you run up against the limit (a minimal sketch of this kind of check, covering power as well, follows the list).

3. Power

Contrary to the article, reading equipment load information is NOT a sign that your data center needs to be renovated; it’s just good practice. Nuisance breaker trips and the need to reroute power circuits from other areas of the data center are a dead giveaway that the original power plan for the data center needs a serious overhaul.

4. Strategy

You can’t create an IT strategy by starting with the technology, as the article would have you believe. First, inventory and evaluate the existing data center, identify where it’s falling short of meeting business requirements and goals, and then consider the technology to get it back on track. Every step in its proper order.

5. Performance

When it becomes apparent that the existing data center infrastructure is going to fail on any of the first four points with anticipated changes coming up, it’s time to retire it. Don’t let the problems occur and THEN fix them.

6. Organization and documentation

If touching any part of the data center is a major crisis because of overcomplicated systems and/or inaccurate, incomplete, or just plain missing documentation, it’s a clear signal to get it revamped and under control before it causes a complete disaster.
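To put numbers 2 and 3 into practice, a check like the following is enough to catch the problem before it catches you. This is only a sketch; the rack inventory, circuit readings, and power threshold are made-up examples (the 75% space figure is the one suggested above).

```python
# Minimal sketch of the action-limit checks described in items 2 and 3.
# Inventory numbers and the power threshold are illustrative assumptions.
SPACE_ACTION_LIMIT = 0.75      # act before the racks are actually full
POWER_ACTION_LIMIT = 0.80      # assumed headroom target on each circuit

racks = {
    # rack: (rack units in use, total rack units)
    "row1-rack1": (34, 42),
    "row1-rack2": (40, 42),
}

circuits = {
    # circuit: (measured load in amps, breaker rating in amps)
    "PDU-A-12": (13.0, 20.0),
    "PDU-B-07": (17.5, 20.0),
}

for rack, (used, total) in racks.items():
    if used / total >= SPACE_ACTION_LIMIT:
        print(f"{rack}: {used}/{total}U used - plan expansion or pull idle gear")

for circuit, (load, rating) in circuits.items():
    if load / rating >= POWER_ACTION_LIMIT:
        print(f"{circuit}: {load}A of {rating}A - rebalance before nuisance trips start")
```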