Tag Archives: Intel

Wednesday data center tidbits.


First up this morning is the news about HP’s new 20 foot data center container. Let’s do a little math on this. This container is $600,000 for 500U of space. Just space, no IT equipment, no cooling, no power gear, just a steel box with racks. That’s $1,200 per 1U. I’m pretty sure I can rehab enough conventional floor space for 500U of equipment for FAR less than $1,200 per 1U. Overpriced doesn’t begin to describe it.
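To put a number on it, here’s the back-of-the-envelope math. The $600,000 and 500U figures come from the announcement; the rehab cost per rack below is purely a hypothetical for comparison, not a quote.

# Back-of-the-envelope cost per rack unit for the container.
container_price = 600_000        # USD, quoted price for the 20 ft container
container_capacity_u = 500       # rack units of space included

cost_per_u = container_price / container_capacity_u
print(f"Container: ${cost_per_u:,.0f} per 1U")            # $1,200 per 1U

# Hypothetical comparison: rehabbing conventional floor space at, say,
# $15,000 per 42U rack position (illustrative number only).
rehab_cost_per_rack = 15_000
rehab_cost_per_u = rehab_cost_per_rack / 42
print(f"Rehab (hypothetical): ${rehab_cost_per_u:,.0f} per 1U")   # ~$357 per 1U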

Next up is Intel building 8 huge solar power arrays, 3 of which are in Oregon. That’s a bit surprising, given the tremendous amount of dirt cheap hydro power Oregon touts, but there’s really no better use for the wasted space of a flat building roof.

Vern


Data center bits and pieces ….


I was just reading a few interesting things about the NSA Utah data center. Out of a 1 million sq ft facility, only 100,000 sq ft of actual data center space? I can’t wait to see the PUE for this sucker :).
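For reference, PUE is simply total facility power divided by the power delivered to the IT equipment. A tiny sketch, with wattage figures that are made up purely for illustration and not actual numbers for the Utah facility:

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only.
print(pue(total_facility_kw=65_000, it_equipment_kw=40_000))  # 1.625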

After reading about low latency financial system access, I’m sure this would provide an unfair advantage to microsecond-fast electronic traders. I just have no idea what you’d ever do about it (mandate a minimum ping rule?).

Reading today about Intel’s low power microserver, I just can’t see this being anywhere near as efficient as virtual machines, and since when is whacking 2 cores off a 4 core CPU to reduce power consumption earth-shattering news? Gain the power saving without crippling the CPU and then I’ll be impressed.

Vern, SwiftWater Telecom

And the data center bozo of the week is….


(cue the drum roll and the gorgeous blonde in the slinky dress to hand me the envelope) Yes, it’s the biggest mouth in infotainment, the financial industry equivalent of Daffy Duck, Jim Cramer! I’d have used the village idiot quip but I got beat to it by the guys at Data Center Knowledge.

In an overly simplistic analysis (and probably a reading of goat entrails), Cramer has concluded that since servers based on Intel’s Nehalem CPU can replace 8 older servers, data centers everywhere are going to end up with tons of empty space and the whole industry will collapse. I can only conclude he has a secret deal to shill for Intel.

First of all, the claim of 8 older servers replaced sounds like something read off a marketing brochure, sort of like “4 out of 5 dentists recommend”. I don’t think I’d try replacing 8 of my servers with one Nehalem-based server. Of course, it probably sounds good to people who don’t know anything about tech.

Of course, this also ignores that every bump in server technology has only increased the demand for more servers and hence, more data center space. Recent reports are that data center demand is growing at 3x the rate of supply. The only data centers I’m aware of anywhere with tons of empty floor space to burn are the older facilities that just ran out of power capacity, leaving them with floor space and no more power.

I’m sure all the big boys that have been building data center space by leaps and bounds are going to be so disappointed when Intel’s wonder chip makes these huge facilities obsolete overnight.

Jim, in recognition of this award, we are pleased to present you with a multi-colored wig and a squeaky red nose. Wear them with pride!

Vern, SwiftWater Telecom


Greening the data center: The Atom-ic Age?


Today I’ve been reading about servers based on the low power Intel Atom CPU for the data center. I think it’s an idea being rendered obsolete by virtualization.

The Intel Atom CPUs are pared-down, modestly performing chips, optimized for power efficiency and found in such things as netbooks (mini laptops) and bottom-end desktop PCs (anyone remember the AMD Geode?). The push right now is to build these into energy-saving servers for green data center use. While I do agree that these servers could be a better replacement for really old equipment without energy-saving features, the niche they’re aiming for is rapidly disappearing.

The old scheme of using a single physical server to serve one lightly loaded function is dead. Consolidation of services has been the watchword for some time; no longer do we have separate machines for web, DNS, email, etc. Very few machines in the modern data center are dedicated to serving a single low traffic site, and virtualization is only serving to accelerate the trend. The problem with Atom-based machines is that they are overkill for running a low traffic dedicated site but underpowered to do an efficient job of running high utilization environments such as virtual servers.

It doesn’t matter how power saving the hardware is; what matters in efficiency is simply the amount of work accomplished for a given amount of power. Unfortunately, no collection of single servers operating at low levels of utilization, no matter how energy saving they are, is ever going to match a modern energy efficient server running at high utilization.
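A rough way to see why utilization trumps nameplate power draw, in work-per-watt terms. Every number below is hypothetical, chosen only to illustrate the comparison:

# Work accomplished per watt: many low-utilization, low-power boxes vs.
# one high-utilization modern server. All figures are hypothetical.
def work_per_watt(units_of_work: float, watts: float) -> float:
    return units_of_work / watts

# Ten Atom-class servers, each drawing ~30 W but sitting mostly idle.
atom_fleet = work_per_watt(units_of_work=10 * 5, watts=10 * 30)    # ~0.17

# One modern server drawing ~300 W but running at high utilization
# with many virtual machines packed onto it.
virtualized_host = work_per_watt(units_of_work=120, watts=300)     # 0.4

print(f"Atom fleet:       {atom_fleet:.2f} work units per watt")
print(f"Virtualized host: {virtualized_host:.2f} work units per watt")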

The second issue is that the Atom server idea does little to reduce space and complexity issues in the data center. Would you rather deal with the cabling and airflow issues of an entire rack of underutilized servers or with a single reasonably high powered server consolidating and virtualizing them? I know which one I’d prefer.

Relegate the Atom to applications where you MUST have a separate machine regardless of utilization, but don’t load up your data center with these turkeys.

Vern, SwiftWater Telecom


Greening the data center: Listen to your servers talk …


This morning, I’ve been reading about using input from server temperature sensors to adjust cooling system response. I think this idea goes right in some ways and wrong in others.

On the plus side, you can never have too much data about the data center’s environmental conditions. Using the sensors already built into the servers makes perfect sense, as does using standard SNMP to collect the information. Whether using this input to make fine-grained control responses to compensate for an inefficient cooling system is the most effective answer to the cooling problems listed, however, is questionable.
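As a sketch of the data-collection side only (not the control loop), polling a server’s temperature sensor over SNMP can be as simple as wrapping net-snmp’s snmpget. The hostname, community string, and OID below are placeholders; the right OID and the scaling of the value it returns depend on the server vendor’s MIB.

import subprocess

def poll_snmp_sensor(host: str, community: str, oid: str) -> float:
    """Fetch a single sensor reading over SNMP using net-snmp's snmpget.

    host, community, and oid are placeholders -- the correct OID (and the
    scaling of the value it returns) depends on the server vendor's MIB.
    """
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

# Hypothetical usage: poll an inlet temperature sensor on one server.
# reading = poll_snmp_sensor("server01.example.com", "public", "1.3.6.1.4.1.2021.13.16.2.1.3.1")
# print(f"Sensor reading: {reading}")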

The first point they raise is hot spots at the top of cabinets in a raised floor cooling environment. In raised floor cooling, the cool air is pumped under the raised floor and up to the rack. This is counterproductive, since it requires forcing the air against its natural tendency to sink. Feed the cold air from above and not only do you eliminate the uneven cooling, you also spend less energy pumping the air around, no heroic measures required.

The next point is that they assume an environment with no hot aisle containment, hence the worry about excess ventilation driving the hot exhaust air up and over the top of the cabinet and back into the cold aisle.

If you’re going to use hot aisle/cold aisle, it’s an exercise in imperfection to attempt to separate hot and cold without physical containment. It’s a total waste of effort and a lot of unnecessary complication to control this by varying fan speeds. The easy and effective answer is to use a hot aisle containment system or, as I’ve said before, to use chimney-type cabinets and duct the hot exhaust air for perfect control and separation.

Finally, presuming you have efficient delivery and exhaust that works WITH the natural movement of the air and you maintain an appropriate reserve to supply the servers, the servers themselves will take care of adjusting their own air needs. This assumes that you’re running modern servers capable of adjusting their own fan speeds. It also means that the cabinet fans don’t need to be speed-adjusted at all, since they will never need to move more air than the servers themselves are putting through.

So, what does this all mean? Piling on complexity is no substitute for dumping an obsolete cooling design in favor of an efficient one. It’s as simple as using the natural movement of the air to get it where it’s needed and an easy, foolproof way to separate hot and cold.

Sometimes it really IS just that simple.

Vern, SwiftWater Telecom
