Today’s commentary comes from reading about the new University of Illinois supercomputer data center. I can’t argue with the impressive facility, but some of the stated design philosophy leaves a lot to be desired.
The thing that jumped out at me was the comment that the most common problem was designing the data center to be too flexible. From what I see of today’s data centers, this is exactly wrong.
It’s easy to design a perfectly optimized facility when you’re only supporting a fixed installation of one vendor’s equipment. All variables are fully characterized and it’s easy to provide only exactly what is needed. Save money on building the facility, and then you can afford to tear it out and replace it when the system it was designed to support is gone.
Unfortunately, that’s not the way the real world works. Data center servers are typically on a 3 year refresh cycle, and server technology can and does change radically in that kind of time. The changes in server technology also mean radical changes in power and cooling requirements. The landscape is littered with the relics of data centers out of power and out of cooling capacity because they weren’t designed to be flexible enough to accept the changes the future brought.
The next thought is that it would be nice to design IT equipment so it doesn’t need anything special from the building other than power, cooling, and a raised floor. As if there’s only one standard type of power, only one standard type of cooling, and heaven forbid that some of us believe there are more efficient designs than raised flooring. The fact is that building design is integral to efficient green cooling designs (I’ve written in previous posts about clerestory monitors and free air cooling).
Unless you’re building a special single purpose data center to support a single fixed system, I wouldn’t advise forgoing flexibility; otherwise, you’ll end up in a museum with the dinosaurs.
Vern, SwiftWater Telecom