This evening I’ve been reading a blog article about The Planet running tower cases in their data centers. I can’t see for the life of me how this makes sense.
It’s certainly true that tower case setups offer flexibility. There’s space for pretty much whatever add-in cards you could want, and plenty of room for ridiculous numbers of drives. Nor is it in doubt that there’s more room for airflow inside, making it easier to get the heat out even with a lousy cooling design.
On the other hand, with 1TB drives common and inexpensive, is there really a need for a dozen drive bays? Especially since the trend in the data center is well away from massive amounts of server-attached storage? Not to mention the amount of power lightly utilized drives waste. Not very green at all.
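To put a rough number on that waste, here’s a back-of-the-envelope sketch. Every figure in it is an assumption for illustration (idle draw per drive, bay count, electricity rate), not a measurement from any particular server:

```python
# Back-of-the-envelope sketch: a chassis full of lightly utilized drives
# still burns idle power 24/7. All figures below are assumptions.
IDLE_WATTS_PER_DRIVE = 8.0   # assumed idle draw of one 3.5" SATA drive
DRIVE_BAYS = 12              # assumed tower drive bay count
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.10          # assumed electricity rate, USD


def annual_idle_cost(drives=DRIVE_BAYS):
    """Yearly electricity cost just to keep mostly idle drives spinning."""
    kwh = drives * IDLE_WATTS_PER_DRIVE * HOURS_PER_YEAR / 1000.0
    return kwh * COST_PER_KWH


if __name__ == "__main__":
    # ~84 USD/year per server, before counting the cooling load those
    # watts add on top.
    print(f"12 idle drives: ~{annual_idle_cost():.0f} USD/year per server")
```

Small per box, but multiply by a few hundred towers and it’s real money and real heat for storage nobody is using.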
Since even the most compact 1U server configurations can be had with almost everything desirable in ports, controllers, and video, is there really a need for a large number of expansion slots anymore? It seems to me that most of the upgrades that would go into such a system would be absolutely useless in a server (who needs a gamer video card in a co-located server?).
What is true is that there is a massive difference in space consumed between towers and rackmount cases. The Planet seems to think that’s good, since low density means less heat and less power. Unfortunately, it also means less revenue for the facility. Operating inefficiently because we don’t want to bother with a good cooling design is a lousy tradeoff.
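The density gap is easy to sketch. The figures below are assumptions (a standard 42U rack, and a guess that a mid-tower sitting on an open shelf eats roughly 7U worth of rack space), not measurements of The Planet’s actual layout:

```python
# Rough density comparison: servers per standard rack footprint.
# Figures are assumptions for illustration only.
RACK_UNITS = 42      # a common full-height rack
U_PER_1U_SERVER = 1
U_PER_TOWER = 7      # assumed: a mid-tower on a shelf consumes ~7U


def servers_per_rack(u_per_server):
    """How many servers fit in one 42U rack footprint."""
    return RACK_UNITS // u_per_server


rack_density = servers_per_rack(U_PER_1U_SERVER)   # 42 servers per rack
tower_density = servers_per_rack(U_PER_TOWER)      # 6 servers per rack
# Same floor tile, same power circuits and cabling runs to provision,
# but the rackmount layout yields 7x the billable servers.
```

Under those assumptions, every rack of towers forgoes roughly six racks’ worth of revenue to avoid solving a cooling problem that dense facilities solve routinely.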
The biggest nail in the coffin for towers in the data center is this: how do you control cooling airflow? With towers on open racks, it would be virtually impossible to separate the hot exhaust air from the cold intake air. It’s the nightmare of anyone who cares in the slightest about greening data centers.
There was some economic justification for doing this 10-15 years ago. It’s 2009; time to relegate this long-obsolete data center design to ancient history.
Vern, SwiftWater Telecom