Tag Archives: the planet

Saturday data center tidbits


Tonight, from the Twitter stream, we pull The Planet’s server speed building challenge at the SxSW conference. Snapping on the CPU heat sink is the easy part; the mark of true greatness is being able to seat a PGA CPU in the socket in a hurry without bending any pins :).

There’s nothing faster to set up than one of our cloud powered virtual servers! Check out our new cloud computing products today!

Vern



Data center electrical safety, avoid the flambé …


Today I’ve been thinking about the recent rash of electrical data center fires and failures, from The Planet’s explosion in Texas to the Fisher Plaza fire to the recent Omgeo fire in Boston (I ran across some more details about the Fisher Plaza fire). In this post I’ll be discussing how to deal with preexisting electrical infrastructure in your data center facility.

Unless you’re building a facility from scratch or doing a bare walls gut out of a building, chances are very good that you’ll be dealing with preexisting electrical facilities. This is especially true if you lease space in a building. So, how can you identify what can be safely reused and what should go on the junk pile?

Step one, throw away the aluminum. Aluminum wire and aluminum bus bars (such as the one that failed at Fisher Plaza) tend to have problems with loosening connections as well as oxidation (if they’re not protected properly). These two problems can result in wasted energy (not what you want for a green data center), overheating, and even fires from electrical arcs, especially if the conductor is carrying thousands of amps of current. Copper conductors, on the other hand, while more expensive, do not suffer from either of these problems.
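
To put rough numbers on how much a bad joint wastes, here’s a quick back-of-the-envelope sketch. The resistance figures and feeder current below are illustrative assumptions, not measurements from any particular installation:

    # Rough I^2*R loss estimate for a degraded bus bar or lug connection.
    # All resistance and current values are illustrative assumptions only.

    def joint_loss_watts(current_amps, joint_resistance_ohms):
        """Heat dissipated in a connection: P = I^2 * R."""
        return current_amps ** 2 * joint_resistance_ohms

    feeder_current = 2000       # amps on a large feeder (assumed)
    sound_joint = 0.00001       # ~10 micro-ohms, a clean, tight joint (assumed)
    loose_joint = 0.0005        # ~0.5 milliohm, a loosened/oxidized joint (assumed)

    print(joint_loss_watts(feeder_current, sound_joint))   # ~40 W
    print(joint_loss_watts(feeder_current, loose_joint))   # ~2000 W of waste heat at one joint

A couple of kilowatts of continuous heat concentrated in one connection is exactly the kind of hot spot that ends in an arc.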

If you are going to run any aluminum, inspect every connection before energizing it and establish a maintenance program to retighten those connections periodically.

Second, ditch any panels and overcurrent protection devices made before the late 1960s. I’ve seen “fused neutral” panels from the early 1900s (a serious safety hazard) and circuit breakers from the late 1930s still in service in commercial buildings. When circuit breakers get very old, not only may they fail to trip, but the lubrication inside the breaker may harden up, locking the breaker on. I’ve actually seen this happen: a Square D “992” breaker failed to trip on a fault and basically pumped a full 600A entrance into a 15A circuit. The result was the destruction of the panel bus bars and much smoke, as well as a service outage.
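
To see why a stuck breaker is so destructive, compare the heat generated in a small branch circuit conductor at its rated load versus at full fault current. The wire resistance here is an approximate figure for small branch wiring, used purely for illustration:

    # Heating in a branch circuit conductor: normal load vs. a breaker that won't open.
    # The resistance per foot is an approximate value (assumed) for 14 AWG copper.

    def watts_per_foot(current_amps, ohms_per_foot):
        """Resistive heating along the conductor, P = I^2 * R per foot."""
        return current_amps ** 2 * ohms_per_foot

    ohms_per_foot = 0.0025   # roughly 2.5 milliohms per foot (approximate)

    print(watts_per_foot(15, ohms_per_foot))    # ~0.6 W/ft at the rated 15 A
    print(watts_per_foot(600, ohms_per_foot))   # ~900 W/ft if the breaker never trips
    # Heating scales with the square of current: (600/15)^2 = 1600x the heat.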

Third item is transformers. All oil-cooled transformers belong outside the building, period. Transformer explosions and fires are not good anywhere, but the last place you want them is anywhere near your data center equipment. Older dry type transformers should be inspected for signs of overheating. I’ve seen transformers in live operation with temperatures of 150 deg F on the outside of the case. Transformers that have been abused this way are much more likely to suffer insulation breakdown. If it’s suspect, replace it before you end up with a service outage or worse.
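
How fast does overheating eat a transformer? A common rule of thumb is that insulation life roughly halves for every 10 deg C of sustained operation above its rating (the exact factor varies with insulation class). Here’s a sketch using that rule; the temperatures are illustrative assumptions, not nameplate data:

    # Rough insulation life estimate using the "life halves per ~10 C rise" rule of thumb.
    # Overtemperature values below are illustrative assumptions.

    def remaining_life_fraction(overtemp_c, halving_interval_c=10.0):
        """Fraction of rated insulation life at a sustained rise above rated temperature."""
        return 0.5 ** (overtemp_c / halving_interval_c)

    print(remaining_life_fraction(0))    # 1.0   -> full rated life at rated temperature
    print(remaining_life_fraction(20))   # 0.25  -> run 20 C hot, it ages ~4x faster
    print(remaining_life_fraction(30))   # 0.125 -> run 30 C hot, ~8x faster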

Finally, replace obsolete wire types. Type R (rubber with a flammable canvas outer jacket), TW, and THW are all still commonly found in buildings. In addition to not being up to modern current-carrying standards, any of these may exhibit degradation of the insulation. Once again, it’s just a catastrophe waiting to happen.

This isn’t an exhaustive list but if you follow the recommendations here, you’ll avoid the biggest electrical traps that have plagued data centers for the last few years.

Vern, SwiftWater Telecom

data center facilities engineering

Greening the data center: Out with the old …


This evening I’ve been reading a blog article about The Planet running tower cases in their data centers. I can’t see for the life of me how this makes sense.

It’s certainly true that tower case setups offer flexibility. There’s space for pretty much whatever add-in cards you could want and plenty of room for ridiculous numbers of drives. There’s also no doubt that there’s more room for air flow inside and that it’s easier to get the heat out of them with lousy cooling.

On the other hand, with 1TB drives common and inexpensive, is there really a need for a dozen drive bays? Especially since the trend is well away from massive amounts of server-attached storage in the data center? Not to mention the amount of power lightly utilized drives waste. Not very green at all.
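
A rough estimate of that waste, where the per-drive idle wattage and the counts are assumptions for illustration only:

    # Ballpark power wasted by mostly-idle drive bays across a floor of tower servers.
    # Wattage and counts below are illustrative assumptions.

    idle_watts_per_drive = 8       # typical-ish idle draw for a 3.5" drive (assumed)
    extra_drives_per_server = 10   # bays populated "because they're there" (assumed)
    servers = 500                  # servers on the floor (assumed)

    wasted_watts = idle_watts_per_drive * extra_drives_per_server * servers
    wasted_kwh_per_year = wasted_watts * 24 * 365 / 1000

    print(wasted_watts)            # 40,000 W of continuous draw
    print(wasted_kwh_per_year)     # ~350,000 kWh/year, before you even count the cooling load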

Since even the most compact 1U server configurations can be had with almost everything desirable for ports, controllers, and video, is there really a need for large numbers of expansion slots anymore? It seems to me that most of the upgrades that would be put in such a system would be absolutely useless in a server (who needs a gamer video card in a co-located server?).

What is beyond question is that there is a massive difference in the space consumed by towers versus rackmount cases. The Planet seems to think that’s good, since low density means less heat and less power. Unfortunately, it also means less revenue for the facility. Operating inefficiently because we don’t want to bother with a good cooling design is a lousy tradeoff.
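
To put the space difference in perspective, here’s a quick comparison. The footprint and server counts are rough assumptions for illustration:

    # Rough density comparison: tower cases on shelves vs. 1U servers in a 42U rack.
    # All footprints and counts are rough assumptions.

    towers_per_rack_footprint = 8     # towers you can shelve in one rack's floor space (assumed)
    servers_per_42u_rack = 40         # 1U servers per rack, leaving room for switches/PDU (assumed)

    print(servers_per_42u_rack / towers_per_rack_footprint)  # ~5x the servers in the same floor space

Five times the billable servers in the same footprint is revenue a facility is leaving on the table by sticking with towers.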

The biggest nail in the coffin for towers in the data center is this: how do you control cooling air flow? With towers on open racks, it would be virtually impossible to separate the hot exhaust air from the cold intake air. It’s the nightmare of anyone who cares in the slightest about greening data centers.

There was some economic justification for doing this 10-15 years ago. It’s 2009; time to relegate this long-obsolete data center design to ancient history.

Vern, SwiftWater Telecom

data center, web hosting, Internet engineering