Tag Archives: green data center

Show and tell: Pictures from the #datacenter


I thought I’d do something a little different today and put up some pictures and info about our green data center and cloud computing infrastructure. Enjoy!

Data center specs:

Power: 48VDC 800A (primary), 130VDC (available), 208Y/120VAC (available)
Power density: 24 kW per cabinet (quick current check below the list)
Cooling: direct free air cooling with conventional backup
Power backup: multiple battery strings with greater than 8 hour run time
Cabinet type: vertical chimney
Lighting: T8 fluorescent with electronic ballast
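Just to put that power density in perspective (assuming the 24 kW figure is a per-cabinet design maximum rather than a typical load), here's a quick back-of-the-envelope current check in Python:

# Per-cabinet current at the listed power density (assumed design maximum).
power_w = 24_000      # 24 kW per cabinet, from the specs above
voltage_dc = 48       # primary 48VDC plant

current_a = power_w / voltage_dc
print(f"{current_a:.0f} A per cabinet at {voltage_dc} VDC")  # 500 A

That's why DC distribution hardware uses such heavy conductors, which comes up again in the DC power posts below.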

An average set of cabinets:

(These are Rittal vertical chimney cabinets.)

The heart of the energy efficient 800A 48VDC power plant:

A 48VDC distribution panel:

(This is a redundant A-B dual feed panel that feeds subpanels in individual cabinets.)

A Peco II 48VDC inverter:

(The inverter allows us to run the small amount of equipment that’s only available with AC power from the same 48VDC power plant.)

Cloud computing servers:

(These are an example of the servers supporting our cloud computing operations: dual AMD Opteron, 1U, half-length cases.)

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Tuesday data center tidbits: #cloudcomputing negative hype, Yahoo data center cooling.


First up today is reading about the security PR problem of cloud computing. This just goes to show that hype can be spread both ways. Don't like something? Nothing spreads FUD like the Internet: throw out a lot of vague "well, it COULD happen!" and watch the chaos. The fact is that the largest risk to anyone on a cloud computing virtual machine is exactly the same as it is for a dedicated server. Bad system administration practices are still the easiest way to violate a server's security.

Next is reading about the Yahoo computing “coop” data center design being the shape of things to come. Using clerestory monitors for cooling is more like shades of the past (these are clerestory monitors, NOT cupolas). The clerestory monitor has been around for 150+ years on the New England textile mills, among others (I’ve written previously on the blog about repurposing former mill space for data centers).

Last is the piece about the Yahoo data center in upstate New York. The takeaway point from this is that traditional data center raised floor takes 3x the amount of fan horsepower because the air flow is so lousy. Why in the heck would anyone these days put raised floors in a data center?

Email or call me or visit the SwiftWater Telecom web site for cloud computing services and green data center services!

Vern


Friday data center tidbits.


I was just reading today about why it's green to repair IT equipment. The sad fact is, most data center gear is not repairable at any level lower than swapping major components such as full boards, and that's only going to get worse with newer manufacturing techniques (you trade off reduced cost and reduced resource usage up front for not being able to repair it). Getting the most life out of servers and other data center gear is certainly a smart thing to do, but when it reaches the point where the cost of operating it isn't paying for itself, it's time for it to take a hike.

Next up is the piece about cloud computing lifting business out of the muck. The key point here for me is that IT departments are under pressure to provide rapid responses. Like cloud computing or not, waffle and fudge over vague cloud security boogeymen, preach the evil conspiracy of cloud computing (Richard Stallman), but this is going to happen and it’s going to pick up steam. Run with the bulls or end up road pizza.

If you want to stay ahead of the bulls, email or call me or visit the SwiftWater Telecom web site for cloud computing services!

Vern


Friday data center tidbits.


First up today is yet another story about data center AC vs DC power efficiency, with the usual sprinkling of BS. Required cable size is driven by current, and for a given load the current is set by the voltage, not by whether the power is AC or DC. Higher voltage means lower current, which means smaller conductors. The idea that the larger cables used in lower voltage DC power are more susceptible to arcing because of their size is ridiculous. Any well designed data center DC power plant will be just as safe as any AC power system.
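To put some rough numbers on that, here's a quick sketch; the 10 kW load is purely a made-up figure for illustration, and power factor is ignored on the AC side to keep the comparison simple:

# Current drawn by the same load at different distribution voltages.
# Conductor ampacity is driven by current, not by AC vs DC.
load_w = 10_000  # hypothetical 10 kW load, purely for illustration

for label, volts in (("48VDC", 48), ("208VAC single phase", 208), ("400VDC", 400)):
    amps = load_w / volts  # power factor ignored for the AC case
    print(f"{label:>20}: {amps:6.1f} A")

Same load, same physics: raise the voltage and the current (and the copper) shrinks, whether the power is AC or DC.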

Second up is the story about Horizon Data Center Solutions' new expansion. The tidbit here is not that they're expanding, but that they're measuring the facility in megawatts worth of space, not square feet worth of space. Physical space in the data center is becoming a very secondary measurement; it's all about the power now.

Email or call me or visit the SwiftWater Telecom web site for green data center DC power engineering, construction, and operation that saves money without the hype.

Vern


Monday data center tidbits.


First up today is the story pondering whether green requirements will hurt co-location providers. There are a couple of things to look at here. First, co-location facilities can't cut "their" power usage unilaterally. You can't expect co-lo facilities to cut power the way other facilities can, because they don't have control over the equipment housed there (metered power is the best way to encourage green behavior). Second, claiming that co-los aren't reliable because one had 2 generators fail (but kept the customers running from the 3rd) is ridiculous. That's why you have N+1 (or N+2 in this case) for one of the most fallible pieces of equipment in the data center. It's a little disconcerting that they had to go that deep into the redundancy, but even the big boys have trouble with generators (witness IBM and Air New Zealand last year).
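Here's a rough sketch of why the extra generators matter; the 5% failure-to-start figure is just an assumed number, and it treats failures as independent, which common-mode problems like bad fuel or shared controls can violate:

from math import comb

def p_at_least(n, k, p_fail):
    # Probability that at least k of n generators start, failures independent.
    p_start = 1 - p_fail
    return sum(comb(n, i) * p_start**i * p_fail**(n - i) for i in range(k, n + 1))

p_fail = 0.05  # assumed 5% chance any single generator fails to start
print(f"2 generators, need 1 running: {p_at_least(2, 1, p_fail):.4%}")
print(f"3 generators, need 1 running: {p_at_least(3, 1, p_fail):.4%}")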

Next up is the post mortem on the Google AppEngine power failure. It's bad enough to have the power failure in the first place, but not having anyone who knew enough about the system to make a sensible decision to fail over to backup is just silly. When in doubt, you can never go wrong failing over to backup (unless your backup dies too).

Stay tuned this week for an article on diagnosing cloud computing failures and reducing the impact of them!

Email or call me or visit the SwiftWater Telecom web site for cloud computing services and green data center services.

Vern


Saturday data center tidbits.


Up today is a piece lamenting the state of Ethernet. There's no argument that massive data center loads like Facebook's need all the network capacity that can be thrown at them, and 10Gb Ethernet is almost certainly not enough. What struck me was the part about the college network's wiring closets running at 98 deg F and the claim that there was nothing wrong with the HVAC, the switches were just using too much power. Number one, this is just insanely bad network design. Number two, how do you design for what you wished the equipment would draw and then blame the equipment when reality strikes? They say they can't approve purchases for needed networking equipment while the wiring closets are running at 98 deg F. This is the biggest case of bass-ackwards engineering I've ever seen.
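For a sense of scale, here's a quick sensible-heat estimate; the switch wattage and allowable temperature rise are assumed numbers for illustration, not figures from the article:

# Rough airflow needed to carry away a closet's heat load:
#   CFM ~= 3.16 * watts / delta_T_F   (standard sensible-heat rule of thumb)
switch_load_w = 2_000   # assumed total draw of the closet's switches
delta_t_f = 20          # assumed allowable temperature rise, degrees F

btu_hr = switch_load_w * 3.412                 # watts to BTU/hr
cfm = 3.16 * switch_load_w / delta_t_f
print(f"{btu_hr:.0f} BTU/hr of heat, roughly {cfm:.0f} CFM of airflow to remove it")

If the HVAC was never sized for anything like that airflow, the problem is the design, not the switches.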

Email or call me today for data center and network engineering that isn’t in denial.

Vern


More green data center DC power myths.


I’ve just been reading about the value of DC power in the data center. In this post, I’m going to correct some of the strange myths and egregious errors that are spread about DC power.

“But the downside is that DC power can require much larger wires to carry the current, thus creating power buildup and arcing that can endanger IT equipment and staff.”

Where do I start on this one? Power does not “build up” in the large conductors of DC power systems, any more than power “builds up” at your unused AC outlet. Not using the DC power doesn’t mean it’s going to build up, overflow, and arc. Mark this one as completely ridiculous.

“.. added that moderately high-voltage DC power poses some safety concerns, where the power can build up and arc.”

Once again, DC power doesn’t “build up”. Any high voltage distribution system that’s inadequately insulated has the potential to do this whether it’s AC or DC. I’ve seen catastrophic failures in common 240VAC and 480VAC circuits that resulted in melted bus bars, holes burned in armored cable, and total destruction of equipment (look at the Fisher Plaza fire where they destroyed 4000A aluminum bus bars).

What does affect the tendency to arc is jacking the voltage up, and that’s the same whether it’s AC or DC; there’s no great mystery to that. There’s a tendency to want to use high voltages to reduce wire size (higher voltage = less current = smaller wire). I myself much prefer to use long-time industry standard 48VDC power, which is touch safe and has an extremely small arc-over distance.

“”Four-hundred volts DC may be more dangerous than 400-volts AC,” he said.”

In a properly constructed DC power system, there’s no appreciable difference in safety between AC and DC, except for some possible difference in what happens when you come in contact with the energized conductors. The point is really moot since you do NOT want to be coming in contact with either live 400VDC or 400VAC. Either way, someone is getting hurt.

Is there any safety hazard in the DC power plant? Yes, the battery string (exactly the same hazard that a battery equipped AC UPS has as well). Short the unfused battery leads or drop an uninsulated tool across the bus bars and you’re going to be missing some wire or a tool, completely. Batteries can put out 9,000+ amps of current in a fraction of a second in a short condition.

Finally, there’s the efficiency issue. The comparison between a DC power plant and a conventional double conversion (or “online”) UPS isn’t in doubt; the DC plant comes out ahead. The UPS systems that fare better against DC are the type with “eco mode” (the old “standby” UPS). In normal operation, the eco mode UPS powers the load directly from the AC line, so it doesn’t use the power hogging inverter. Of course, this means the data center equipment has to ride through the switchover to battery, and the efficiency looks just as bad as a double conversion UPS when it’s running from battery.
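A simple way to see the comparison is to multiply out the conversion stages. The stage efficiencies below are assumed round numbers for illustration, not measurements of any particular product:

# End-to-end efficiency is the product of each conversion stage's efficiency.
# All stage figures here are assumed, round-number examples.
chains = {
    "double conversion UPS (AC->DC->AC)": [0.95, 0.94],  # rectifier, then inverter
    "48VDC plant (AC->DC only)":          [0.96],        # rectifier only
    "eco mode UPS, normal operation":     [0.99],        # load fed nearly straight from AC
    "eco mode UPS, running on battery":   [0.94],        # inverter back in the path
}

for name, stages in chains.items():
    eff = 1.0
    for stage in stages:
        eff *= stage
    print(f"{name:36s} ~{eff * 100:.0f}% efficient")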

Call or email me or visit the SwiftWater Telecom web site today and we’ll design a green DC power plant for your data center that will be SAFE and EFFICIENT!

Vern


In the virtual cloud data center, greenwashing with PUE.


This afternoon I’ve been reading about Elastra making its data center cloud more efficient. I’m rather skeptical that metrics meant to be applied to physical servers and facilities have any meaning on virtual ones.

The basic premise of the idea isn’t a bad one. The green benefits of the cloud come not only from more efficient use of the physical servers and infrastructure, but also from efficient use of the cloud resources. Wasting cloud resources by overprovisioning memory for a virtual machine, for example, wastes resources in the underlying physical server, which, in turn, wastes data center infrastructure, costing the operator more money. Knowing exactly what an application or combination of applications really needs is a must; otherwise you might just as well stick with a data center full of energy hogging, under utilized dedicated physical servers.
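Here’s a toy example of how overprovisioned memory strands physical capacity; all the sizes are made up for illustration:

# How many VMs fit on a host depends on what you allocate, not what you use.
host_ram_gb = 128
allocated_per_vm_gb = 16   # assumed requested size
actually_needed_gb = 6     # assumed real working set

vms_per_host = host_ram_gb // allocated_per_vm_gb
stranded = (allocated_per_vm_gb - actually_needed_gb) / allocated_per_vm_gb
print(f"VMs packed per host at the requested size: {vms_per_host}")
print(f"RAM stranded by each over-sized VM: {stranded:.0%}")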

The problem I have with the Elastra approach is the metrics presented as criteria for choosing the virtual configuration, such as PUE. PUE (power usage effectiveness) is simply the ratio of the total power a data center consumes to the amount actually used to operate IT equipment. I have issues in general with what PUE is measuring and how the metric is being used, as I’ve written before, but I really have a problem with it in this context.
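For reference, the metric itself is just a ratio (the facility and IT numbers below are made up):

# PUE = total facility power / IT equipment power (facility-wide, at a point in time).
total_facility_kw = 1_200   # assumed: IT load plus cooling, lighting, and losses
it_equipment_kw = 800       # assumed: servers, storage, network

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")   # 1.50, and it moves as weather and workload move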

The problem is, PUE is a data center wide metric. I don’t think it’s possible to single one server out of a data center for any kind of an accurate PUE number (unless you just have one server in the data center). On top of that, how do you even come up with a PUE or even a wattage for a virtual machine that has no physical existence? Add to this the fact that PUE isn’t a static number but constantly changes as environmental conditions and server workloads change.

As I’ve said, I’m far from being against accurately provisioning cloud services for the workload to be run on them; I’m against the misapplication of green metrics intended for physical facilities to virtual machines that can’t possibly produce a meaningful number. Of course, PUE looks good on the press release, but in my book, that’s greenwash.

Vern


The green data center, wireless sensors, friend or foe.


Today I have a special request to talk about wireless sensors in the data center. Use of sensors throughout the data center is increasing dramatically but I think there are some serious considerations about using wireless ones.

With the current full scale push for maximum energy efficient green data centers, the addition of sensors to the facility to allow fine grained control is increasing by leaps and bounds. This fine grained control allows data center infrastructure such as cooling systems to be adjusted optimally for spot variations.

On the positive side, wireless sensors are ridiculously easy to deploy in the data center, requiring no cable work. They are also generally inexpensive and very flexible, in that they can easily be moved as equipment configurations change.

The problem that has been raised with wireless sensors is their security implications. In this case, I’m not talking about data leaking out (it’s unlikely to be a state secret what the temperature at the top of cabinet #107 is) but rather data leaking in.

Imagine for instance a malicious person or persons injecting false data into the sensor data stream. Presuming you’re operating a facilities management system that controls the data center infrastructure from the data provided by the wireless sensors, you are now allowing outside entities some degree of control over the facility. Feed false data from a small selection of sensors and you can now fool the facility management system into allowing a single cabinet in the middle of a thousand to overheat. Fine grained control indeed, just not the way you intended it.

Easier than that would be a simple blanket jamming of the wireless sensor frequencies. Now your system has not only lost control of the facility by virtue of being blinded, but how is it going to react to the loss of all sensors? Could it panic and shut the facility down? Could it place infrastructure equipment in potentially dangerous operating conditions (I’m reminded of the DoD test where they simulated a hacking attack and remotely destroyed a running diesel generator)?

In my opinion, wireless sensors are fine for informational purposes but no facility control system, human or automated, should be making operational decisions based on them. Use hardwired sensors on a dedicated, partitioned network and you can be sure you’re getting the real scoop on what’s going on in your green data center.
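As a rough sketch of that policy (the function, thresholds, and readings below are all invented for illustration, not from any real facility system), treat wireless readings as advisory only and make control decisions from the hardwired sensors, failing safe if they disappear:

def cooling_action(hardwired_temps_f, wireless_temps_f, last_action):
    if not hardwired_temps_f:
        # Lost the trusted sensors entirely: fail safe rather than guess from wireless.
        return "FULL_COOLING"

    hottest = max(hardwired_temps_f)

    # Wireless readings are informational only: flag disagreement for a human.
    if wireless_temps_f and abs(max(wireless_temps_f) - hottest) > 10:
        print("warning: wireless sensors disagree with hardwired sensors")

    if hottest > 80:
        return "INCREASE_COOLING"
    if hottest < 68:
        return "DECREASE_COOLING"
    return last_action

# Spoofed or jammed wireless data can't force a bad decision:
print(cooling_action([74, 76], [98, 99], "HOLD"))  # prints the warning, returns HOLD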

Vern


Saturday data center tidbits.


Welcome to the first tidbits post of 2010!

The first item comes from this post about abandoning old data center models in the quest for green-ness (greenocity?). I do agree that the rate of innovation will increase in 2010 but that doesn’t mean that everything done before has to be abandoned wholesale. Pick the best of the old, integrate with the new, and don’t reinvent the wheel just because it’s last year’s wheel.

Second comes an article on the 7 deadly sins of IT managers. The one that struck me was sin #4. How many data center failures did we see in 2009 where the data center provider had no idea they had a major outage until customers screamed?

Vern, SwiftWater Telecom