Friday data center tidbits: ghost servers, #datacenter outages, and more!


First up is a piece about what does and doesn't work in a green IT strategy. The thing that stood out for me was:

“Data center audits inevitably turn up servers with no connections to network cables that remain turned on.”

Anyone who disconnects a server from the data center network and leaves it powered up needs a good swat to the back of the head.
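One way those ghost servers turn up in an audit is by flagging hosts that are powered on but report no link on any network interface. Here's a minimal sketch of that check, assuming a Linux host with sysfs available; the script and its messages are purely illustrative, not any particular audit tool.

    #!/usr/bin/env python3
    # Illustrative ghost-server check: a powered-on Linux box whose NICs all
    # report no active link is a candidate for "still drawing power, doing
    # nothing." Reads interface state from sysfs; run locally on each host.
    import os

    def has_any_link():
        """Return True if any non-loopback interface reports link up."""
        for iface in os.listdir("/sys/class/net"):
            if iface == "lo":
                continue  # skip loopback
            try:
                with open(f"/sys/class/net/{iface}/operstate") as f:
                    if f.read().strip() == "up":
                        return True
            except OSError:
                continue  # virtual or odd interfaces without operstate
        return False

    if __name__ == "__main__":
        if has_any_link():
            print("At least one NIC has link; probably not a ghost.")
        else:
            print("No active link on any interface: possible ghost server.")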

Next up is a post from James Hamilton about PUE. As much as I've talked here about the flaws of PUE, it's certainly of some use as an internal metric. The big problem with PUE comes from trying to compare different data centers with it, or handing out official superiority awards based on it (Energy Star for data centers). It's probably the most misused metric ever invented.
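For reference, PUE is just total facility power divided by IT equipment power. The quick sketch below uses made-up numbers to show why the ratio works fine as an internal trend line but says nothing about how much useful work the IT load actually does, which is exactly what gets lost when facilities are ranked against each other on PUE alone.

    # Illustrative only: PUE = total facility power / IT equipment power.
    def pue(total_facility_kw, it_equipment_kw):
        return total_facility_kw / it_equipment_kw

    # Two hypothetical sites with identical PUE but very different loads:
    site_a = pue(total_facility_kw=1500, it_equipment_kw=1000)  # 1.5
    site_b = pue(total_facility_kw=150, it_equipment_kw=100)    # 1.5

    # Same ratio, yet site B could be doing far less useful work per watt;
    # PUE measures facility overhead, not the efficiency of the IT load.
    print(site_a, site_b)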

From the “bozo is contagious” file, we have the recent Bluehost data center outage in Provo, UT. Kudos to Bluehost for actually having a power backup system that worked; squeaky red noses to the local telecom carrier for losing not only Bluehost's Internet connections but phone service to the whole city as well.

Vern


4 responses to “Friday data center tidbits: ghost servers, #datacenter outages, and more!”

  1. Just a side note on how inefficient data centers are, largely as a result of people losing track of what's in them. I spent some time analysing the age of kit in some large enterprise DCs and was somewhat disappointed to find that typically 25% of the space and power consumed by a DC went to kit that was >4 years old (i.e. 10% or less of the performance of current kit).

    One thing that bugs me on green IT is moving GIFs: these chew up large chunks of power on users' laptops, but don't get accounted for in the green credentials of IT shops.

    • I think there's a tendency toward momentum: if it's in place and doing the job, leave it alone. On the other hand, I have gear in my own data center that is over 4 years old; with appropriate upgrades, it's actually quite efficient at doing the work. Maybe not space efficient, but I'm not willing to part with the cash to condense space (yet, anyway).

      Good point on the effect that site design has on the power load of the end user's computer. I think that might make a worthy article :).

      Vern

        • Old kit can be much more expensive than it appears. Space/compute/storage/bandwidth per unit roughly follows Moore's Law; power per unit doesn't improve quite as aggressively, but, depending on how you account for capital vs. operational costs, you can reduce your power consumption by, say, 50-80% on that 25%, which is significant from a sustainability perspective (although the new machines may cost more energy to build than they'd use over their life); there's a rough worked example at the end of this comment. There are also often significant software licence costs: Oracle, for example, is priced per CPU and so favours few, large boxes with fast CPUs (at least for the test machines). Old kit is usually more expensive to administer as well: each doubling of the number of machine types typically increases unit labour costs by around 20%. If you've got very few of a particular type, that makes them expensive to own (though probably not in a way that's obvious from the budget reporting).

        Tech refresh is normally a bit of a mess, but I concluded that variation of kit is one of the main drivers of the cost-of-ownership gap James Hamilton measured at 5-7x between the Google/MS estates and normal enterprise DCs.

        I calculated that the moving GIFs on my wife's Yahoo email account had a larger carbon footprint than her consumption of tea 😉 The carbon footprint of most systems is dominated by the desktop devices, so any real commitment ought to measure and manage the variable elements of the desktop costs.
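        To put rough, purely illustrative numbers on the refresh point above (the 25% share and the 50-80% savings come from this comment; the total load below is just an assumed figure):

            # Back-of-envelope refresh savings; all inputs are examples.
            total_it_load_kw = 1000         # assumed facility IT load
            old_kit_share = 0.25            # ~25% of power on >4-year-old kit
            for savings in (0.5, 0.8):      # claimed 50-80% reduction after refresh
                saved_kw = total_it_load_kw * old_kit_share * savings
                print(f"{savings:.0%} saving on old kit: {saved_kw:.0f} kW, "
                      f"or {saved_kw / total_it_load_kw:.1%} of the whole load")
            # Roughly 125-200 kW here, i.e. a 12.5-20% cut in total power.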

      • I could agree with that under certain conditions. The problem is, at least in my public cloud environment, CPU isn't the issue on the cloud hosts; memory is. The vendors are all pushing expensive high-core-count servers that I could never cram enough memory into to use all those cores efficiently (without wheelbarrow loads of $100 bills, at least); there's a rough sketch of that math at the end of this comment. The tradeoff just isn't there to run an expensive server power efficiently but workload inefficiently, at least for me.

        A homogeneous hardware environment is always a plus when possible. My server herd contains a few different manufacturers, but is 100% identical in terms of the actual guts (dual Opteron, same SATA controller, SATA drives, same video, same Broadcom ethernet ports). Of course, if something gets too far out to support, I can always deep-six it more easily, since there isn't a ton of money tied up in it.
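        A rough sketch of that core/memory math, with every sizing figure assumed purely for illustration:

            # Illustrative core/memory mismatch; plug in your own sizing.
            cores_per_host = 64        # the kind of high-core-count box vendors push
            vcpus_per_core = 2         # modest oversubscription (assumed)
            ram_per_vcpu_gb = 4        # assumed general-purpose VM sizing
            ram_needed_gb = cores_per_host * vcpus_per_core * ram_per_vcpu_gb
            print(f"RAM needed to keep {cores_per_host} cores busy: {ram_needed_gb} GB")
            # 512 GB in this example; if the budget only stretches to 128 GB,
            # most of those cores sit idle and the expensive box runs inefficiently.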

        Vern
