Tag Archives: data center efficiency

Building out the data center the right way.


Tonight I’ve been reading an article about data center building trends. There are some very good points in it, and also some things that I think are very wrong. It also explains some things that have mystified me for quite a while.

Looking ahead 5 years for capacity planning isn’t a bad idea (as long as the data center is flexible enough to accommodate the changes that can happen in 5 years), but the whole decision on whether or not to build out data center infrastructure in advance hinges on the penalty for doing so. In short, there’s no penalty for building out passive infrastructure and a big penalty for building out active infrastructure.

I’ve been mystified by the idea that data center PUE (power usage effectiveness) only gets good when a data center is full. Now I understand: it’s based on the idea of a data center building out (and operating) 100% of its cooling infrastructure in advance. If you’re only running 20% of your 5-year forecasted server capacity but you have to run 100% of your 5-year forecasted cooling capacity because it’s a monolithic system that’s either on or off, of course your efficiency is going to stink!

The PUE numbers for that kind of arrangement would be pathetic. Of course, as you add servers against the same amount of cooling power usage, the PUE gradually gets better, but who in the world would really WANT to run something that way? (Remember last year’s story about UPS cutting its data center power by switching off 30 of its 65 HVAC units!)
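To see just how pathetic, here’s a quick sketch of the arithmetic. The numbers are hypothetical (a made-up 1,000 kW forecasted IT load and a 400 kW monolithic cooling plant), but they show why PUE only “improves as the data center fills” when you insist on running all the cooling from day one:

```python
def pue(it_kw, overhead_kw):
    """PUE = total facility power / IT equipment power."""
    return (it_kw + overhead_kw) / it_kw

design_it_kw = 1000.0   # hypothetical 5-year forecasted IT load
cooling_kw = 400.0      # monolithic cooling plant, always running at 100%

for fill in (0.2, 0.5, 1.0):    # 20%, 50%, 100% of forecasted capacity
    print(f"{fill:>4.0%} full: PUE = {pue(design_it_kw * fill, cooling_kw):.2f}")
# prints:
#  20% full: PUE = 3.00
#  50% full: PUE = 1.80
# 100% full: PUE = 1.40
```

At 20% fill you’re burning two watts of overhead for every watt of useful IT work. Modular cooling that tracks the actual load would keep the ratio flat instead.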

Leaving plenty of extra room for the transformer yard and the generator yard is a great idea (you can’t expand them once you get things built in around them). On the other hand, it would be silly to build out and power up a yard full of transformers that were sitting there doing nothing except chewing up power.

So, what sort of data center infrastructure things can safely be built out far ahead of the actual need? The physical building is a bit of a no-brainer, as long as the active sections can be isolated from the idle (you don’t need to be cooling an entire hall that’s only 10% full).

Passive electrical is another good one. This means entrances, disconnects, breaker panels, distribution cabling, and transfer switches. No UPS and no DC power plants, unless you’re going to be able to shut them off and leave them off until you really need them.

Passive cooling infrastructure such as ducts is another. Take a lesson from UPS: do NOT build out double the HVAC you need and run it all!

Finally, build out the support structures for the transformer and generator yards: mounting pads, conduit, and cabling, so the equipment is all set to drop in, hook up, and go.

Don’t kill your data center’s efficiency in return for capacity 5 years from now.

Email or call me or visit the SwiftWater Telecom web site for green data center services today.

Vern


Monday data center tidbits: generator smog and misusing PUE (again)


First up is the piece about Quincy, WA dithering about the environmental impact of data center backup generators. I’ve got news for someone: if you’re running the generators enough to cause a blip in the health of the local area, you should probably be considering a different location, because that means the commercial power is WAY bad.

Next piece is about Digital Realty talking about PUE and efficiency. The comment of note here is that they determined the initial PUE from load bank testing. How in heck could you get a meaningful PUE from load bank testing? Sure, you’re consuming power, generating heat, and removing heat, but the airflow in the data center isn’t going to be anywhere close to the real IT equipment environment. The fact that the number wasn’t close to accurate shouldn’t have come as any surprise.

Finally, from the piece above, we also get the pearl of wisdom that you only get good PUE from fully loaded data centers. My own data center, currently loaded at 10% (anyone looking for data center capacity?), averages 1.2, and with the kickoff of fall, we’re reusing 100% of the waste heat to heat the rest of the building.


Vern


Monday data center tidbits: #cloudcomputing liability, crippling servers to cool


First up today is the piece about cloud computing provider liability limits. The real issue here isn’t who accepts responsibility for what; it’s paying for the level of liability the customer wants. It seems to me to be a combination of customers bottom-feeding for rock-bottom prices and then expecting the provider to take on a liability commitment that could easily bankrupt it. Low cost plus ultra-high expectations sounds like a nightmare.

Next up is a piece about comparing colocation to cloud computing. The key here is the potential customer’s server utilization. Without cloud computing, I think it’s very difficult to achieve high utilization of standalone server hardware.

Last up is the story about scaling back server performance to reduce heating load during cooling failures. Aside from the issue of a data center that has repeated cooling failures and obviously not enough redundancy, I can’t see this being as effective as powering down some amount of capacity.


Vern


Wednesday data center tidbits: Microsoft’s new “hybrid cloud server”?


First up today is a piece about Microsoft’s new Aurora small business server, and it doesn’t add up. The article claims the “Hybrid Small Business Server, codenamed ‘Aurora,’ can run on premises and cloud apps”. How does a local server “run” cloud apps? Here’s a clue: if it’s running on a single on-premises server (i.e., not a private cloud), then it’s NOT cloud.

Next up is the piece about Equinix building new data centers with raised floors. I don’t know, maybe their customers are insisting on raised floor because they’re stuck in the old common wisdom that it’s the way to do it, but I can’t come up with a single good reason to use raised floors in ANY new data center.

Last up is a piece about putting a data center in a former Model T factory. This is a brilliant reuse of resources. It makes the best possible use of the embodied carbon in the existing building, and it reuses a building that, for all its age, is perfectly suited to a data center (I run my data center in an 1800s former New England textile mill). New isn’t necessarily better.

Is your workload a good candidate for #cloudcomputing? Adjust it if it isn’t!


I was just reading an article on what not to virtualize in your data center. I’m going to extend the question to IaaS cloud computing and show why this is the wrong way to think about it.

Cloud computing as a whole, and IaaS (infrastructure as a service) in particular, gains its efficiency by improving the percentage of utilization on the physical machines running the cloud. What was once a sea of terribly underutilized machines at 5% or less has now become a lean, mean pack of highly efficient servers at 70%-90% utilization.
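The consolidation math behind that is simple. Assuming comparable hardware (a simplification, but good enough for a back-of-the-envelope sketch), the workloads of sixteen standalone servers idling at 5% fit on a single cloud host run at 80%:

```python
# Hypothetical utilization figures for a back-of-the-envelope estimate.
standalone_util = 0.05   # typical underutilized standalone server
cloud_util = 0.80        # target utilization for a cloud physical host

# How many standalone servers' worth of work fits on one cloud host,
# assuming comparable hardware and perfectly packable workloads.
consolidation_ratio = cloud_util / standalone_util
print(f"{consolidation_ratio:.0f} standalone servers -> 1 cloud host")
# prints: 16 standalone servers -> 1 cloud host
```

Real workloads don’t pack perfectly, but even at half that ratio the power and hardware savings are dramatic.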

Unfortunately, some workloads have also pursued high-efficiency (conventional) server configurations. This leads to a situation where the current standalone server isn’t a good candidate (in its current configuration) to place in a cloud environment, because it may exceed the resources provided by a cloud virtual machine. So, does this mean we can’t cloud these workloads? Not at all.

Doing this means taking a bit of a step backward. Instead of cramming the entire workload into one machine, which was the most efficient way to use a standalone server, clouding this workload means splitting it up.

The example given in the article is a high load web server that sucks up a standalone physical server’s entire I/O capacity. Instead of trying to cloud a clone of that, simply cloud a bunch of clones and load balance them.

Spreading out the work this way allows the cloud to distribute it efficiently across its physical machines. It also gives you an easy way to adjust for changes in demand: instead of trying to cram more resources into a single (virtual) machine, simply create another clone and add it to the load balancer when you need more capacity, and delete a clone when you need less. You also gain the big fault tolerance advantages of the cloud, as well as the easy ability to run clones of the web server on clouds in different data centers. Now you have a level of fault tolerance and disaster recovery that your old standalone server could never match.
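The clone-and-balance pattern can be sketched in a few lines. This is a toy round-robin balancer, not any particular product’s API; the clone names are made up, and a real deployment would use your cloud provider’s load balancer and provisioning calls:

```python
class ClonePool:
    """Toy round-robin load balancer over a pool of web server clones."""

    def __init__(self):
        self.clones = []

    def add_clone(self, name):
        # Scaling up: provision another clone instead of growing one VM.
        self.clones.append(name)

    def remove_clone(self, name):
        # Scaling down: delete a clone when demand drops.
        self.clones.remove(name)

    def next_server(self):
        # Rotate through the active clones, one request at a time.
        server = self.clones[0]
        self.clones.append(self.clones.pop(0))
        return server

pool = ClonePool()
pool.add_clone("web-1")
pool.add_clone("web-2")
pool.add_clone("web-3")   # demand spike: just add another clone
print([pool.next_server() for _ in range(4)])
# prints: ['web-1', 'web-2', 'web-3', 'web-1']
```

Because each clone is stateless and interchangeable, losing one (or a whole data center’s worth) just shrinks the pool instead of taking the service down.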

The end result is that there’s almost nothing that can’t fit in the cloud, if you can break yourself of the current common wisdom that condensing things is best. Spread out, diffuse, and stop shoehorning things into tight packages.


Vern

Friday data center tidbits: the cloud industrial revolution and more.


First up today is a piece likening the move to cloud computing to the change from steam power to electricity. It sure is, complete with the same recurring bunch of Luddites trying to derail it. Funny how history repeats itself.

Next up is the piece about saving money by building multiple data center tiers in the same facility. First off, I don’t know how this qualifies as “greening” the data center. Second, is it really news that you should avoid under-building or over-building the data center relative to the needs of what’s going into it?

Finally, there’s the speculation that Twitter’s new data center will put paid to the Fail Whale. I doubt it, but now there won’t be any guesswork about who’s REALLY at fault when it blows up.


Vern

Growing the cloud. #cloudcomputing


We’re growing the cloud again! SwiftWater Telecom is pleased to announce the latest features and products for our cloud computing service!

Self provisioning: Just fill out a simple form to register as a customer, place your order online, and your virtual server is automatically created for you, fast and easy!

Control panel: With our new user control panel, you’re in full control of your virtual server. Start, stop, reboot, add features, view statistics: control of your service is yours!

More supported systems: In addition to CentOS (5.5) Linux, we now offer Ubuntu Lucid Lynx (10.04)(server and desktop variants) and Debian Lenny (5.0.5) Linux as well! Don’t see your operating system here? Ask us to add it!

Add to this watchdog services that keep your virtual server up and running as well as powerful packages (high reliability web site, business server, etc) combined with multiple green energy efficient data centers and you have an unbeatable combination!

Visit us to learn more about cloud computing or to get started and place an order today!

Vern
