Tag Archives: Mike Manos

Thursday data center tidbits: #cloudcomputing definition (again), stinky math, and more!


First up today is a piece asking if cloud computing could get any more confusing. Sure, if you keep overcomplicating it. Self-provisioning, metered service, and the insistence that cloud services can only be provisioned uniformly, never customized (take it or leave it), may be found in cloud computing variants, but they are NOT required parts of cloud computing.

So what is cloud computing in a nutshell? Virtualization, with the capability to integrate and manage a pool of physical and virtual resources as if they were one rather than many, plus multitenant use of the same physical resources. That’s it. Any argument over one billing or service model being “cloudier” than another is foolish.
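To make that definition concrete, here’s a minimal sketch (toy Python, every name hypothetical, not any real cloud stack) of both halves: a pool that hides the individual machines, and two tenants landing on the same physical box.

```python
from dataclasses import dataclass


@dataclass
class Host:
    """One physical machine contributing capacity to the pool."""
    name: str
    cpus_free: int


@dataclass
class Pool:
    """Many physical hosts managed as if they were one resource."""
    hosts: list

    def place_vm(self, tenant: str, cpus: int) -> str:
        # First-fit placement: tenants ask the pool, never a specific host.
        for host in self.hosts:
            if host.cpus_free >= cpus:
                host.cpus_free -= cpus
                return f"{tenant} VM placed on {host.name}"
        raise RuntimeError("pool exhausted")


pool = Pool([Host("node1", 16), Host("node2", 16)])
print(pool.place_vm("tenant-a", 12))  # tenant-a VM placed on node1
print(pool.place_vm("tenant-b", 4))   # tenant-b VM placed on node1: two tenants, one box
```

Notice there’s nothing in there about billing models, self-provisioning portals, or uniform service tiers — which is the point.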

Next up is the claim from Mike Manos that carbon regulation is going to be disruptive to data centers. Um, yup, we already knew that (why do you think data centers are working like the devil to reduce their carbon footprints, beyond just lowering costs?). I do have to say that comparing this to Y2K is questionable, since Y2K resulted in a minuscule number of problems, well out of proportion to the Armageddon hype.

Finally, there’s some math to consider on the HP study about using manure to power a data center. 10,000 cows = 1,000 servers; that’s less than half of one data center server container. 25,000 cows = 1 data center server container. How many containers in Microsoft’s giant Chicago container data center? Not exactly the most practical idea in the world.
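To see how fast the herd gets silly, here’s the back-of-the-envelope math using only the figures above (the 50-container count is purely an illustrative guess, since the Chicago number is left open):

```python
# Figures from the HP study as cited above.
COWS_PER_1000_SERVERS = 10_000    # 10,000 cows power 1,000 servers
SERVERS_PER_CONTAINER = 2_500     # implied by 25,000 cows per container

def cows_needed(containers: int) -> int:
    """Cows required to power a facility with this many containers."""
    servers = containers * SERVERS_PER_CONTAINER
    return servers * COWS_PER_1000_SERVERS // 1_000

# Hypothetical 50-container facility:
print(f"{cows_needed(50):,} cows")  # 1,250,000 cows
```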

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern


Mike Manos was scared by a data center engineer when he was small.


That’s the only way I can classify yet another “evil data center engineer” manifesto.

The premise of this one, as well as the previous push for container data centers (I thought containers were going to be the solution to everything, Mike?), is that data center engineers have an incentive to build one-off facilities full of Rube Goldberg complexity that no one but them can understand (the black box), in order to hide that they’re soaking the customer for more money. This is like going to a garage and having a mechanic do a bunch of work on your car while just saying “trust me”.

I’m sure there are some firms out there operating this way, but this theory has a couple of problems. Trading short-term monetary gains for making the customer unhappy with you or damaging the customer economically is foolish. The first is going to result in no more work and a hit to your reputation; the second, at the very least, results in a customer that can’t afford to give you any more work.

The next point is, what good does it do to create a system that only you can figure out or maintain if it turns into a nightmare for you to take care of? Once again: angry customer, economic damage all around. This is exactly the incentive NOT to do these things if you care about the survival of your business past tomorrow.

This “open source data center” idea is really no more than what a decent data center engineer would be doing in the first place: build the most efficient and cost-effective facility, with any and all available technologies, to meet 100% of the customer’s needs. Anything else damages you as much as it does the customer. This is just Economics 101.

Do these things and you won’t become part of the “black box conspiracy”.

Email or call me or visit the SwiftWater Telecom web site today for data center engineering sans black boxes.

Vern


Monday data center tidbits.


Today I was reading about whether private virtual computing clouds are the “future” or not. This is a ridiculous debate. Private computing clouds are just one tool of many. Some things and some situations will fit them, some won’t. It’s like arguing whether hammers are the future of tools. Use the right tool for the job.

Vern

Wednesday data center tidbits.


Tonight I’ve been reading about Mike Manos on data center site selection. I find it interesting that the story talks about how green data centers need to be designed for the specific site, when Mike Manos has been very vocal about trading the “cottage industry” of custom-designed data centers for mass-produced, cookie-cutter “Lego” container data centers. Somehow it doesn’t quite match up.

Vern, SwiftWater Telecom

Cookie cutter container data centers, take 2


Some time ago, I wrote about a different perspective on container data centers. Tonight I’ve been reading about Mike Manos’s take on containers and I’d like to talk a little bit about some of his points.

Mike thinks that data centers have been moving at an evolutionary snail’s pace. Between green power, free air cooling, green cooling techniques in general, and DC power distribution, I think the pace of evolution is hopping right along, and I sure don’t think moving to pre-built modular Lego data centers is what’s going to spark it.

Mike says that “In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans. Each is a one of a kind tool built to solve a set of problems.”
This is true, because the people who build this way recognize that “one size fits all” fits everything (sort of) and fits nothing optimally.

Data centers are not nail hammers. Nail hammers generally work equally well to drive a nail whether they were handmade by a blacksmith or mass-produced in a factory. A nail hammer works the same in the tropics, at the North Pole, or on the moon. Cookie-cutter data centers may leave you driving tacks with a sledgehammer or driving spikes with a flyswatter. Myself, I’d prefer not to deal with either.

Show me a container data center that can deal with green techniques such as free air cooling or DC power. I’m not aware of one that handles either, and these are two of the hottest green techniques going.

The watchword for container data centers seems to be simplicity. It’s true, you can dumb down the data center until operations consist of nothing except unplugging and plugging in huge boxes. Just hope the vendor did all their engineering right, because you’re not going to be able to tweak anything once it’s plugged in (not to mention you won’t even have a good sense of whether things are working as well as they could be).

So, are containers all bad? Not at all. Containers are useful for huge installations with massive numbers of identical servers. Maybe you’re willing to trade off optimization for only needing 4 people to install a 2,000-server container. It’s certainly handy for Microsoft to run a single-purpose 700,000 sq ft data center with only 45 people.
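For a sense of the scale that makes the trade-off tempting, the staffing figures above work out like this (a quick sketch using nothing beyond the numbers already quoted):

```python
# The labor side of the container trade-off.
servers_per_container = 2_000
install_crew = 4
print(servers_per_container / install_crew)  # 500.0 servers per installer

facility_sq_ft = 700_000
operations_staff = 45
print(facility_sq_ft / operations_staff)     # ~15,556 sq ft per person
```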

On the other hand, if you need flexibility to handle different types of hardware and new technologies, to truly be green, and to be as optimal as possible for your particular set of needs, a Lego data center isn’t going to cut it.

Remember, no sledgehammers on tacks.

Vern, SwiftWater Telecom
