Tag Archives: HP

Thursday data center tidbits: #cloudcomputing definition (again), stinky math, and more!

First up today is a piece asking if cloud computing could get any more confusing. Sure, if you keep over-complicating it. Self-provisioning, metered service, and the insistence that cloud services can only be provisioned uniformly, never customized (take it or leave it), may all be found in cloud computing variants, but they are NOT required parts of cloud computing.

So what is cloud computing in a nutshell? Virtualization, the capability to integrate and manage a pool of physical and virtual resources as if they were one rather than many, and multitenant use of the same physical resources. That’s it. Any argument over one cloud billing or service model being “cloudier” than another is foolish.
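As a toy illustration of that definition (all names here are made up for the example, not any real cloud API), pooling plus multitenancy look roughly like this:

```python
# Toy model of the definition above: many physical hosts managed as one
# resource pool, with multiple tenants sharing the same hardware.
# Every name here is illustrative, not any real cloud API.

class Host:
    def __init__(self, name, cores):
        self.name, self.cores, self.used = name, cores, 0

class Pool:
    """Manage a pool of physical hosts as if they were one big machine."""
    def __init__(self, hosts):
        self.hosts = hosts
        self.placements = []  # (tenant, host) pairs

    def provision(self, tenant, cores):
        # First-fit placement: the tenant neither knows nor cares which
        # physical host it lands on -- the "one rather than many" part.
        for h in self.hosts:
            if h.cores - h.used >= cores:
                h.used += cores
                self.placements.append((tenant, h.name))
                return h.name
        raise RuntimeError("pool exhausted")

pool = Pool([Host("rack1", 8), Host("rack2", 8)])
pool.provision("tenant-a", 6)
pool.provision("tenant-b", 6)  # lands on a different host, transparently
pool.provision("tenant-c", 2)  # shares rack1 with tenant-a: multitenancy
```

Nothing about billing, metering, or uniform service offerings appears anywhere in it, which is the point.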

Next up is the claim from Mike Manos that carbon regulation is going to be disruptive to data centers. Um, yup, we already knew that (why do you think data centers are working like the devil to reduce their carbon footprints, besides lowering costs?). I do have to say that comparing this to Y2K is questionable, since Y2K resulted in a minuscule number of actual problems, wildly out of proportion to the Armageddon hype.

Finally, there’s some math to consider on the HP study about using manure to power a data center: 10,000 cows = 1,000 servers. That’s less than half of one data center server container (25,000 cows = 1 container). How many containers are in Microsoft’s giant Chicago container data center? Not exactly the most practical idea in the world.
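For scale, the arithmetic can be run out (a quick back-of-the-envelope sketch; the container capacity is implied by the numbers in the piece, but the 50-container figure is my own illustrative assumption, not a reported count for the Chicago facility):

```python
# Back-of-the-envelope math from the figures above.
# ASSUMPTION (mine, not from the HP study): a giant container farm
# holds on the order of 50 server containers.

cows_per_server = 10_000 / 1_000                  # 10,000 cows power 1,000 servers
servers_per_container = 25_000 / cows_per_server  # 25,000 cows = 1 container
containers = 50                                   # illustrative guess only

cows_needed = containers * servers_per_container * cows_per_server
print(int(servers_per_container))  # 2500 servers per container
print(int(cows_needed))            # 1250000 cows to power the whole farm
```

Over a million cows for one large container facility makes the practicality problem obvious.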

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.




Wednesday data center tidbits: HP, manure, #cloud computing, and more!

First up today is a piece about HP doing research on using methane from cow manure to power a data center. I’ve seen HP turn things brown and smelly before, but at least they’re admitting it this time. I can just see the slogan for this one: “BROWN is the new GREEN!”

From the same piece, we get:

“I don’t think you want to ship the manure to Silicon Valley.”

I think that would be slightly redundant.

The next piece is about cloud computing providers failing to get the message across to small to medium enterprises. The telling quote here is this:

“The UK market seems to be confused by jargon and synonymous terminology and appears to have been susceptible to scaremongering by on-premise providers.”

There we go, spread enough doubt and confusion and you can scare anyone away from anything.

Finally we have the piece on building ROI from cloud computing. Under the section “What can I not put in the cloud?”, we get:

“Answers typically include UNIX systems, mainframes …”

Unix can’t go on a cloud? Considering that there are a number of open source Unixes that are perfectly happy on a cloud, I’d call this one a miss. As for mainframes, slap an emulator on a Linux virtual machine and your 1960s mainframe app can live on a cloud too!

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.



Friday data center tidbits.

First thing up today is the video tour of the new HP Wynyard data center. Lots of great ideas here, and one questionable one. Using light-colored cabinets to reduce the need for lighting is a great idea that’s commonly ignored, and the beautifully neat overhead cable racks make working on cabling a snap. The questionable one is the cold air flow: why expend extra energy pumping the air in the opposite direction from where it naturally wants to go?

Most unfortunate headline of the week: “Could Tiger Woods render networks impotent today?” I have no idea what I could possibly add to that one.

Call or email me or visit the SwiftWater Telecom web site for great green data center and cloud computing services minus the hype.



Sunday data center tidbits.

I was just reading about data center temperatures and server failures. Holy cats, a 60°F internal temperature difference between servers? This is a graphic example of the importance of airflow distribution: it doesn’t matter how much cooling capacity you have, if you don’t get it to the right place, the results aren’t going to be pretty!

The hype on the HP Wynyard data center has officially jumped the shark. We’ve gone from “wind cooled” to “glacial wind cooled”, ignoring the fact that the facility isn’t cooled by “wind” at all. It also can’t be 100% free air cooled, since it still requires chillers at certain times of year (and it’s not the first free air cooled data center, either). Not taking anything away from HP, but this is getting ridiculous.

Call or email me or visit the SwiftWater Telecom web site for green data center and cloud computing services minus the hype.



Thursday data center tidbits.

Today’s piece of egregious greenwash comes from the story about HP’s “100% wind cooled” data center and other sources. The first part of the story says “It is entirely air-cooled…”, then we get “but the facility still needed traditional chillers for those occasions” and “Installing chillers in addition to building the natural air cooling…”. Doesn’t anyone check to make sure what they’re writing makes sense? I’m not taking anything away from an admittedly impressive facility, but if there is legacy cooling present and it has to be used, the facility is NOT “entirely air-cooled” or “100% wind cooled”. If you want to hype something, avoid contradicting yourself in the same article.

Call or email me or visit us at SwiftWater Telecom for hype free green data center services or cloud computing services.



The new data center, integrated hardware and software, really?

I was reading a post about the Microsoft and HP “cloud pact”, integrating Microsoft software and HP hardware (and here as well). Are we really this averse to setting up our own servers?

The point of the “cloud pact” appears to be that people would rather pay Microsoft and HP for the privilege of being wedged into neat little boxes of preloaded software configurations on someone else’s idea of what you need for hardware. Get the box in, plug it in and go, or ship it off to your data center colocation; no messing around required.

The article claims that this is a good idea because servers are so difficult to deploy. Really? Maybe it is so for Microsoft stuff. I can go from bare metal to a working Xen Cloud Platform host in 10 minutes or less. I can set up a brand new CentOS virtual machine with DNS, email, a control panel, and a LAMP stack in about 45 minutes total, with only about 5 minutes of my own time required. This is “difficult”?

Now, for the ease of a “one button push” server, you get locked into not only a software vendor but a particular hardware and software combination. Better make sure you love both MS and HP; you’re going to be handcuffed to them for a long time.

I guess I’m a bit old fashioned. I’ve never been so afraid of the “difficulty of deploying servers” that I contemplated giving up the flexibility of matching my own choice of hardware, software, and configuration to the changing needs of my customers, not to mention staying current with constantly changing technologies.

It would be more difficult to swallow this deal.

