Tag Archives: cloud computing

Gartner and cloud computing, take 2.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I just got through reading a post by Lydia Leong about the “impurity” of cloud computing, defending the Gartner “Magic Quadrant” for “cloud computing IaaS and web hosting”. Where in the world do I start on this?

It’s certainly true that the same customers that want cloud computing services may want classic data center services as well, such as co-location or dedicated physical servers. It’s also probably true that providers that offer a broader range of both classic data center services and cloud computing services may be stronger as a business because of the flexibility offered by having a broader portfolio of services available.

The problem is, what we have here is an attempt to mix a wild collection of things together as “one market”. How can you lump a provider’s suitability to host a web site together with its suitability to take on the outsourcing of an entire enterprise data center? I was wrong in my post yesterday about the MQ: it’s not clueless, it’s schizophrenic, the result of attempting to combine too many incompatible requirements as “one market”. This gives you the odd result of penalizing the 900-pound gorilla of the cloud computing market.

The other problem is that the disparate elements that appear to be blended into this mess simply aren’t reflected in the title. Outsourcing an entire enterprise data center isn’t covered by the title, and requiring dedicated private servers for non-web-hosting purposes isn’t covered either. There may be other things that customers who want cloud computing or web hosting services also want, but they can’t all be welded together into one big Frankenstein monster under that title. Putting a correct title on this that reflects what it really contains won’t cure the schizophrenia, but it would be more honest.

This thing is a mess, no matter how you slice it.


Gartner: clueless in the cloud.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve just been reading about the uproar over the Gartner “Magic Quadrant” for cloud computing infrastructure. I think they need some remedial education before they pass themselves off as cloud computing pundits.

Defining “cloud computing” precisely is something that people have been squabbling over for quite some time now. This is because most of the people squabbling can’t separate the core characteristics that truly define cloud computing from all the little details that define its many flavors.

Even though people tend to disagree on what cloud computing really is, it’s pretty clear what it is NOT. It isn’t “classic” data center services. This is what had me shaking my head over Gartner declaring a weakness for Amazon because they don’t offer co-location or dedicated private physical servers.

Having started as a classic data center provider, SwiftWater Telecom, my own operation, offers both classic data center services such as co-location AND cloud computing services, and that combination gives us more flexibility to meet customers’ needs. On the other hand, the classic side of the coin isn’t a weakness or a strength of the cloud computing side. It just means I have a wider range of tools to satisfy more customer needs.

After previously having gone around with Lydia Leong of Gartner about a harebrained suggestion to chop public clouds into private ones (“Cloud computing: another ridiculous proposal.”), I can only conclude that they have just enough of a handle on cloud computing to be dangerous.

Trying to mix a requirement for traditional data center offerings into the equation when the subject is supposed to be “cloud computing infrastructure” is the most clueless thing I’ve seen in quite a while.

What you REALLY should be asking your cloud computing provider.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I’ve been reading quite a bit of back and forth over cloud computing provider Service Level Agreements and endless checklists that purport to be everything a customer should ask a potential cloud computing provider. There’s a lot of aimless noise and sniping back and forth that is missing a very critical point.

Let me start off by saying that, as an approach to ensuring service uptime, classic percentage-based SLAs are worthless. The SLA tells you nothing about whether you can expect your cloud computing provider to keep your service running or not, only the penalty if they fail. Your goal as a cloud computing customer isn’t to extract the maximum compensation for the failure of your cloud service, it’s to ensure that your cloud service doesn’t fail in the first place!
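To put rough numbers on that (the uptime percentage, monthly bill, and credit figures below are illustrative assumptions, not any particular provider’s terms), here’s what a percentage SLA actually buys you:

```python
# Rough illustration of what a percentage SLA actually promises.
# Every figure here is an illustrative assumption, not a real provider's contract.

HOURS_PER_MONTH = 730  # approximate

def allowed_downtime_hours(sla_percent):
    """Downtime per month the provider can rack up and still 'meet' the SLA."""
    return HOURS_PER_MONTH * (1 - sla_percent / 100)

def sla_credit(monthly_bill, credit_percent):
    """The usual remedy for a miss: a service credit against one month's bill."""
    return monthly_bill * credit_percent / 100

print(f"A 99.9% SLA still allows ~{allowed_downtime_hours(99.9):.1f} hours down per month,")
print(f"and a miss might earn a ${sla_credit(500, 10):.0f} credit on a $500/month bill -- "
      "the SLA buys you a refund, not uptime.")
```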

Failures of data center hardware are an inevitable fact of life, even with the most conscientious engineering and operation. The data center failures of the past year have shown that even the big data centers fall well short of the “most conscientious engineering and operation” goal. Given this fact of life, here are the things you should REALLY be asking your cloud computing provider.

1. Do they have the ability to automatically restart workloads on the remaining running physical cloud capacity if part of the capacity fails?

This is something that has stood out like a sore thumb with a lot of the big cloud providers. A failure of underlying physical hardware on Amazon’s AWS service kills the workloads that were running on that hardware, even though the vast majority of capacity is still up and running in the cloud. If your cloud provider can’t automatically restart your workload in case of failure, run away.

In the SwiftWater Telecom cloud (using the Xen Cloud Control System) failed workloads are restarted automatically on remaining cloud capacity in minutes, not hours.
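For what it’s worth, the core of that kind of restart-on-failure logic is simple. Here’s a minimal sketch (the host and VM objects and their methods are hypothetical stand-ins, not the actual Xen Cloud Control System API):

```python
# Minimal sketch of restart-on-failure for a small cloud.
# Host/VM objects and their methods are hypothetical stand-ins,
# not the real Xen Cloud Control System interface.
import time

def monitor_and_restart(hosts, poll_seconds=30):
    """Poll host health and restart workloads from dead hosts on surviving capacity."""
    while True:
        dead = [h for h in hosts if not h.is_alive()]
        alive = [h for h in hosts if h.is_alive()]
        for host in dead:
            for vm in host.vms():
                # Place the orphaned workload on the surviving host with the most headroom.
                target = max(alive, key=lambda h: h.free_memory())
                if target.free_memory() >= vm.memory:
                    target.start_vm(vm)  # back up in minutes, not hours
        time.sleep(poll_seconds)
```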

2. Do they offer a way to ensure that redundant virtual servers never end up on the same physical server?

It doesn’t do much good to have redundant virtual servers in the cloud if they all die when one single physical host dies.

In the SwiftWater Telecom cloud, we offer what we call the Ultra Server. The Ultra Server combines redundant virtual servers with the “unfriend” feature of XCCS. Unfriending ensures that the redundant servers never end up on the same physical server.

This means that nothing but a “smoking hole disaster” would ever cause a complete outage of an Ultra Server. Combine that with the automatic virtual server restoration of XCCS and the option to set up the Ultra Server between our separate data centers, and you have a cloud powered virtual server that is the next best thing to indestructible.
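The placement rule behind that is easy to state: never put a virtual server on a host that already holds one of its redundant peers. Here’s a minimal sketch of such an anti-affinity check (the data structures are illustrative, not the actual XCCS “unfriend” implementation):

```python
# Minimal sketch of anti-affinity ("unfriend") placement: a VM never lands on a
# host that already runs one of its redundant peers. Data structures are
# illustrative, not the actual XCCS implementation.

def pick_host(vm, peers, hosts):
    """Return a host with room for vm that holds none of vm's redundant peers."""
    candidates = [
        h for h in hosts
        if h["free_memory"] >= vm["memory"] and not set(h["vms"]) & set(peers)
    ]
    if not candidates:
        raise RuntimeError("no host satisfies the anti-affinity constraint")
    return max(candidates, key=lambda h: h["free_memory"])  # least loaded wins

# Two redundant web servers must end up on different physical hosts:
hosts = [
    {"name": "host-a", "free_memory": 16, "vms": ["web-1"]},
    {"name": "host-b", "free_memory": 32, "vms": []},
]
print(pick_host({"name": "web-2", "memory": 4}, peers=["web-1"], hosts=hosts)["name"])  # host-b
```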

Stop concentrating on the penalties for failure; ask the right questions to find the cloud computing provider who has the right tools to keep your service from failing in the first place.

A green networking idea for cloud computing and data center.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

After commenting on the piece questioning whether cloud computing really saves energy, I came up with an idea that might simplify the whole energy cost of the network issue. Yup, it’s another metric! 🙂

The issue here is, how do you account for the energy cost of transferring data across the Internet to a cloud computing provider? After all, consider the massive amount of networking devices involved in the Internet and that Internet routing protocols are geared towards bandwidth efficiency rather than energy efficiency.

So, why not create an energy based routing protocol? For the sake of discussion, I’ll call this OGPF (Open Greenest Path First). Unlike OSPF (Open Shortest Path First) which looks for the shortest way from point A to point B using weighted “length” metrics, OGPF could use an energy metric for every device in the path, possibly including how the device is powered.
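As a toy illustration of the idea (OGPF is just my label, and the topology and energy weights below are made-up numbers): run an ordinary shortest-path computation, but weight each hop by the energy its devices burn rather than by link length.

```python
# Toy "greenest path" routing: Dijkstra over per-hop energy costs instead of
# OSPF-style link lengths. Topology and energy figures are made up to show the idea.
import heapq

def greenest_path(graph, src, dst):
    """graph: {node: {neighbor: energy_cost}}. Returns (total_energy, path) or None."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        energy, node, path = heapq.heappop(queue)
        if node == dst:
            return energy, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, hop_energy in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (energy + hop_energy, neighbor, path + [neighbor]))
    return None

# A longer path through greener (say, solar-powered) gear can win over a shorter,
# hungrier one -- which is exactly the point.
net = {"A": {"B": 5.0, "C": 1.0}, "B": {"D": 1.0}, "C": {"D": 2.0}, "D": {}}
print(greenest_path(net, "A", "D"))  # (3.0, ['A', 'C', 'D'])
```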

Now you have a network that’s automatically as green as possible, and you have a way to get at least a measurable idea of what’s going on, energy-wise, across any network link.

Will cloud computing actually save energy? Overcomplicating it.


Vern Burke, SwiftWater Telecom
Biddeford, Maine

I just read a piece discussing the debate over whether cloud computing actually saves energy or not. This piece shows that you can get silly about trying to quantify the unquantifiable.

The argument is really very simple here. The idea is that the energy cost of the networking equipment required to move data back and forth to a cloud computing provider offsets the energy gain from an end customer dumping an inefficient local data center in favor of cloud computing. If the original claim of cloud computing energy savings is oversimplified, so is the argument against it.

First, any claim about the energy cost of the network would require a detailed analysis of how much energy is required to move a bit of data from point A to point B. Good luck trying to account for every single piece of power-consuming electronic equipment in any path through the Internet. Good luck trying to account for every single piece of power-consuming electronic equipment in every POSSIBLE path through the Internet that the data could take. Remember, you not only have to account for the networking equipment but also the communications carriers’ facilities that provide the link.

Second, it’s easier to make this claim when you have a local data center that’s only supporting local users. Add in remote users or add in a public facing website and now you’re generating cross Internet traffic from nearly anywhere. Take the first issue and multiply it thousands of times.

In reality, the energy cost of moving data is pretty low compared to the 95% underutilization rate commonly found in small local data centers. Wasting that much energy leaves a LOT of room to absorb the energy needed to ship bits across the network.
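For a rough sense of scale (every figure below is an illustrative assumption, not a measurement), compare the energy an underutilized server wastes idling against the energy needed to push its data across the network:

```python
# Back-of-envelope: energy wasted by an underutilized local server vs. energy to
# move its data to a cloud provider. Every figure is an illustrative assumption.

SERVER_POWER_W = 300        # assumed draw of a small local server
UTILIZATION = 0.05          # the ~95% underutilization cited above
NETWORK_J_PER_GB = 50_000   # assumed ~50 kJ to move 1 GB end to end
DATA_GB_PER_DAY = 20        # assumed daily traffic to the cloud provider

wasted_kwh = SERVER_POWER_W * (1 - UTILIZATION) * 24 * 3600 / 3.6e6
network_kwh = NETWORK_J_PER_GB * DATA_GB_PER_DAY / 3.6e6

print(f"energy wasted idling:   {wasted_kwh:.1f} kWh/day")   # ~6.8 kWh/day
print(f"energy moving the data: {network_kwh:.2f} kWh/day")  # ~0.28 kWh/day
```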

Add to this the massive economies of scale involved in cloud computing and I’d be hard pressed to see a situation where the energy cost of the network outweighed the energy saving of cloud computing.

Cloud computing: another ridiculous proposal.


Vern Burke, SwiftWater Telecom
Biddeford, ME

I was just reading a piece about what keeps companies off of public cloud computing services. How much do you have to bastardize a cloud before it isn’t cloud computing anymore?

This issue is the same old cloud computing security red herring. People will share network bandwidth and storage space just fine, but there’s just something sinister about sharing the physical compute power. What you end up with is a lot of vague “well, it might happen!” with no specifics and no examples of where it has happened, despite massive use of public cloud computing services for some time now.

So what’s the fix for this? If someone orders a single virtual machine, dedicate them an entire physical server so they can be all warm and fuzzy that nobody is close enough to do something malicious to them with those unspecified hypervisor security holes.

Does anyone else see anything wrong with this idea? Sure, you can probably make a cloud automatically carve out a physical chunk of itself for one customer. Unfortunately, when you do that, it’s not “cloud computing” anymore. At the very least, you lose most of the characteristics that make it “cloud” and turn it into a standalone virtualization server or, heaven forbid, effectively an old fashioned standalone dedicated server. It just simply isn’t cloud anymore.

Do this and now you lose the utilization and energy efficiency that make cloud computing a much more cost-effective proposition. Is the customer with one virtual machine going to be willing to pay the cost of a dedicated server for the privilege of segregating a physical cloud host for their private use, or are they going to expect cloud pricing because you’re still calling it “cloud”? If they’re paying dedicated-server prices for a single virtual machine and their own private physical server, why not just do a separate dedicated server?

The one sole advantage this idea has is automation. It would certainly be easier to dedicate and undedicate a physical server in a cloud environment. On the other hand, this would likely be an operations nightmare, trying to keep enough excess cloud servers available to accommodate losing an entire physical host every time you provision a single VM. Of course, keeping that much excess capacity running also kills the heck out of your efficiency and, by extension, your costs.
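A quick toy calculation shows the efficiency hit (host capacity and customer counts are made up for illustration):

```python
# Toy calculation: what carving out whole hosts for single-VM customers does to
# utilization. Host capacity and customer counts are made up for illustration.

VMS_PER_HOST = 20          # assumed consolidation ratio on shared hosts
shared_hosts = 5           # hosts running normal multi-tenant workloads
dedicated_hosts = 5        # hosts reserved whole for one VM each

total_slots = (shared_hosts + dedicated_hosts) * VMS_PER_HOST
used_slots = shared_hosts * VMS_PER_HOST + dedicated_hosts * 1

print(f"utilization: {used_slots}/{total_slots} slots "
      f"({used_slots / total_slots:.0%})")  # barely over half of what the hardware could do
```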

Nice try but this idea is a swing and a whiff.

Thursday data center tidbits: arguing against virtualization, NASA data goes free range


Vern Burke, SwiftWater Telecom
Biddeford, ME

First up today is a question I ran across asking if there are any circumstances that argue against server virtualization in the data center. The only thing I can think of is the requirement for specialized hardware. If your applications require specialized hardware, keep them on their own server, otherwise, get virtualizing!

The next piece is about NASA discovering that they’ve been selling excess PCs without properly cleaning sensitive data off them. And everyone is in a panic over cloud computing being a tremendous security threat? The biggest security threat is leaving your data in the care of people who have no focus on the security of it.

Data center doof of the week goes to Tumblr for managing to kill a database cluster AND kill their network for a day for the sake of scheduled maintenance. Squeaky red noses are en route!