From the news of the ridiculous files, we have the story that the US DOE has decided cloud computing providers aren’t fast enough for scientific purposes. Let’s see: they didn’t actually test any commercial cloud computing providers (they built their own cloud computing test bed), they used a lot of esoteric hardware no commercial cloud provider would have, and they ran workloads that normally run on highly specialized and tuned hardware. From this they conclude that cloud providers perform badly and offer only the “illusion of elasticity” (without any details to back that up?). I think I’ll test the performance of Ford cars by building my own car from scratch and then claiming that Ford doesn’t perform well because my car didn’t. Squeaky red noses to the US DOE.
Next up is the news about Tilera cramming 10,000 CPU cores into a single rack and touting it for cloud computing use. The fact that they accomplished this, and at low power usage, isn’t in question. The problem is that, at least for my public cloud, the bottleneck isn’t CPU, it’s memory, and I don’t see how it would be possible to cram enough memory into this rack to take any serious advantage of all those CPU cores. I’m filing this one under “barking up the wrong tree”.
Email or call me or visit the SwiftWater Telecom web site for cloud computing services.