Tag Archives: container data center

Thursday data center tidbits: #cloudcomputing definition (again), stinky math, and more!


First up today is a piece asking if cloud computing could get any more confusing. Sure, if you keep overcomplicating it. Self-provisioning, metered service, and the insistence that cloud services can only be provisioned uniformly and not customized (take it or leave it) may be found in cloud computing variants, but they are NOT required parts of cloud computing.

So what is cloud computing in a nutshell? Virtualization, with the capability to integrate and manage a pool of physical and virtual resources as if they were one rather than many, plus multitenant use of the same physical resources. That’s it. Any argument over one cloud billing or service model being “cloudier” than another is foolish.
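If you want that in something more concrete than words, here’s a minimal sketch of the idea (purely hypothetical names and numbers, not any particular cloud platform’s API): a pool of physical hosts managed as one chunk of capacity, with multiple tenants landing on the same hardware. Notice that billing and service models don’t appear anywhere.

```python
# Minimal illustration of the definition above: a pool of physical hosts
# managed as a single resource, with multitenant placement of virtual
# machines on shared hardware. Hypothetical sketch only.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpus_free: int
    vms: list = field(default_factory=list)

class ResourcePool:
    """Treat many physical hosts as one pool of capacity."""
    def __init__(self, hosts):
        self.hosts = hosts

    def total_free_cpus(self):
        return sum(h.cpus_free for h in self.hosts)

    def place_vm(self, tenant, cpus):
        # Simple first-fit placement; tenants share physical hosts.
        for host in self.hosts:
            if host.cpus_free >= cpus:
                host.cpus_free -= cpus
                host.vms.append((tenant, cpus))
                return host.name
        raise RuntimeError("pool exhausted")

pool = ResourcePool([Host("node1", 16), Host("node2", 16)])
print(pool.place_vm("tenant-a", 8))   # node1
print(pool.place_vm("tenant-b", 8))   # node1 -- two tenants on shared hardware
print(pool.total_free_cpus())         # capacity managed as one number, not per box
```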

Next up is the claim from Mike Manos that carbon regulation is going to be disruptive to data centers. Um, yup, we already knew that (why do you think data centers are working like the devil to reduce their carbon footprints, beyond just lowering costs?). I do have to say that comparing this to Y2K is questionable, since Y2K resulted in a minuscule number of problems, well out of proportion to the Armageddon hype.

Finally, there’s some math to consider on the HP study about using manure to power a data center. 10,000 cows = 1,000 servers. That’s less than half of one data center server container. 25,000 cows = 1 data center server container. How many containers in Microsoft’s giant Chicago container data center? Not exactly the most practical idea in the world.
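For anyone who wants to check the stinky math, here’s a quick back-of-the-envelope sketch. The 10,000-cows-per-1,000-servers ratio is the HP figure as reported; the roughly 2,500 servers per container is my own assumption, consistent with 1,000 servers being “less than half” a container.

```python
# Back-of-the-envelope math on powering container data centers with manure.
# Assumptions: 10,000 cows power 1,000 servers (HP study figure as reported),
# and one server container holds roughly 2,500 servers (rough estimate).
COWS_PER_SERVER = 10_000 / 1_000          # 10 cows per server
SERVERS_PER_CONTAINER = 2_500             # assumed container capacity

cows_per_container = COWS_PER_SERVER * SERVERS_PER_CONTAINER
print(f"Cows per container: {cows_per_container:,.0f}")   # ~25,000

# A hypothetical multi-container facility gets absurd in a hurry:
for containers in (1, 10, 50):
    print(f"{containers:>3} containers -> {containers * cows_per_container:,.0f} cows")
```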

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern


Data center heroes of 2009.


After doing the data center bozos of 2009, I thought it would be appropriate to recognize data center industry players who have provided a positive example to follow.

1. Microsoft, for building its 700,000 sq ft Chicago data center with both container and raised floor space. This highlights the benefits of the new container format while acknowledging that containers aren’t appropriate for all circumstances.

2. Google and D-Wave, for bringing sci-fi to life with quantum computing.

3. Best reuse of the strangest structure: the CLUMEQ supercomputer built in a former particle accelerator silo. Now if they can just get the airflow going the right way, it would be absolutely awesome.

4. The DoD for lending legitimacy to cloud computing.

5. Best data center green “two-fer”: i/o Data Centers’ Phoenix ONE, for covering an absolutely massive roof in solar arrays. Generate power and keep the roof (and by extension the facility) cooler, and you have a perfect two birds with one stone.

6-10. A major shout-out to everyone who managed to run a data center cloud this year without fouling it up!

Vern, SwiftWater Telecom

Thursday data center tidbits.


I just came across a post on evaporative cooling in Microsoft’s new data center containers. I’m not sure it’s the best idea to be humidifying the servers this way, and I’m darned sure it won’t be a good idea when something flips out and the humidity condenses inside this large steel box.
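To put a rough number on the condensation worry: the inside of that steel box only has to drop below the dew point of the humidified air for water to start forming. Here’s a quick sketch using the Magnus dew point approximation (the temperature and humidity values are made up for illustration):

```python
# Rough dew point estimate (Magnus approximation) to show how little margin
# there is when you run evaporative cooling inside a sealed steel container.
# Example temperature/humidity values are hypothetical.
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in deg C from air temperature and relative humidity."""
    a, b = 17.62, 243.12  # Magnus coefficients
    gamma = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# Say the evaporative cooling pushes the container air to 30 C at 80% RH:
td = dew_point_c(30.0, 80.0)
print(f"Dew point: {td:.1f} C")  # roughly 26 C
```

At those (made-up) numbers, any surface in the container colder than about 26 C, a wall on a cool night or a chilled supply duct, is going to start sweating on the inside of the box.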

Best example of how NOT to do a planned outage: Burlington Coat Factory pulled the plug on its entire e-commerce site for 28 hours for hardware and software upgrades. No warning, no meaningful explanation during the outage, just off the air. Just proves that anyone can serve as a bad example.

Coming up on Monday, I’ll be doing a post series on the engineering, construction, and launch of our new data center cloud!

Vern, SwiftWater Telecom


The green data center and container lessons.


Tonight I’ve been reading up on Microsoft standardizing their data center containers. Whether you adhere to the container philosophy or not, there are a number of things that everyone can take from Microsoft’s modular data centers.

The concept of modular data centers is simple. Standardize everything like a pile of Legos, take advantage of bulk purchasing, and do as much work as possible with the minimum number of people. It’s certainly an attractive proposition for huge companies such as Microsoft that need huge numbers of identical servers. If you don’t need tens of thousands of identical servers, or you want to optimize your facility to the max for the local environment, it’s probably not the most practical setup. It’s even worth noting that Microsoft’s massive container data center in Chicago also has a floor of conventional facility.

There are some things that anyone can learn from these, whether you build a container data center or not. The first is space efficiency. These containers remind me of Navy submarines: severely space constrained, so everything is packed in as efficiently as possible. When things are that tight, there’s no room for sloppiness. This is worth keeping in mind even when you have a data center with more elbow room.

The second point is airflow efficiency. Using free air cooling in such a confined space demands absolute performance: maximum airflow and minimum turbulence. Get sloppy, leave cables hanging around, and it’s going to hurt pronto.

Typical data centers are sorely lacking in airflow precision. The matching half to free air cooling is precise airflow control. These two things together make a massive difference in cooling efficiency, especially if you’re relying on the natural movement of the air. Turbulence is evil.

The Microsoft containers are each a perfect example in miniature of the things that need to be done to green any data center. Show the same attention to detail and follow these examples, and your data center can be a greener shade of green too.

Vern, SwiftWater Telecom

Friday data center tidbits.


I was just reading about new servers that have the motherboard immersed in liquid coolant. I’m waiting for a few things to show up, such as an astronomical price tag for entirely proprietary hardware (I bet you could buy blood cheaper than this special inert coolant).

Next on the odd list is a proposal for automated robotic racking and unracking of servers. Just when you thought container data centers were the hottest thing, they’ll be obsolete because you can’t squeeze Johnny 5 through one.

Vern, SwiftWater Telecom


Power stability, containers, and the data center cloud …


Today’s post comes from reading about the Rackspace cloud power outage with some more details here. Ready for this to get worse?

First, let’s look at the cause of the failure. This occurred during a test of phase rotation. What is phase rotation? Simply put, it’s the sequence of the phases in 3-phase AC power: A->B->C. Phase rotation is only important in two instances. The first is operating 3-phase electric motors (swap two phases and the motor runs backwards). The second is syncing a generator to the commercial AC power grid. Phase rotation has no impact at all on the operation of distribution transformers or IT equipment in the data center.

So first of all, they were testing something that would not have had an impact on the equipment it was powering. Second, they were apparently testing manually in live electrical enclosures. If you’re really concerned about phase rotation, invest in an inexpensive phase monitor and arrange it to alarm automatically (phase monitors are commonly used on things like elevators to prevent sudden motor reversals if the power company messes up the phase order). Manual testing on live electrical gear, especially live electrical without redundancy, is begging for disaster.
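For the curious, phase sequence is something you can check (and alarm on) continuously from sampled voltages instead of poking around in a live enclosure. Here’s a rough sketch of the idea, not any particular phase monitor’s firmware, with a made-up sampling setup for illustration:

```python
# Rough sketch: determining phase sequence (rotation) from sampled 3-phase
# voltages, the kind of check a phase monitor does continuously.
# Hypothetical illustration only.
import numpy as np

def phase_angle(samples, t, freq=60.0):
    """Estimate the phase of a sampled sinusoid at the given frequency."""
    dot_cos = np.dot(samples, np.cos(2 * np.pi * freq * t))
    dot_sin = np.dot(samples, np.sin(2 * np.pi * freq * t))
    return np.arctan2(-dot_sin, dot_cos)

def phase_sequence(va, vb, vc, t, freq=60.0):
    """Return 'ABC' or 'ACB' depending on which phase lags A by 120 degrees."""
    a, b, c = (phase_angle(v, t, freq) for v in (va, vb, vc))
    lag_b = np.degrees((a - b) % (2 * np.pi))   # how far B lags A
    lag_c = np.degrees((a - c) % (2 * np.pi))   # how far C lags A
    return "ABC" if lag_b < lag_c else "ACB"

# Simulate one cycle of a healthy ABC rotation at 60 Hz.
freq = 60.0
t = np.linspace(0, 1 / freq, 256, endpoint=False)
w = 2 * np.pi * freq * t
va = np.cos(w)
vb = np.cos(w - 2 * np.pi / 3)   # B lags A by 120 degrees
vc = np.cos(w - 4 * np.pi / 3)   # C lags A by 240 degrees
print(phase_sequence(va, vb, vc, t))   # ABC -- anything else should trip an alarm
```

The real thing would of course watch live PT inputs and trip a relay, but the point stands: this check belongs on a permanent sensor, not on a meter probe inside a hot cabinet.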

So, now that we know it was a disaster that didn’t have to be (as are most of the data center power outages I’ve seen recently), how can we expect this to get worse? The increasing use of containerized servers for cloud services concentrates the infrastructure choke points. Where a goof in a traditional data center might take down anything from a couple of servers to a couple of racks, you can now blow 2,000+ servers out of the water at once. This is OK for someone like Google that plans to have thousands of servers down at any one time without impact. Anyone else who isn’t massively overbuilt is going to have a serious problem.

So what are the takeaways from this? Don’t muck around with live power unless absolutely necessary, permanent sensors are cheaper than catastrophes, act to prevent cascading failures before they happen, and don’t sneeze on containers.

Vern, SwiftWater Telecom


Tuesday morning data center tidbits …


Rackspace manages to kill part of their cloud during “routine testing of PDUs”. Is there nobody out there who can do data center power competently anymore? Watch your cloud evaporate when the lights go out.

Watch for today’s upcoming article on power reliability, the cloud, and container data centers!

Vern, SwiftWater Telecom