Tag Archives: container data center

Thursday data center tidbits: #cloudcomputing definition (again), stinky math, and more!


First up today is a piece asking if cloud computing could get any more confusing. Sure, if you keep overcomplicating it. Self-provisioning, metered service, and the insistence that cloud services can only be provisioned uniformly and not customized (take it or leave it) may be found in cloud computing variants, but they are NOT required parts of cloud computing.

So what is cloud computing in a nutshell? Virtualization, with the capability to integrate and manage a pool of physical and virtual resources as if they were one rather than many, plus multitenant use of the same physical resources. That’s it. Any argument over one cloud billing or service model being “cloudier” than another is foolish.

Next up is the claim from Mike Manos that carbon regulation is going to be disruptive to data centers. Um, yup, we already knew that (why do you think data centers are working like the devil to reduce their carbon footprints, besides lowering costs?). I do have to say that comparing this to Y2K is questionable, since Y2K resulted in a minuscule number of problems, well out of proportion to the Armageddon hype.

Finally, there’s some math to consider on the HP study about using manure to power a data center. 10,000 cows = 1,000 servers. That’s less than half of one data center server container. 25,000 cows = 1 data center server container. How many containers are in Microsoft’s giant Chicago container data center? Not exactly the most practical idea in the world.
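To put that in perspective, here’s a quick back-of-the-envelope sketch in Python using the figures above (10,000 cows per 1,000 servers, roughly 2,500 servers per container). The container counts in the loop are purely hypothetical, not Microsoft’s actual numbers:

```python
# Back-of-the-envelope math on the manure-to-power figures cited above.
# 10,000 cows power 1,000 servers; 25,000 cows power one ~2,500-server
# container. The container counts below are hypothetical, not Microsoft's.

COWS_PER_1000_SERVERS = 10_000   # from the HP study as reported
SERVERS_PER_CONTAINER = 2_500    # implied by "25,000 cows = 1 container"

def cows_needed(containers: int) -> int:
    """Cows required to power a given number of server containers."""
    servers = containers * SERVERS_PER_CONTAINER
    return servers * COWS_PER_1000_SERVERS // 1_000

for n in (1, 10, 50):            # hypothetical container counts
    print(f"{n:3d} containers -> {cows_needed(n):,} cows")
```

Even ten containers puts you at a quarter million cows, which is about where the idea stops being a power plan and starts being a feedlot.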

Email or call me or visit the SwiftWater Telecom web site for cloud computing services.

Vern


Data center heroes of 2009.


After doing the data center bozos of 2009, I thought it would be appropriate to recognize data center industry players who have provided a positive example to follow.

1. Microsoft for building its 700,000 sq ft Chicago data center with both container and raised floor space. This highlights the benefits of the new container format while acknowledging that containers aren’t appropriate for all circumstances.

2. Google and D-Wave for bringing sci-fi to life with quantum computing.

3. Best reuse of the strangest structure: the CLUMEQ supercomputer built in a former particle accelerator silo. Now if they can just get the airflow going the right way, it would be absolutely awesome.

4. The DoD for lending legitimacy to cloud computing.

5. Best data center green “2-fer”: i/o Data Centers’ Phoenix ONE, for covering an absolutely massive roof in solar arrays. Generate power and keep the roof (and by extension the facility) cooler, and you have a perfect 2 birds with one stone.

6-10. A major shout out to everyone who managed to run a data center cloud this year without fouling it up!

Vern, SwiftWater Telecom

Thursday data center tidbits.


I just browsed across a post on evaporative cooling in Microsoft’s new data center containers. I’m not sure it’s the best idea to be humidifying the servers this way, and I’m darned sure it won’t be a good idea when something flips out and the humidity condenses inside this large steel box.
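For anyone wondering where that condensation line actually sits, it comes down to dew point: once any surface in the container is cooler than the dew point of the humidified air, water forms on it. Here’s a rough sketch using the Magnus approximation; the coefficients are standard, but the temperature and humidity numbers are made-up examples, not Microsoft’s design values:

```python
import math

# Rough illustration of the condensation risk with evaporative cooling:
# once any surface in the container is cooler than the dew point of the
# humidified air, water condenses on it. Magnus approximation; the
# temperature/humidity numbers are made-up examples, not design values.

A, B = 17.62, 243.12  # Magnus coefficients (temperature in deg C)

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in deg C for a given air temp and RH."""
    gamma = math.log(rel_humidity_pct / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

air_temp_c, rh_pct = 27.0, 70.0          # example supply air after the wet stage
td = dew_point_c(air_temp_c, rh_pct)
print(f"Dew point: {td:.1f} C")          # about 21 C for these numbers

wall_temp_c = 20.0                       # a cool steel wall or duct surface
if wall_temp_c < td:
    print("That surface will sweat.")
```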

Best example of how NOT to do a planned outage: Burlington Coat Factory pulled the plug on its entire ecommerce site for 28 hours for hardware and software upgrades. No warning, no meaningful explanation during the outage, just off the air. It just proves that anyone can serve as a bad example.

Coming up on Monday, I’ll be doing a post series on the engineering, construction, and launch of our new data center cloud!

Vern, SwiftWater Telecom

virtual private servers
virtual private workstations

The green data center and container lessons.


Tonight I’ve been reading up on Microsoft standardizing their data center containers. Whether you adhere to the container philosophy or not, there are a number of things that everyone can take away from Microsoft’s modular data centers.

The concept of modular data centers is simple: standardize everything like a pile of Legos, take advantage of bulk purchasing, and do as much work as possible with the minimum number of people. It’s certainly an attractive proposition for huge companies such as Microsoft that need huge numbers of identical servers. If you don’t need tens of thousands of identical servers, or you want to optimize your facility to the max for the local environment, it’s probably not the most practical setup. It’s worth noting that Microsoft’s massive container data center in Chicago also has a floor of conventional facility.

There are some things anyone can learn from these, whether you build a container data center or not. The first is space efficiency. These containers remind me of Navy submarines: severely space constrained, so everything is packed in as efficiently as possible. When things are that tight, there’s no room for sloppiness. That’s worth keeping in mind even when you have a data center with more elbow room.

The second point is airflow efficiency. Using free air cooling in such a confined space demands absolute performance: maximum airflow and minimum turbulence. Get sloppy, leave cables hanging around, and it’s going to have an impact pronto.

Typical data centers lack a lot of precision in airflow. The matching half to free air cooling is precise airflow control. These two things together make a massive difference in cooling efficiency, especially if you’re relying on the natural movement of the air. Turbulence is evil.

The Microsoft containers are each a perfect example in miniature of the things that need to be done to green any data center. Show the same attention to detail and follow these examples, and your data center can be a greener shade of green too.

Vern, SwiftWater Telecom

Friday data center tidbits.


I was just reading about new servers that have the motherboard immersed in liquid coolant. I’m waiting for a few things to show up, such as an astronomical price tag for entirely proprietary hardware (I bet you could buy blood cheaper than this special inert coolant).

Next on the odd list is a proposal for automated robotic racking and unracking of servers. Just when you thought container data centers were the hottest thing, they’ll be obsolete because you can’t squeeze Johnny 5 through one.

Vern, SwiftWater Telecom

green dc power engineering

Power stability, containers, and the data center cloud …


Today’s post comes from reading about the Rackspace cloud power outage with some more details here. Ready for this to get worse?

First, let’s look at the cause of the failure. This occurred during a test of phase rotation. What is phase rotation? Simply put, it’s the sequence of the phases in 3 phase AC power: A->B->C. Phase rotation is only important in two instances. The first is operating 3 phase electric motors (reverse two phases and the motor runs backwards). The second is syncing a generator to the commercial AC power grid. Phase rotation has no impact at all on the operation of distribution transformers or IT equipment in the data center.

So first of all, they were testing something that would not have had an impact on the equipment it was powering. Second, they were apparently testing manually in live electrical enclosures. If you’re really concerned about phase rotation, invest in an inexpensive phase monitor and arrange for it to alarm automatically (phase monitors are commonly used on things like elevators to prevent sudden motor reversals if the power company messes up the phase order). Manual testing on live electrical equipment, especially live electrical without redundancy, is begging for disaster.
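As a rough illustration of what an automatic phase monitor is actually checking (so nobody has to poke around a live enclosure to find out), here’s a minimal sketch. The angle inputs are hypothetical measurements, not the output of any particular relay or meter:

```python
# Minimal sketch of the check an automatic phase monitor performs,
# instead of manual testing in a live enclosure. The angle inputs are
# hypothetical measurements, not from any particular relay or meter.

def phase_sequence(angle_a: float, angle_b: float, angle_c: float) -> str:
    """Return 'ABC' or 'ACB' given measured phase angles in degrees."""
    lag_b = (angle_a - angle_b) % 360
    lag_c = (angle_a - angle_c) % 360
    # In an ABC (positive) sequence, B lags A by ~120 deg and C by ~240 deg.
    return "ABC" if lag_b < lag_c else "ACB"

print(phase_sequence(0, -120, -240))  # healthy feed -> ABC
print(phase_sequence(0, -240, -120))  # two phases swapped -> ACB (alarm)
```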

So, now that we know it was a disaster that didn’t have to happen (as are most of the data center power outages I’ve seen recently), how can we expect this to get worse? The increasing use of containerized servers for cloud services concentrates the infrastructure choke points. Where a goof in a traditional data center may take down anything from a couple of servers to a couple of racks, you can now blow 2000+ servers out of the water at once. That’s OK for someone like Google, which plans on having thousands of servers down at any one time without impact. Anyone else who isn’t massively overbuilt is going to have a serious problem.

So what are the takeaways from this? Don’t muck around with live power unless absolutely necessary, permanent sensors are cheaper than catastrophes, act to prevent cascading failures before they happen, and don’t sneeze on containers.

Vern, SwiftWater Telecom

data center facility engineering

Tuesday morning data center tidbits …


Rackspace manages to kill part of their cloud during “routine testing of PDUs”. Is there nobody out there who can do data center power competently anymore? Watch your cloud evaporate when the lights go out.

Watch for today’s upcoming article on power reliability, the cloud, and container data centers!

Vern, SwiftWater Telecom

Monday data center tidbits …


Since giving my thoughts on the container data center on 11/01, I was interested to see these Microsoft container data center pictures today. The interesting thing to me is that the second floor of the facility is all traditional rackmount servers in cabinets on raised floor, and that only 2/3 of the servers in the facility will be containerized. Kind of puts a crimp in the idea that anyone who runs anything except containers is hopelessly old fashioned (I don’t suspect Microsoft would have built the raised floor area if there wasn’t a darned good reason).

Next up is running biodiesel in your backup generator. It’s not the best idea to risk your reliability by doing this without extensive testing to characterize the behavior of the fuel under all environmental conditions, not to mention the potential for it to eat your generator.

Last on the list is reading about considerations for deploying SSDs. At 10X the price of traditional hard drives, it’s not that hard to work these into a server refresh schedule and see a reasonable ROI for them. This is a sharp contrast to the 320GB, $7000 PCIe units that I wrote about some time back, which have zero chance of showing a payback before the server is ready for refresh.
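For anyone who wants to run the comparison themselves, here’s a simple payback sketch. The only figure taken from the post is the $7,000 PCIe price; the drive premium, monthly savings, and refresh cycle are placeholder assumptions you’d swap for your own numbers:

```python
# Simple payback comparison for the two cases above. The only figure
# taken from the post is the $7,000 PCIe price; the drive premium,
# monthly savings, and refresh cycle are placeholder assumptions.

REFRESH_CYCLE_MONTHS = 36    # assume a 3-year server refresh

def payback_months(price_premium: float, monthly_savings: float) -> float:
    """Months needed to recover the price premium from monthly savings."""
    return price_premium / monthly_savings

ssd_premium, ssd_savings = 450.0, 25.0      # hypothetical commodity SSD numbers
pcie_premium, pcie_savings = 7000.0, 25.0   # the $7,000 PCIe unit, same savings rate

print(f"SSD payback:  {payback_months(ssd_premium, ssd_savings):.0f} months")
print(f"PCIe payback: {payback_months(pcie_premium, pcie_savings):.0f} months "
      f"(vs. a {REFRESH_CYCLE_MONTHS} month refresh)")
```

With these placeholder numbers the commodity SSD pays back well inside a refresh cycle and the PCIe unit never comes close, which is the point of the contrast above.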

Vern, SwiftWater Telecom

Cookie cutter container data centers, take 2


Some time ago, I wrote about a different perspective on container data centers. Tonight I’ve been reading about Mike Manos’s take on containers and I’d like to talk a little bit about some of his points.

Mike thinks that data centers have been moving at an evolutionary snail’s pace. Green power, free air cooling, green cooling techniques in general, DC power distribution: I think the pace of evolution is hopping right along, and I sure don’t think moving to pre-built modular Lego data centers is what’s going to spark it.

Mike says that “In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans. Each is a one of a kind tool built to solve a set of problems.” This is true, because the people who build this way recognize that “one size fits all” fits everything (sort of) and fits nothing optimally.

Data centers are not nail hammers. Nail hammers generally work equally well to drive a nail whether they were handmade by a blacksmith or mass produced in a factory. A nail hammer works the same in the tropics, at the North Pole, or on the moon. Cookie cutter data centers may leave you driving tacks with a sledgehammer or driving spikes with a flyswatter. Myself, I’d prefer not to deal with either.

Show me a container data center that can handle green techniques such as free air cooling or DC power. I’m not aware of one that does either, and these are two of the hottest green techniques going.

The watchword for container data centers seems to be simplicity. It’s true, you can dumb the data center down until operations consist of nothing but unplugging and plugging huge boxes. Just hope the vendor did all their engineering right, because you’re not going to be able to tweak anything once it’s plugged in (not to mention you won’t even have a good sense of whether things are working as well as they could be).

So, are containers all bad? Not at all. Containers are useful for huge installations with massive numbers of identical servers. Maybe you’re willing to trade off optimization for only needing 4 people to install a 2000 server container. It’s certainly handy for Microsoft to run a single-purpose 700,000 sq ft data center with only 45 people.

On the other hand, if you need flexibility to handle different types of hardware and new technologies, to truly be green, and to be as optimal as possible for your particular set of needs, a Lego data center isn’t going to cut it.

Remember, no sledgehammers on tacks.

Vern, SwiftWater Telecom

data center facility engineering