Tag Archives: DNS

Friday data center tidbits.


The buzz this morning is about Twitter being “hacked”. Far from the actual site being compromised, this appears to have been a simple case of DNS cache poisoning, a well-known vulnerability in older versions of DNS server software. The disturbing thing about this is that Twitter isn’t keeping its software patched against widely known holes like this, which is not what you’d expect from a major IT company. What else isn’t being kept up to date?

Research In Motion decided they needed to get in on the action before the end of the year and blow up BlackBerry email during another maintenance gone awry. With the number of major outages on the net this year resulting from maintenance activities, I think most of these big companies need a refresher course in how to plan maintenance without screwing it up.

12 Days of Data Center Christmas, 25GB of remote backup storage for just $20!

Vern, SwiftWater Telecom


Friday data center tidbits.


The talk of the town today is Google’s new Public DNS service. I don’t believe this is going to be much of an improvement over a well-configured local DNS server. Call me a cynic, but I can’t believe Google will resist the temptation to skew DNS results for their own profit. Deep 6 this turkey.

This morning I also see a story about VEPA, a new virtual switch standard for virtual servers. It certainly sounds like a good idea to allow virtual machines to take advantage of the advanced features of physical switches rather than dumping lots of extra load on the virtual machine host server. I’m just not convinced of the need for a lot of high-powered bells and whistles in the VM virtual switch, not to mention what upgrading the physical switches to support this is going to cost.

Our new Aurora Resilient Cloud Service virtual machine cloud computing service is open for business!

Vern, SwiftWater Telecom

Data center ops, Sweden’s DNS debacle …


Proving that anyone can serve as a bad example, we have the people responsible for the DNS foul-up of Sweden’s .se TLD.

Analysis of the failure is that an update script left a “.” off each record, blowing up the works. You might think that with something as important as a country TLD, people could avoid basic project management mistakes.
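To see why one missing dot matters: in a zone file, a name without a trailing dot is treated as relative to the zone origin and gets the origin appended. A minimal sketch of that rule (the names here are illustrative, not from the actual .se data):

```python
def fully_qualify(name, origin="se."):
    """Apply the zone-file rule: a name ending in "." is absolute;
    anything else is relative and gets the origin appended."""
    if name.endswith("."):
        return name
    return name + "." + origin

# With the trailing dot, the name is what you intended:
print(fully_qualify("example.se."))  # example.se.
# Without it, the origin gets tacked on and resolution breaks:
print(fully_qualify("example.se"))   # example.se.se.
```

One character of difference silently turns every record in the zone into garbage, which is exactly the kind of fault that took the .se zone down.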

The first and most important part of any data center, system, or networking project is test, test, and test again. Since this script created a fatal error, a simple test would have revealed the fault before it was turned loose on everything. If you turn an untested script loose, be prepared for the consequences.
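A pre-deployment check for this particular class of error doesn’t have to be elaborate. Here’s a hedged sketch of a sanity pass over a generated record set; a real deployment would also run the zone through a proper checker (BIND ships named-checkzone for this), and the record format here is assumed for illustration:

```python
def validate_records(records):
    """Scan (name, rtype, value) tuples for the obvious fault that
    bit the .se zone: names that aren't fully qualified.
    Returns a list of error strings; empty means the set looks sane."""
    errors = []
    for name, rtype, value in records:
        if not name.endswith("."):
            errors.append(f"{name}: missing trailing dot")
        if rtype in ("NS", "CNAME", "MX") and not value.endswith("."):
            errors.append(f"{name}: target {value} missing trailing dot")
    return errors
```

Run this against the script’s output in a staging pass and refuse to deploy if the list comes back non-empty; that alone would have caught the .se fault.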

The second part is being able to detect things going wrong and stop them before they cascade. However well things are tested, it’s always possible something will go wrong, and the more complex the system, the more likely that is. This requires a test protocol to verify that things are proceeding correctly and a way to stop the process if they aren’t. 900,000 web sites affected proves the failure on this point.
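The stop-the-cascade idea can be sketched as a simple batched rollout with a health check between batches. The apply_fn and check_fn hooks are hypothetical stand-ins for whatever actually pushes records and verifies resolution:

```python
def apply_in_batches(records, apply_fn, check_fn, batch_size=100):
    """Apply changes a batch at a time, halting at the first failed
    health check so one bad change can't take out the whole zone.
    Returns how many records were applied before stopping."""
    applied = 0
    for i in range(0, len(records), batch_size):
        batch = records[i:i + batch_size]
        apply_fn(batch)
        applied += len(batch)
        if not check_fn():
            break  # stop the rollout before the damage spreads
    return applied
```

Had the .se update been staged this way, a failed check after the first batch would have capped the damage at a handful of records instead of 900,000 sites.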

The final point is a plan to back out any changes made, as the last resort when an unplanned failure does occur. If you can’t foresee the problem and can’t keep it from impacting users, at least make the duration of the problem as short as possible by reversing the changes back to the previous version. Another critical point that didn’t get implemented.
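Having a back-out plan can be as simple as never overwriting the last known-good copy. A minimal sketch, assuming the zone lives in a single file (paths and names here are illustrative):

```python
import os
import shutil

def deploy_with_backup(zone_path, new_contents):
    """Write a new zone file, keeping the old version for rollback."""
    backup = zone_path + ".bak"
    if os.path.exists(zone_path):
        shutil.copy2(zone_path, backup)  # preserve the known-good copy
    with open(zone_path, "w") as f:
        f.write(new_contents)
    return backup

def rollback(zone_path):
    """Restore the previous version saved by deploy_with_backup."""
    shutil.copy2(zone_path + ".bak", zone_path)
```

With this in place, reversing a botched update is one command instead of a scramble to regenerate the zone, which is the difference between minutes of outage and hours.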

So, follow these 3 simple rules and your project won’t end up being the laughing stock of the Internet.

Vern, SwiftWater Telecom
data center, web hosting, Internet engineering