When we introduced you to Windows Live Core (now Windows Live Platform Services) just over a year ago, we mentioned James Hamilton and his work on commoditizing server installations, in particular by using containers. Well, here we are a year later, and Microsoft is implementing this idea in its new data center in Northlake, Illinois. While many Web 2.0 pundits talk about running services in the cloud, it's going to be up to companies like Microsoft, and people like James Hamilton, to build out the infrastructure needed to provide cost-effective, geo-located, and energy-efficient services.
In his blog Perspectives, Hamilton has been writing about a number of ideas around data centers and building out the infrastructure. Discussing a service-free, fail-in-place model using containers, he says:
Going to a service-free model can save service costs but, even more interesting, in this model the servers can be packaged in designs optimized for cooling efficiency without regard to human access requirements. If technicians need to go into a space, then the space needs to be safe for humans and meet multiple security regulations, a growing concern, and there needs to be space for them. I believe we will get to a model where servers are treated like flash memory blocks: you have a few thousand in a service-free module, over time some fail and are shut off, and the overall module capacity diminishes over time. When server work done/watt improves sufficiently or when the module capacity degrades far enough, the entire module is replaced and returned to the manufacturer for recycling.
In today’s post, he compares the efficiency of a) building a data center with b) placing 1,125 racks of servers, one each, in condominiums(!). It takes the container idea to an extreme, perhaps, but it’s a good read. When I commented that a rack of servers might be one of the better roommates I have had, Hamilton responded:
Our goal is to draw attention to what makes a "real" data center expensive. It’s not the security, it’s not the shell (the building), it’s none of those things. Typically over 75%, and sometimes more, of the cost of a data center is power and cooling. In the example above, that’s $150M spent on power distribution and cooling equipment. I smell opportunity.
I just happen to be reading a novel, "The Invention of Everything Else", by Samantha Hunt. In it, Nikola Tesla spends his last days in the Hotel New Yorker (a fictional account with a basis in truth). At one point the author describes the electrical system outside Tesla’s hotel window:
"Years ago power lines would have stretched across the block in a mad cobweb, a net, because years ago, any company that wanted to provide New York with electricity simply strung its own decentralized power lines all about the city before promptly going out of business or getting forced out by J.P. Morgan. But now there is no net. The power lines have been hidden underground."
That’s pretty much where we are right now: just getting to the point of stringing up the wires, of building the mad cobweb. Microsoft knows this and is racing to become a J.P. Morgan. It is one of the reasons why Microsoft is hot to acquire Yahoo!, although Steve Ballmer would love for you to think it is just a desperate move against big bad Google. While a move to web services might mean the end of Office (as we know it now, anyway), the demand for secure, cost-effective, reliable, and at least somewhat green cloud infrastructure is going to grow immensely. Microsoft seems intent on being a leading provider of those services.