Microsoft is dumping its Data Centres into the Pacific Ocean

Air conditioning is one of the biggest costs of running a data centre. A traditional facility can use as much electricity for cooling as it does for running the actual IT equipment, so much of the innovation in the high-density cloud server space has gone into building data centres that are cheaper to cool and hence cheaper to run. With a far higher heat capacity than air, water has become the coolant of choice, pumped around and between the computers to carry their heat outside; the rough comparison below shows why.
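To put a number on that heat-capacity claim, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from the article; the property values are standard room-temperature textbook figures) comparing how much heat a cubic metre of water versus air can carry away per degree of warming:

```python
# Back-of-the-envelope: water vs. air as a coolant, per unit volume.
# Property values are standard textbook figures at roughly room temperature.

WATER_DENSITY = 1000.0        # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_DENSITY = 1.2             # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

def heat_per_cubic_metre(density, specific_heat, delta_t=1.0):
    """Joules absorbed by 1 m^3 of coolant warming by delta_t kelvin."""
    return density * specific_heat * delta_t

water = heat_per_cubic_metre(WATER_DENSITY, WATER_SPECIFIC_HEAT)
air = heat_per_cubic_metre(AIR_DENSITY, AIR_SPECIFIC_HEAT)

print(f"water: {water:,.0f} J per m^3 per K")  # ~4,186,000
print(f"air:   {air:,.0f} J per m^3 per K")    # ~1,206
print(f"ratio: {water / air:,.0f}x")           # ~3,500x
```

Per unit volume, water absorbs on the order of 3,500 times more heat than air for the same temperature rise, which is why pumping water beats blowing air.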
Dumping computer equipment into water is, of course, generally a terrible idea, and that is precisely what makes Microsoft's experiment so interesting.
Computer scientists and architects have employed all kinds of methods for keeping data centres cool, from building them in cold climates to using their waste heat to warm buildings and heat water.
But Microsoft has a different idea: dump the servers deep in the ocean, where the cold surrounding water keeps them cool 24/7, regardless of the seasons at the surface.


Data centres are essentially buildings full of computer equipment that processes much of the internet traffic we generate. With the growth of cloud services and other online provisions, demand for data centres is higher than ever, but they are expensive to maintain: not only do they consume a lot of energy, much of that energy is spent on the cooling systems that keep the components from overheating, as the sketch below quantifies.
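The data-centre industry quantifies this overhead with a metric called Power Usage Effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. The article doesn't quote PUE figures, so the numbers below are hypothetical, but a facility spending as much on cooling as on computing sits at a PUE of about 2.0:

```python
# PUE = total facility power / power delivered to IT equipment.
# Example inputs are hypothetical; a PUE near 1.0 means almost all
# electricity goes to computing rather than cooling and other overhead.

def pue(it_power_kw, cooling_power_kw, other_overhead_kw=0.0):
    """Power Usage Effectiveness for a facility, given its power breakdown."""
    total = it_power_kw + cooling_power_kw + other_overhead_kw
    return total / it_power_kw

print(pue(it_power_kw=1000, cooling_power_kw=1000))  # 2.0: half the bill is cooling
print(pue(it_power_kw=1000, cooling_power_kw=100))   # 1.1: highly efficient facility
```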
Microsoft has demonstrated an experimental prototype of a new approach: instead of pumping water around the data centre, put the data centre in the water. Project Natick is a research project to build and run a data centre submerged in the ocean. The company built an experimental vessel, named the Leona Philpot, and deployed it on the seafloor about 1 kilometre off the Pacific coast. It ran successfully from August to November last year.
As well as the obvious cooling advantage, Microsoft argues that this kind of data centre brings other benefits too. About half of the world's population lives within 200 km of the ocean, so underwater data centres can always be located close to major population centres, which in turn ensures low-latency connections (see the sketch below). The company also says the self-contained units can be deployed quickly, within 90 days, rather than the 2 years it takes to build a conventional facility, or the 1 year Microsoft says its fourth-generation data centres take. The units could also be paired with tidal power generation to further reduce their environmental impact.
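The latency benefit is simple propagation physics: light in optical fibre travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so placing a data centre closer to users directly shrinks the best-case round-trip time. A quick sketch (my own illustration, not Microsoft's figures):

```python
# Best-case round-trip propagation delay over optical fibre.
# Signals in fibre travel at ~2/3 the vacuum speed of light, i.e. ~200 km/ms.
# Real-world latency is higher once routing, switching and queuing are added.

FIBRE_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

print(round_trip_ms(200))   # ~2 ms for a coastal user 200 km from the data centre
print(round_trip_ms(5000))  # ~50 ms if the data centre is a continent away
```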
Whether we'll actually see Natick-style data centres in the wild remains to be seen, with Microsoft saying that it's still "early days" in evaluating whether the concept can actually be adopted.
Being stuck on the bottom of the ocean does have some obvious issues: Microsoft doesn't have a team of SCUBA IT staff to fix things. Instead, the concept is that each Natick unit would operate for five years without maintenance and then be hauled up to the surface to have its internals replaced.
Many of Natick's concepts are already in wide use. There's a good reason most power plants are built near rivers and oceans: they use those bodies of water as a convenient heat sink into which they can dump their unwanted thermal energy, and this hasn't gone unnoticed by cloud operators. Google in 2011 opened a data centre using sea water to provide its cooling, and since 2009 Microsoft has used shipping containers packed with servers and network infrastructure to provide rapidly deployable, modular, self-contained data centre deployments. Natick puts these two ideas together.