Wednesday, July 29, 2009

Google Cools Data Center Without Chillers; Data Center Pros Weigh In

Google’s chiller-less data center in Belgium has received a lot of buzz. The facility relies on free air cooling to keep its servers within operating temperatures, and will shift the computing load to other data centers when the weather gets too hot.
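Google hasn't published the control logic behind this load shifting, but the policy reduces to something simple: when outside air is too warm for free cooling, drain work to cooler sites with spare capacity. Here is a minimal sketch of that idea; the site names, thresholds, and load figures are all hypothetical, not Google's:

```python
# Illustrative sketch of weather-driven load shifting -- NOT Google's
# published control logic. Site names, thresholds, and loads are hypothetical.

MAX_FREE_COOLING_TEMP_C = 27.0   # assumed outside-air limit for free cooling
TARGET_CEILING = 0.9             # don't fill any destination past 90% load

sites = {
    "site_a": {"outside_temp_c": 29.5, "load": 0.80},  # too hot: must shed load
    "site_b": {"outside_temp_c": 18.0, "load": 0.40},
    "site_c": {"outside_temp_c": 16.5, "load": 0.55},
}

def shift_load(sites, limit=MAX_FREE_COOLING_TEMP_C):
    """Drain load from sites above the free-cooling limit to cooler sites."""
    hot = [name for name, s in sites.items() if s["outside_temp_c"] > limit]
    for src in hot:
        # Candidate destinations: cool enough, with headroom, coolest first.
        targets = sorted(
            (name for name, s in sites.items()
             if name != src and s["outside_temp_c"] <= limit
             and s["load"] < TARGET_CEILING),
            key=lambda name: sites[name]["outside_temp_c"],
        )
        for dst in targets:
            if sites[src]["load"] <= 0:
                break
            moved = min(sites[src]["load"], TARGET_CEILING - sites[dst]["load"])
            sites[src]["load"] -= moved
            sites[dst]["load"] += moved
            print(f"shifted {moved:.2f} of capacity from {src} to {dst}")

shift_load(sites)
# site_a's 0.80 load splits across site_c (0.35) and site_b (0.45)
```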

It's an approach that stands to greatly improve energy efficiency. However, as e-shelter explained to Techworld, there are risks: airborne particulates could wreak havoc with hard disk drives, and high humidity could cause electrical problems. To see what other data center professionals think of this cooling strategy, I posed the following question to the Computer Room Design Group on LinkedIn:

Is Google's Chiller-less Data Center the wave of the future, or is this approach too risky for most businesses to accept?

Here’s what some of our group members had to say…

Mark Schwedel, Senior Project Manager at Commonwealth of Massachusetts:

Please note that Google is doing many things that are not available in current data centers. They do not have a UPS; instead, they provide battery backup on each server with a 12-volt battery. So will this be the future? Only when the rest of the world can deliver the same capabilities as Google.
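Mark's point about per-server 12-volt batteries invites a quick sanity check: the battery only has to carry the server through the gap between a utility failure and generator pickup. A back-of-the-envelope calculation with hypothetical numbers (Google has not published its specifications):

```python
# Back-of-the-envelope ride-through for a per-server 12 V battery.
# All figures are assumptions for illustration, not Google's specs.

battery_voltage_v = 12.0
battery_capacity_ah = 3.4      # assumed small sealed lead-acid cell
server_draw_w = 250.0          # assumed per-server power draw
usable_fraction = 0.8          # assumed derating for discharge losses

energy_wh = battery_voltage_v * battery_capacity_ah * usable_fraction
ride_through_min = energy_wh / server_draw_w * 60

print(f"~{ride_through_min:.1f} minutes of ride-through")  # ~7.8 minutes
```

Even with conservative numbers, that is ample time for standby generators to start, which is the role a centralized UPS would otherwise play.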

Sean Conner, Datacenter Professional Services Consultant:

Google's design is well suited to an expansion of their cloud environment. However, it's clear that the facility in question does not run at the same level of criticality as most dedicated or hardened sites. This approach works well in an environment that can tolerate minor equipment loss and failure.

However, most dedicated sites host applications and data that would suffer, should similar equipment loss occur. So, the two approaches cannot be truly compared. It's like trying to compare the heart to the left hand. Both are useful. But if the left hand fails, you probably don't die.

Perhaps a larger question to ask is: what applications, data, or entire enterprises could migrate to a cloud environment? Those that can would stand to gain huge savings from Google's approach.

Dennis Cronin, Principal at Gilbane Mission Critical:

This entire dialog is moot because the way of the future is back to DIRECT WATER-COOLED PROCESSORS. All these sites chasing the elusive "FREE" cooling will soon find out that they cannot support the next generation of technology. I suspect there will be a lot of finger-pointing when that occurs, along with even more ad hoc solutions. We need to stick to quality solutions that will support today's AND tomorrow's technology requirements.

David Ibarra, Project Director at DPR Construction:

There is tremendous pressure on large enterprise customers (social, search, etc.) to use the same fleet of servers for all of their applications. The IT architects behind the scenes are now being asked to stop being "geeks" who change hardware every three years, and instead to make use of what they have or improve it with lower-cost systems. The recession is also amplifying this trend. A lot of the water-cooled servers and demonstrations held last year have gone silent, due both to cost and to standardization on hardware for the next five years.

Many large data center customers understand water-cooling technology and are early adopters; however, realities have driven the effort elsewhere within their organizations. Customers are pushing high densities (300+ W/sq ft) using best-of-class techniques: containment, free cooling, etc. Large-scale operators are also realizing that the building needs to suit the servers' needs, so there is a shift in how a building is configured.

Chiller-less data centers have existed since 2006 in countries such as Canada, Ireland, Germany, and Norway. Data centers will be coming online in the US at the end of this year that are chiller-less and cooling-tower-less, with an extraordinary reduction in air-moving equipment.

Nitin Bhatt, Sr. Engineer at (n)Code Solutions:

Every single data center is unique in its own setup. Adopting a technology because it suits one geographic location may not be a wise decision elsewhere. It is wiser to be "orthodox" than to lose the business. If someone can afford the outage, or can shift the workload to a DR site or another site as a result of thermal events, then yes, they can look into FREE COOLING without chillers. We can also save energy by putting VFDs on the chillers and driving them with a room-temperature-based response. It is good to have chillers as a backup to free cooling.
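Nitin's "chillers as backup" suggestion is essentially an economizer changeover: run on outside air while conditions allow, and bring chillers (on VFDs, ramped against room temperature) online only when they don't. A minimal sketch of that changeover logic; the setpoints and the linear VFD ramp are illustrative assumptions, not taken from any site discussed here:

```python
# Sketch of an economizer-with-chiller-backup changeover, per Nitin's comment.
# Setpoints and the linear VFD ramp are illustrative assumptions.

FREE_COOLING_MAX_C = 22.0   # assumed outside-air limit for free cooling alone
ROOM_SETPOINT_C = 25.0      # assumed target room temperature
ROOM_HIGH_LIMIT_C = 27.0    # at this room temp, chiller VFDs run at 100%

def cooling_mode(outside_c: float, room_c: float) -> tuple[str, float]:
    """Return (mode, chiller_vfd_fraction) for the current conditions."""
    if outside_c <= FREE_COOLING_MAX_C and room_c <= ROOM_SETPOINT_C:
        return "free-cooling", 0.0
    # Chiller backup: ramp VFD speed linearly with room-temperature overshoot.
    overshoot = max(0.0, room_c - ROOM_SETPOINT_C)
    span = ROOM_HIGH_LIMIT_C - ROOM_SETPOINT_C
    return "chiller-assist", min(1.0, overshoot / span)

for outside, room in [(15.0, 24.0), (24.0, 25.5), (30.0, 27.5)]:
    mode, vfd = cooling_mode(outside, room)
    print(f"outside {outside:.0f} C, room {room:.1f} C -> {mode}, VFD {vfd:.0%}")
```

The appeal of this arrangement is exactly what Nitin describes: the chillers sit idle most of the year, yet the site never bets the business on the weather.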

So what do you think? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.