Wednesday, July 29, 2009

Google Cools Data Center Without Chillers; Data Center Pros Weigh-in

Google’s chiller-less data center in Belgium has received a lot of buzz. The facility relies upon free air cooling to keep its servers cool and will shift the computing load to other data centers when the weather gets too hot.

It's an approach that stands to greatly improve energy efficiency. However, as e-shelter explained to Techworld, there are some risks. For instance, it's possible that airborne particulates could cause havoc with hard disk drives and dampness from heavy humidity could cause electrical problems. To see what other data center professionals think of this cooling strategy, I posed the following question to the Computer Room Design Group on LinkedIn:

Is Google's Chiller-less Data Center the wave of the future, or is this approach too risky for most businesses to accept?

Here’s what some of our group members had to say…

Mark Schwedel, Senior Project Manager at Commonwealth of Massachusetts:

Please note that Google is doing many things that are not available in current data centers. They do not have a UPS; instead, they provide battery backup on each server with a 12-volt battery. So, will this be the future? Only when the rest of the world can deliver the same capabilities as Google.

Sean Conner, Datacenter Professional Services Consultant:

Google's design is well suited for an expansion of their cloud environment. However, it's clear that the facility in question does not run at the same level of criticality as most dedicated or hardened sites. This works well in an environment that can tolerate minor equipment loss and failure.

However, most dedicated sites host applications and data that would suffer, should similar equipment loss occur. So, the two approaches cannot be truly compared. It's like trying to compare the heart to the left hand. Both are useful. But if the left hand fails, you probably don't die.

Perhaps a larger question to ask is: What applications, data, or entire enterprises could migrate to a cloud environment? Those that can stand to gain huge savings from Google's approach.


Dennis Cronin, Principal at Gilbane Mission Critical:

This entire dialog is moot because the way of the future is back to DIRECT WATER-COOLED PROCESSORS. All these sites chasing the elusive "FREE" cooling will soon find out that they cannot support the next generation of technology. I suspect that there will be a lot of finger pointing when that occurs, along with even more ad hoc solutions. We need to stick to quality solutions that will support today's AND tomorrow's technology requirements.

David Ibarra, Project Director at DPR Construction:

There is tremendous pressure on large enterprise customers (social, search, etc.) to use the same fleet of servers for all of their applications. The IT architects behind the scenes are now being asked to stop being "geeks" who change hardware every 3 years, and instead to make use of what they have or improve it with lower-cost systems. The recession is also amplifying this trend. A lot of the water-cooled servers and demonstrations held last year have gone silent due to cost and to standardization on hardware for the next 5 years. Many large DC customers understand water-cooling technology and are early adopters; however, realities have driven the effort elsewhere within their organizations.

Customers are pushing high densities (300+ W/sq ft) using best-of-class techniques: containment, free cooling, etc. Large-scale operators are also coming to understand that the building needs to suit the servers' needs, so there is a shift in how a building is configured. Chiller-less data centers have existed since 2006 in countries such as Canada, Ireland, Germany, and Norway. Data centers that are chiller-less and cooling-tower-less, with an extraordinary reduction in air-moving equipment, will be coming online in the US at the end of this year.

Nitin Bhatt, Sr. Engineer at (n)Code Solutions:

Every single data center is unique in its own set-up. Adopting a technology because it suits one geographical location may not be a wise decision; it is wiser to be "orthodox" than to lose the business. If someone can afford an outage, or can shift the workload to a DR site or other sites as a result of thermal events, then yes, they can look into FREE COOLING without chillers. We can also save chiller energy by using VFDs and room-temperature-based chiller control. It is good to have chillers as backup to free cooling.

So what do you think? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Wednesday, July 22, 2009

LinkedIn Discussion on Power Usage Effectiveness (PUE)

Last week I posted the following discussion question in our Computer Room Design networking group at LinkedIn.com. I’m really impressed with the response from group members, so I’d like to share their thoughts with you here:

How can the industry address problems with the reporting of Power Usage Effectiveness (PUE) without undermining the usefulness of the metric?

In a recent post in Data Center Knowledge, Rich Miller points out that the value of Power Usage Effectiveness (PUE) as the leading 'green data center' metric "has become fuzzy due to a disconnect between companies’ desire to market their energy efficiency and the industry’s historic caution about disclosure." [Source: http://www.datacenterknowledge.com/archives/2009/07/13/pue-and-marketing-mischief/]

What are your thoughts on redefining PUE? Are additional refinements the answer? Or does increasing the complexity of PUE undermine the usefulness of the metric?


ANSWERS:

• Gordon Lane, Facilities Coordinator at Petro Canada, explained:
I don't see a real value in PUE.

If you leave unused servers powered on, you can keep your PUE low.

Assume you have a PUE of 2: 2 MW of total power consumption gives you 1 MW for servers. If you can reduce your server consumption to 0.75 MW by turning off comatose servers, total consumption drops to 1.75 MW, which gives you a PUE of 2.33.

I know there would be some reduction in A/C power usage due to less heat output from the turned-off servers, but if you are using legacy A/C units with no VFD-style control, you will not get a corresponding reduction in electrical consumption.
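Gordon's arithmetic is easy to sanity-check in a few lines of Python. This sketch (illustrative numbers only) shows how turning off comatose servers can raise PUE even as total consumption falls, assuming a legacy cooling plant whose draw does not track the IT load:

```python
# PUE = total facility power / IT equipment power (lower is "better")
def pue(total_kw, it_kw):
    return total_kw / it_kw

before = pue(2000, 1000)  # 2 MW total, 1 MW of IT load -> PUE 2.0

# Turn off 250 kW of comatose servers. With legacy, non-VFD cooling,
# the 1 MW facility overhead stays put, so only the IT term shrinks.
after = pue(2000 - 250, 1000 - 250)  # 1.75 MW / 0.75 MW -> PUE ~2.33

print(round(before, 2), round(after, 2))  # prints: 2.0 2.33
```

The "improvement" in the first number and the "regression" in the second describe the same physical change, which is exactly the interpretation trap the discussion points out.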


• Scot Heath, Data Center Specialist, weighed in with:
PUE is difficult to measure in mixed facilities, is muddied by configurations such as Google's every-server-has-a-battery design, and varies widely with Tier level. A universal metric that combines IT capability (total SPECmarks, for example) and availability with respect to energy consumption would be most useful. PUE does have the advantage of being easily understood, and for controlled comparisons (like Tier level, etc.) it is very useful.


• Dave Cole, Manager of Data Center Maintenance Management and Education Services at PTS, responded:
Gordon and Scot bring up very good points. I have mixed feelings about PUE. The concept is easily understood - we want to maximize the power that is actually used for IT work. The interpretation of the value is easy to understand - lower is better (or higher is better in the case of DCiE). The problem I see is that it's almost been made too simplistic. You still have to know your data center and the impact of the decisions you make in regards to design and operation. You can actually raise your PUE by virtualizing or by turning off ghost servers as Gordon pointed out. What needs to be understood is that when you lower the demand side, you should also be making corresponding changes to the supply side. At the end of the day, PUE can be valuable as long as you are also looking at what impacts the value. You need to be able to answer the question of WHY your PUE is changing.


What are your thoughts on the value of Power Usage Effectiveness (PUE) as a metric? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Monday, July 20, 2009

LinkedIn Discussion on Eliminating the Battery String

Thanks to everyone who’s participated in our Computer Room Design networking group at LinkedIn.com so far! We’re off to a great start, with more than 200 members joining in the first two weeks. I’d like to share highlights from one of our recent discussions…

Kevin Woods, Director of Business Development and Sales at i2i, asked:

Eliminating the Battery String? Does anyone have experience with, or an opinion on, the viability of UPS/CPS systems? They incorporate a flywheel between the generator and engine; in the event of a power interruption, the flywheel uses its kinetic energy to power the generator for up to 30 seconds while the engine is engaged.
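For a sense of scale on that 30-second figure: the energy available is just the wheel's rotational kinetic energy, E = ½Iω². Here is a back-of-the-envelope sketch in Python; the inertia, speeds, and load are hypothetical values chosen for illustration, not any vendor's specification:

```python
import math

def ride_through_s(inertia_kg_m2, full_rpm, min_rpm, load_kw):
    """Seconds a flywheel can carry a load while spinning down from
    full speed to the minimum speed the power electronics can use."""
    w_full = full_rpm * 2 * math.pi / 60   # rad/s
    w_min = min_rpm * 2 * math.pi / 60
    # Usable energy = difference in rotational KE (E = 1/2 * I * w^2)
    usable_j = 0.5 * inertia_kg_m2 * (w_full ** 2 - w_min ** 2)
    return usable_j / (load_kw * 1000.0)

# A hypothetical 40 kg*m^2 wheel spinning down from 7,700 to 5,000 rpm
# while carrying a 250 kW critical load rides through for ~30 seconds.
print(round(ride_through_s(40, 7700, 5000, 250), 1))
```

The quadratic dependence on speed is why these wheels spin so fast, and why the usable window is short: supporting the same load for minutes rather than seconds takes a far larger (or faster) wheel, which is where batteries or a started engine take over.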


ANSWERS:

• Mark Schwedel, business partner at EMC and advisor for Green Rack Systems, recommended taking a look at the patent for an improved UPS/CPS system, which employs a high-efficiency uninterruptible power supply function integrated with an engine-generator set, combining short-term protection against momentary power interruptions with longer-term power generation.

• Gordon Lane, Facilities Coordinator at Petro Canada, shared his experience:
Not a direct comparison to an engine/generator set-up, but I have a flywheel UPS system that has been in service for 23 years. Very reliable: we change the bearings every 50,000 hours (about 6 years), and we have just about completed a program of taking the MGs out for cleaning and re-insulation.

It is obviously coming to the end of its life (20 years was the estimated life), but the serviceability has been phenomenal.

We are certainly looking to replace it with a similar system, and I believe Caterpillar has a flywheel UPS solution that they integrate into their diesel offerings.

• Jason Schafer, Senior Analyst at Tier1 Research, explained in part:
My personal issue with flywheel solutions, aside from the reliability that both sides will argue, is that 30 seconds simply isn't enough time when you are talking about the criticality most datacenters need. The most common argument relates to allowing time to manually start a generator; flywheel advocates will say, "if a generator doesn't start in 30 seconds, it's not very likely that it's going to start in 20 minutes." I disagree with this. I've seen, on more than one occasion, generator maintenance performed where, through human error, the EPO switch on the generator was mistakenly left pushed in. There's no way anyone is going to identify the problem and fix it in 30 seconds; I'd be surprised if anyone even got to the generator house in 30 seconds after a power outage. Minutes, however, are a different story.

I'm not saying that flywheels and CPSs don't have their place. I think they do, or rather will at large scale in datacenters, but we're not quite there yet. When virtualization plays a part in the redundancy and fault tolerance of a datacenter, and ride-through in the event of a power outage is more of a convenience than a necessity (à la Google's datacenters, which can lose an entire facility and continue on for the most part), you'll see flywheels gain more traction.


What are your thoughts on the viability of the UPS/CPS systems? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Thursday, July 16, 2009

Introducing PTS’ Data Center Education Series

How extensive is your knowledge about all aspects of your data center? With our newly launched Data Center Education Series, you will never look at your IT and support infrastructure the same way again.

PTS’ Data Center Education Series will help you better assess problems in your data center by providing you with substantive knowledge that you can take back to your data center to improve operations, availability, and efficiency - ultimately reducing operating cost and improving service delivery to your users.

The education series provides students with comprehensive, vendor-neutral, module based training led by the data center design experts from PTS. We discuss the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge you need to understand, operate, manage, and improve your data center.

The Standard Training Series is a three (3) day class held multiple times per year at major cities across the United States, Canada, and Europe. Our next session will take place in Midtown NYC from September 15-17th -- visit our site to view the agenda. Can’t make it to NYC? We'll also be coming to Chicago (October 21-23) and Dallas (December 7-9). I encourage you to reserve your seat today, as space is limited.

The education series will cover the following topics:

• Fundamentals of Data Center Cooling
• Fundamentals of Data Center Management
• Fundamentals of Physical Security
• Fundamentals of Fire Protection
• Fundamentals of Data Center Power
• Fundamentals of Data Center Maintenance
• Fundamentals of Designing a Floor Plan
• Fundamentals of Data Center Cabling
• Fundamentals of Energy Efficiency

Priced at only $1,795 per student, the training includes all course materials in addition to a continental breakfast and lunch each day. Additionally, if you attend with colleagues from work, you'll all receive a 10% discount. You'll quickly realize an ROI from the invaluable knowledge delivered straight from data center experts in this in-depth, intimate training series.

Data Center Education Series – Customized for your needs!

We also offer education programs customized to your IT team’s needs. If you have a large group that needs training, we can come to you and present the topics of most interest to you. Choose your desired location (typically your own facility) and choose the topics you want to see, including any or all of the topics from the standard 3-day training class.

In addition, if you have a topic in mind you don't see currently listed in our offerings, we'll build it for you for only a nominal fee to cover time and material costs.

The Customized Training Series is priced at $15,000 for 2 days or $20,000 for 3 days, plus travel expenses. In addition to the training, you have the option to purchase a one-day data center site assessment for $5,000. The assessment is performed prior to the training so that the training can address the issues it uncovers.

Please join us on LinkedIn & Twitter

PTS is excited to provide our peers with a new online forum in which to discuss the planning, design, engineering, and construction of data centers and computer rooms.

If you’ve been reading our blog for a while, you may already be aware of our Facebook Page at https://www.facebook.com/PTSDataCenter. (A big ‘thank you’ to everyone who’s added themselves as fans!) Today, I’m happy to announce that PTS is further expanding our online presence with the goal of facilitating the open exchange of ideas among small-to-medium sized data center and computer room operators.

At the forefront of this effort is the newly created Computer Room Design networking group on LinkedIn.com. You can check it out by visiting http://www.linkedin.com/groups?gid=2099901. Hosted by the consultants and engineers at PTS Data Center Solutions, the group is an open forum in which professionals can share industry-related news, ideas, issues and experiences.

Membership is free and open to all professionals and vendors in the computer room and data center industry. We hope that industry leaders will look at this as an opportunity to share knowledge, discover new services and opportunities, and expand their networks.

So far, our networking group on LinkedIn.com has attracted broad interest, gaining more than 100 members in the first week alone. Featured discussions include best practices for consolidation strategies, how to combat downtime in the data center, and industry concerns regarding the Power Usage Effectiveness (PUE) metric.

This thought leadership is further supported on PTS’ Twitter profile (http://twitter.com/ptsdatacenter) which features the latest industry news, highlights from the LinkedIn networking group, and insights from our engineers. If you’re on Twitter, please send us a message and we’ll be sure to follow you back!

Monday, June 29, 2009

Energy Efficiency Remains Priority In Spite of Economic Troubles

In lean times, data centers are learning to do more with less. The Aperture Research Institute of Emerson Network Power just released the results of a study showing that, despite the global economic downturn, energy efficiency is still a top-of-mind objective for many data centers. In fact, data center managers are concentrating on resolving efficiency issues as a way to balance increasing demand for IT services with stagnant budgets.

The report reveals that:

Data center managers will look at ways to squeeze more from their existing resources, with 80 percent of those surveyed saying they can create at least 10 percent additional capacity through better management of existing assets. Thirty percent of those surveyed said they could find an additional 20 percent. There is likely to be a revitalized focus on tools that provide insight into resource allocation and use.

Data centers will also look to green initiatives to help manage their operating expenses, with 87 percent of those surveyed having a green initiative in place and the majority expecting to continue or intensify these efforts.


The survey data also suggests that the downturn will have "little effect on the demand for IT services" – a positive indicator for economic recovery. I recommend downloading the full Research Note as a PDF at the Aperture Research Institute’s website. It’s an interesting read.

Wednesday, June 24, 2009

Investing in Energy-Efficient Equipment

In "Taking Control of the Power Bill", Bruce Gain takes a look at how many data center admins are retooling their IT infrastructures’ power needs to accommodate growth and slash costs. He notes that although many admins struggle with having to pay additional costs associated with switching to more eco-efficient server room cooling, airflow designs, and other related equipment, paying for more expensive yet efficient equipment is a smart investment when you look at the big picture.

In order to justify that investment, admins should calculate the ROI offered by different scenarios. By creating models to outline the total cost of ownership for different configurations and doing a full cost-benefit analysis, you can ease the decision-making process. Once you begin making the switch to a more energy-efficient approach, it’s recommended that your organization phase in new equipment as part of the natural growth and evolution of your IT systems.

Michael Petrino, vice president of PTS, also offers his thoughts on the subject, providing a concrete example of cheaper yet less efficient components vs. more power-efficient but costly alternatives. I encourage you to check out the full article in Vol.31, Issue 17 of PROCESSOR.

Tuesday, June 09, 2009

Drug Companies Put Cloud Computing to the Test

Traditionally characterized as "late adopters" when it comes to their use of information technology (IT), major pharmaceutical companies are now setting their sights on cloud computing.

Rick Mullin at Chemical & Engineering News (C&EN) explores how Pfizer, Eli Lilly & Co., Johnson & Johnson, Genentech and other big drug firms are now starting to push data storage and processing onto the Internet to be managed for them by companies such as Amazon, Google, and Microsoft on computers in undisclosed locations. In the cover story, “The New Computing Pioneers”, Mullin explains:

“The advantages of cloud computing to drug companies include storage of large amounts of data as well as lower cost, faster processing of those data. Users are able to employ almost any type of Web-based computing application. Researchers at the Biotechnology & Bioengineering Center at the Medical College of Wisconsin, for example, recently published a paper on the viability of using Amazon's cloud-computing service for low-cost, scalable proteomics data processing in the Journal of Proteome Research (DOI: 10.1021/pr800970z).”


While the savings in terms of cost and time are significant (particularly in terms of accelerated research), this is still new territory. Data security and a lack of standards for distributed storage and processing are issues when you consider the amount of sensitive data that the pharmaceutical sector must manage. Drug makers are left to decide whether it’s smarter to build the necessary infrastructure in-house or to shift their increasing computing burdens to the cloud.

Friday, June 05, 2009

Data Center Professionals Network

The other day I stumbled across the Data Center Professionals Network, a free online community for professionals from around the world who represent a cross section of the industry. Members include data center executives, engineering specialists, equipment suppliers, training companies, real-estate and building companies, colocation and wholesale businesses, and industry analysts. The recently launched networking site enables key players in the industry to easily connect, interact, and develop business opportunities.

According to Maike Mehlert, Director of Communications:

The Data Center Professionals Network has been set up to be a facilitator for doing business. It acts as a one-stop-shop for all aspects of the data center industry, from large corporations looking for co-location or real estate, or data centers looking for equipment suppliers or services, to engineers looking for advice or training.


Features of the social network include a personalized user profile, as well as access to job boards, business directories, press releases, classified ads, white papers, photos, videos and events.

I haven’t had a chance to join yet but if you want to check it out, visit http://www.datacenterprofessionals.net/ (you can sign in using a Ning ID if you already have one). If you do visit the site, post a comment and let me know what you think.

Wednesday, May 20, 2009

How Big Should Large Screen Displays Be In Your Command & Control Room?

Many A/V planners struggle with how big large-screen displays should be in their command & control rooms. There are actually some fairly complicated calculations that can help you determine the minimum character size (sometimes referred to as 'x' size) under a given circumstance. This 'x-size' is defined as the height of the smallest coherent element within the presented material. Think of this in terms of a lower-case letter x.

This lower-case 'x' (which really is the same height as the smallest of the lower-case letters) should subtend no less than 10 arc minutes at the viewer's eye to be recognized at any viewing distance. This becomes more complicated when viewers are located off-axis from the center of the screen, as that requires a larger subtended angle, and there is some effect from colored symbols, the amount of time the image is on screen, etc. As you can imagine, if you were sizing a screen to project a spreadsheet in order to review your data center metrics, you might want to use these calculations, which can be found in this ICIA publication: http://www.infocomm.org/cps/rde/xchg/infocomm/hs.xsl/9229.htm

A good free presentation on this subject can also be found at: http://www.educause.edu/Resources/DesignStandardsandPracticesfor/155327
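For a rough feel of the sizing math described above (a small-angle sketch only; the full ICIA method also accounts for off-axis viewing, color, and on-screen time), the minimum x-height follows directly from the 10-arc-minute rule and the viewing distance:

```python
import math

def min_x_height(viewing_distance_m, arc_minutes=10.0):
    """Smallest legible lower-case 'x' height (in meters) for an
    on-axis viewer at the given distance, assuming the character must
    subtend the given visual angle (10 arc minutes by default)."""
    theta = math.radians(arc_minutes / 60.0)   # arc minutes -> radians
    return 2 * viewing_distance_m * math.tan(theta / 2)

# A viewer 10 m from the display needs an x-height of roughly 29 mm
print(round(min_x_height(10.0) * 1000, 1))  # prints: 29.1
```

Since the angle scales linearly with distance at these small values, a quick rule of thumb falls out: the required x-height is about 2.9 mm per meter of viewing distance for the farthest on-axis viewer.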