Wednesday, May 07, 2008

The National Data Center Energy Efficiency Information Program

Matt Stansberry, editor of SearchDataCenter.com, posted a blog entry on the National Data Center Energy Efficiency Information Program. I'm echoing his post here because energy efficiency is such a critical issue for the industry.

The U.S. Department of Energy (DOE) and U.S. Environmental Protection Agency (EPA) have teamed up on a project that aims to reduce energy consumption in data centers. In addition to providing information and resources that promote energy efficiency, the National Data Center Energy Efficiency Information Program is reaching out to data center operators and owners to collect data on total energy use.

In the words of the EPA’s Andrew Fanara:

We've put out an information request to anyone who has a data center to ask if you would measure the energy consumption of your data center in a standardized way and provide that to us. That will help us get a better handle on what's going on nationally in terms of data center energy consumption.
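For reference, the standardized metric most commonly used in this kind of reporting is Power Usage Effectiveness (PUE): total facility energy divided by the energy delivered to the IT equipment (its reciprocal is DCiE). Here is a minimal sketch of the arithmetic; the meter readings are hypothetical, purely for illustration:

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power Usage Effectiveness: total facility energy divided by
        IT equipment energy. A PUE of 1.0 would mean every watt-hour
        reaches the IT load."""
        if it_equipment_kwh <= 0:
            raise ValueError("IT energy must be positive")
        return total_facility_kwh / it_equipment_kwh

    # Hypothetical monthly meter readings, in kWh
    total_kwh, it_kwh = 180_000, 100_000
    print(f"PUE  = {pue(total_kwh, it_kwh):.2f}")  # PUE  = 1.80
    print(f"DCiE = {it_kwh / total_kwh:.0%}")      # DCiE = 56%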


Hear the EPA’s Andrew Fanara talk about the program in this video from the Uptime Institute Symposium:



If you’d like to get your data center involved, more information can be found at the EPA's ENERGY STAR data center website and the DOE's Save Energy Now data center website.

Friday, April 25, 2008

CFD Modeling for Data Center Cooling

Computational fluid dynamics (CFD) modeling is a valuable tool for understanding the movement of air through a data center, particularly as air-cooling infrastructure grows more complex. By using CFD analysis to eliminate hot spots, companies can lower energy consumption and reduce data center cooling costs.
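Before (or alongside) a full CFD study, a quick back-of-the-envelope airflow check can flag undersized cooling. Below is a minimal sketch using the standard sea-level air relation, CFM ≈ 3,160 × kW / delta-T (°F); the load and temperature-rise figures are hypothetical:

    def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
        """Approximate airflow (CFM) needed to carry away a given IT heat
        load, using the sea-level relation CFM ~= 3160 * kW / dT(F)."""
        return 3160 * it_load_kw / delta_t_f

    # Hypothetical 40 kW room with a 20 F rise across the IT equipment
    print(f"{required_cfm(40, 20):,.0f} CFM")  # 6,320 CFM

CFD earns its keep where this lumped estimate breaks down: predicting where that air actually goes, and which racks it bypasses.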
Mark Fontecchio at SearchDataCenter.com has written a great new article on the subject, entitled “Can CFD modeling save your data center?”. Fontecchio examines the use of CFD analysis as a tool for analyzing both internal and external data center airflow.

In the article, Carl Pappalardo, IT systems engineer for Northeast Utilities, provides a first-hand account of how CFD analysis helped optimize his data center’s cooling efficiency. Allan Warn, a data center manager at ABN AMRO bank, also shares his thoughts on the value of renting versus buying CFD modeling software. Fontecchio rounds out the piece with insights from industry experts, including Ernesto Ferrer, a CFD engineer at Hewlett-Packard Co., and from yours truly, Pete Sacco.

For more information on data center cooling, download my white paper, “Data Center Cooling Best Practices,” at http://www.pts-media.com (PDF format). You can also download additional publications, such as vendor white papers, from the PTS Media Library.

To learn more about how PTS uses CFD modeling in the data center design process, please visit: http://www.ptsdcs.com/cfdservices.asp.

Tuesday, April 15, 2008

Free White Paper on Relative Sensitivity Fire Detection Systems

Fire detection is a challenge in high-end, mission-critical facilities with high-density cooling requirements, due primarily to the varying effectiveness of competing detection systems in high-velocity airflow computer room environments.

In a new white paper, PTS Data Center Solutions’ engineers Suresh Soundararaj and David Admirand, P.E., identify and analyze the effectiveness of relative sensitivity-based fire detection systems in a computer room utilizing a high-density, high-velocity, and high-volume cooling system.

In addition to examining the differences between fixed sensitivity and relative sensitivity smoke detection methodologies, Soundararaj and Admirand detail the results of fire detection tests conducted in PTS’ operational computer room and demo center using AirSense Technology’s Stratos-Micra 25® aspirating smoke detector.
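Conceptually, the two methodologies differ in what triggers the alarm: a fixed-sensitivity detector alarms at an absolute obscuration threshold, while a relative-sensitivity detector alarms when readings climb a set margin above a learned ambient baseline. The sketch below is a conceptual illustration only, not the Stratos-Micra's actual algorithm; all thresholds and readings are hypothetical:

    class FixedDetector:
        """Alarms when smoke obscuration exceeds an absolute threshold."""
        def __init__(self, threshold: float):
            self.threshold = threshold

        def alarm(self, reading: float) -> bool:
            return reading >= self.threshold

    class RelativeDetector:
        """Alarms when a reading rises a set margin above a learned ambient
        baseline, tracked here with an exponential moving average."""
        def __init__(self, margin: float, smoothing: float = 0.01):
            self.margin, self.smoothing, self.baseline = margin, smoothing, None

        def alarm(self, reading: float) -> bool:
            if self.baseline is None:
                self.baseline = reading              # seed on first sample
            triggered = reading >= self.baseline + self.margin
            if not triggered:                        # learn only from quiet air
                self.baseline += self.smoothing * (reading - self.baseline)
            return triggered

    fixed, relative = FixedDetector(0.25), RelativeDetector(0.05)
    for sample in (0.02, 0.02, 0.03, 0.09):          # % obscuration per metre
        print(fixed.alarm(sample), relative.alarm(sample))
    # The relative detector flags the 0.09 reading; the fixed one never does,
    # which is why relative sensitivity suits dilute, high-airflow rooms.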

The illustrated 13-page white paper, entitled “Relative Sensitivity-based Fire Detection Systems used in High Density Computer Rooms with In-Row Air Conditioning Units,” is available for download on our website in PDF format.

Tuesday, March 25, 2008

Reflections on the DatacenterDynamics Conference

Earlier this month, I had the honor of speaking in two separate sessions at the DatacenterDynamics Conference & Expo in New York City.

My first presentation, "The Impact of Numerical Modeling Techniques on Computer Room Design and Operations," was well received by its 60 or so attendees. Based on audience feedback during and after the session, I think people really appreciated the practical examples and case studies of lessons learned since PTS began using 3-D computational fluid dynamics (CFD) software as a tool for designing cooling solutions.

My second session, co-presented with Herman Chan of Raritan Computer, Inc., was equally well received. Our presentation, "Stop Ignoring Rack PDUs," described the research both our companies have undertaken in rack-level, real-time power monitoring of IT equipment.

As part of our presentation, we shared the results of a power usage study of PTS’ computer room. The data revealed that 58% of the room’s total power consumption is consumed, and dissipated as heat, by the IT critical load, which compares favorably with other industry data. In the coming months, Raritan and PTS hope to release a co-written white paper documenting the results of our study.
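To translate that figure into the industry’s usual metrics: an IT share of 58% is, by the standard Green Grid definitions, a DCiE of 0.58, equivalent to a PUE of roughly 1.72. The arithmetic is just the ratio and its reciprocal:

    it_share = 0.58        # fraction of total power reaching the IT load
    dcie, pue = it_share, 1 / it_share
    print(f"DCiE = {dcie:.2f}, PUE = {pue:.2f}")  # DCiE = 0.58, PUE = 1.72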

Overall, the DatacenterDynamics show was even better attended and better sponsored than it was in 2007. I estimate some 500-700 people attended the event. If last year’s trend held, about 50% of them were data center operators; the balance were consultants, vendors, and others.

This show has become my favorite regional data center industry event because of its unique single-day format and the quality of the content provided by its featured speakers (and I’m not just saying that because I’m a presenter). Many shows of this type turn into a commercial for the vendors that pay good money to sponsor the event.

What sets DatacenterDynamics apart is that the event organizers require each presentation to be consultative in nature. Additionally, they make every effort to review and comment on each presentation before the event. If you haven’t attended this key data center industry event yet, I hope you’ll get the chance to do so in the near future.

Have you attended DataCenterDynamics? What sessions did you find most valuable? Please leave a comment to share your experience.

Tuesday, February 26, 2008

Are the Uptime Institute's Data Center Rating Tiers Out of Date?

Let me start by saying I have the utmost respect for the Uptime Institute’s Pitt Turner, P.E., John Seader, P.E., and Ken Brill and the work they have done furthering the cause of providing some standards to an otherwise standard-less subject like data center design. However, as a data center designer I feel their definitive work, Tier Classifications Define Site Infrastructure Performance, has passed its prime.

The Institute’s tier system has been in use since 1995, which is positively ancient in the world of IT.

In its latest revision, the Uptime Institute’s Tier Performance Standards have morphed from a tool that helped IT and corporate decision makers weigh the differences between data center investments into a showcase for consulting services that push certification against the Institute’s own standard.

While the data behind the standard has been culled from real client experiences, the analysis of that data has been interpreted by only one expert company, ComputerSite Engineering, which works in close collaboration with the Uptime Institute. Surely the standard could be vastly improved by the outside opinion and influence of the many equally expert data center design firms in the field.

Case in point: the Uptime Institute has repeatedly defended the notion that there is no such thing as a partially tier-conforming site (Tier I+, almost Tier III, and so on). They argue that the rating is definitive and that such qualifiers are a misuse of the rating guide. While I understand the argument that a site is only as good as its weakest link, to say that a site incorporating most, but not all, of the elements of a tier definition gains nothing in availability is mathematically and experientially wrong.

PTS’ actual experiences bear this out. Our clients that have all the elements of a Tier II site except the second generator are clearly better off than those with no UPS and/or air conditioning redundancy (Tier I). If not for a Tier I+ designation, how do they suggest we account for the vast difference in real availability between the two sites?
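The math behind that intuition is simple redundancy arithmetic: assuming independent failures, a second generator collapses the probability that both units are down at once. The availability figures below are hypothetical, chosen only to show the shape of the calculation:

    def parallel_availability(a: float, n: int) -> float:
        """Availability of n identical units in parallel, assuming independent
        failures: 1 minus the probability that all n are down at once."""
        return 1 - (1 - a) ** n

    gen = 0.99  # hypothetical availability of a single generator
    print(f"One generator:  {parallel_availability(gen, 1):.4%}")  # 99.0000%
    print(f"Two generators: {parallel_availability(gen, 2):.4%}")  # 99.9900%

A site one generator short of Tier II is not the availability equal of a site with no redundancy anywhere; the gap between those two numbers is exactly what a "Tier I+" label would capture.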

It is interesting that most data center consulting, design, and engineering companies nationwide utilize elements of the white paper as a communications bridge to the non-facility engineering community, but not as part of their design process. In fact, most have developed and utilize their own internal rating guides.

While I will continue to draw on the Institute’s considerable expertise in guiding PTS’ clients through their data center investment decisions, I suggest that clients would be wise not to put all of their eggs in the Institute’s basket at this point in time.

What is your outlook on the Uptime Institute’s Tier Performance Standards? Is the four-tier perspective outdated or is it still a meaningful industry standard?

Friday, February 15, 2008

Facebook user? Add yourself as a fan of our blog!

Do you have a Facebook account? If so, you can help spread the word about the Data Center Design blog by joining our newly created Facebook Page. Be among the first to hear about blog updates, speaking engagements and other upcoming events.

Click here to view the Facebook Page for PTS Data Center Solutions and add yourself as a Fan.

Wednesday, February 13, 2008

Are “free” computer room site assessment services worth the money you pay for them?

It has become commonplace for the myriad IT and support infrastructure OEMs to offer free site assessment services in an effort to woo clients into purchasing their equipment.

While it was already difficult enough for small- to mid-size design consulting service providers to build credibility and brand-identity in the ultra-competitive world of computer room design, in the past few years these firms have seen some of their most valuable vendor partners become chief competitors.

This is not just a case of sour grapes. The design services provided by most OEMs do their clients a disservice. Clients are usually given only the part of the picture that suits the manufacturer and are forced to fill in the blanks. Unfortunately, the blanks are often not even identified, which leads to some very unhappy bean counters.

One leading power and cooling system manufacturer’s entire go-to-market strategy is based on allowing inexperienced enthusiasts to represent themselves as capable designers by giving them access to an online configuration tool. Being an expert in its use myself, I can safely say the information it provides is rudimentary at best; our team at PTS Data Center Solutions uses the tool only for ordering purposes, never for design. Yet these online tools are being used by the manufacturer’s own systems engineers, reseller partners, and end users to try to simplify the inherently complicated subject of computer room support infrastructure design.

The manufacturer’s configuration tool recommends solutions only for the equipment the manufacturer makes. Much of the complete solution is missing, including the infrastructure they don’t sell, the labor to install any of it, and the engineering services to produce the design documentation required to file the necessary permits. Worse, little advice is provided as to the best project delivery methodology. While I would be the first to admit the traditional consulting engineering community has been slow to adopt the latest design practices, the truth remains that, as a matter of course, changes to facilities still require the services of a licensed engineer. This includes the sizing of the power and cooling infrastructure.
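To give a sense of the sizing arithmetic such an engineer signs off on, here is the basic heat-load-to-cooling-capacity conversion. The conversion constants are standard; the load figure and safety factor are hypothetical:

    BTU_PER_WATT_HR = 3.412   # one watt dissipates ~3.412 BTU per hour
    BTU_PER_TON = 12_000      # one ton of cooling = 12,000 BTU per hour

    def cooling_tons(it_load_kw: float, safety_factor: float = 1.2) -> float:
        """Convert an IT heat load to required cooling tonnage, with margin
        for lights, people, and building envelope gains."""
        btu_hr = it_load_kw * 1000 * BTU_PER_WATT_HR
        return safety_factor * btu_hr / BTU_PER_TON

    print(f"{cooling_tons(50):.1f} tons for a 50 kW IT load")  # 17.1 tons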

That’s not to say such tools don’t have their place. But any consultant-recommended solution should be based on sound engineering using the latest technologies, such as computational fluid dynamics (CFD) modeling.

Individuals seeking computer room solutions are better served by hiring experienced, licensed, capable design engineers who are well versed in all of the major infrastructure solutions. This ensures that, for a moderate amount of money spent in the planning stage, you come away with a properly designed project with a well-defined scope, schedule, and budget.

Tuesday, February 05, 2008

PTS’ 2008 Predictions for the Data Center Industry

I consider myself a veteran of the data center design industry. Additionally, I have the good fortune to visit as many as fifty data centers and computer rooms in the course of a year. And while I have seen good ones and bad ones, they all seem to share certain commonalities. As a result of my experiences and research covering a broad scope of concerns, I have compiled a list of the challenges the data center industry as a whole will face over the next few years.

The talent pool of senior-level experts is disappearing. Worse yet, as a nation we have not educated tomorrow’s engineers and technicians. This severe lack of experts will be an ever-present obstacle to sustainable corporate growth as technology evolves. In turn, it threatens the nation’s overall economic growth and will cause the United States to lose its position as the world’s technical leader. Our only saving grace will be to embrace the new world order and adapt to global solutions.

The original equipment manufacturers will own the data center design space. This is their best recourse for maintaining the ability to sell their ever-improving infrastructure to customers with old, outdated, ill-prepared facilities. A further prediction: it will be difficult for these OEMs to provide heterogeneous, rather than self-serving, designs. And even if they can, will clients believe it?

Big surprise: data centers and computer rooms nationwide are running out of power, cooling, and space. Furthermore, given the high capital cost and the time it takes to undertake a computer room improvement project, many operators will choose not to act. My prediction is that this will lead to business-impacting disruptions for at least 20% of businesses over the next three years.

We will run out of utility power generating capacity as a nation before the technical revolution is over, and no amount of ‘green’ building will prevent it. Like virtualization has been for processing capacity, ‘greenness’ is only an incremental band-aid on the proverbial bullet wound. My prediction is that the U.S. will experience more wide-area outages, such as the one in August of 2003, in the near future.

As the saying goes, ‘necessity is the mother of invention.’ We had better hope so. My final prediction is that our technological leadership as a nation will be saved not by a band-aid application, by a grassroots conservation effort, or by sheer will alone. Ultimately, it will be saved by a sweeping improvement in the efficiency with which IT infrastructure uses power. Materials research within the semiconductor industry will yield a massive reduction in the power dissipation of IT infrastructure. As a result, companies worldwide will take advantage by refreshing their IT equipment, allowing them to survive on their existing, aging facilities and support infrastructure.

What is your number one prediction for the industry in the coming years? Whether you’re optimistic or foresee doom and gloom, I would love to hear what you think.

Wednesday, January 30, 2008

2008 Data Center Industry Trends

A recent article from Network World points to security as the dominant issue for the data center design industry in 2008. Potential threats identified by experts include:

  • Malware attacks that piggyback on major events such as the ’08 Olympics or the U.S. presidential election
  • The potential for the first serious security exploit of corporate VoIP networks
  • Greater malware exposure for users as participation in Web 2.0 sites continues to grow

Other important issues for 2008, as identified by Network World staff, include:

  • The early adoption of 802.11n WLAN technology
  • A shift in IT’s approach to managing mission-critical environments as virtualization and green computing are deployed more broadly
  • The growing acceptance of open source technology at the corporate level
  • Tightly controlled budgets as IT spending growth drops (particularly in response to news of economic recession)
  • Increased demand for “IT hybrids” – professionals with both business acumen and technical know-how – as the most sought-after hires

Source: Security dominates 2008 IT agenda