Data Center Planning & Feasibility Consulting Services
Sure it's Time to Upgrade your Data Center, but is your Plan Feasible?
Data center designers and builders are often the unsung heroes of the data center. But not in CRN, which recognized them in its 2015 Top Data Center Designers and Builders list.
According to a Ponemon Institute study, an outage can cost an organization an average of about $5,000 per minute. That’s $300,000 in just an hour.
Peter Sacco, President & Founder, PTS Data Center Solutions, recently wrote a new white paper on UPS Configuration Availability Rankings.
Reliance on technology has elevated data center availability from a lofty goal to an absolute necessity. As such, the configuration of the UPS system is vitally important in achieving high-availability with respect to the power side of the universe. This paper explores a number of different UPS configurations, how they contribute to availability, and who manufactures them.
Pete's conclusion is that UPS configurations depend upon a number of factors including: level of availability required/desired (i.e. Tier class), IT load requirements, power input, and budget. Understanding these factors and their impact on UPS configuration and design will result in a suitable UPS purchase to meet user and IT load requirements.
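The availability ranking of these configurations can be made concrete with standard series/parallel reliability math. The sketch below is not from the white paper; the 0.999 per-module availability is a hypothetical figure, and the point is the ordering of the configurations, not the absolute numbers.

```python
# Illustrative comparison of common UPS configurations, assuming each UPS
# module is an independent unit with availability A (0.999 is hypothetical).

def n_plus_1(a, n):
    """N+1 parallel-redundant: n+1 modules installed where n carry the load;
    the system fails only if two or more modules are down."""
    total = n + 1
    # P(zero module failures) + P(exactly one module failure)
    return a**total + total * (1 - a) * a**(total - 1)

def two_n(a):
    """2N: two fully independent paths; the system fails only if both fail."""
    return 1 - (1 - a)**2

a = 0.999  # hypothetical single-module availability
print(f"N   (single module): {a:.6f}")
print(f"N+1 (2 of 3 needed): {n_plus_1(a, 2):.6f}")
print(f"2N  (dual path):     {two_n(a):.6f}")
```

Each step up in redundancy shrinks the unavailability by roughly an order of magnitude, which is why the required tier class and budget jointly drive the configuration choice.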
For the complete white paper, please visit the PTS Media Library (log-in required) or contact us to receive a complimentary PDF.
Typically PTS focuses on high-tech design considerations and solutions for your data center, computer room, server room, or network operations center. However, we are extremely impressed with the performance of the CleanZone Premier solution from UK company Dycem.
The product is designed to attract, collect, and retain contaminating particles which collect on your shoes before you enter the mission critical room.
To learn more about how Dycem products work, watch the Dycem video on the PTS Data Center Design Channel or contact PTS.
Let me start by saying I have the utmost respect for the Uptime Institute’s Pitt Turner, P.E., John Seader, P.E., and Ken Brill and the work they have done furthering the cause of providing some standards to an otherwise standard-less subject like data center design. However, as a data center designer I feel their definitive work, Tier Classifications Define Site Infrastructure Performance, has passed its prime.
The Institute’s systems have been in use since 1995, which is positively ancient in the world of IT.
In its latest revision, the Uptime Institute’s Tier Performance Standards morphed from a tool for IT and corporate decision makers to consider the differences between different data center investments into a case study for consulting services pushing for certification against their standard.
While the data their standard is based upon has been culled from real client experiences, the analysis of that data has been interpreted by only one expert company, ComputerSite Engineering, which works in close collaboration with the Uptime Institute. Surely, the standard could be vastly improved by the outside opinions and influence of the many equally expert data center design firms that exist.
Case in point, the Uptime Institute has repeatedly defended the notion that there is no such thing as a partial tier-conforming site (Tier I+, almost Tier III, etc.). They argue that the rating is definitive and that to say such things is a misuse of the rating guide. While I understand the argument that a site is only as good as its weakest link, to claim that a site incorporating most, but not all, of the elements of a tier definition is no better than the tier below it is mathematically and experientially wrong.
PTS’ actual experiences bear this out. Our clients that have all the elements of a Tier II site except for the second generator are clearly better off than those with no UPS and/or air conditioning redundancy (Tier I). Therefore, if not with Tier I+, how do they propose we account for the vast gap between the real availability of the two sites?
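A back-of-the-envelope model illustrates that gap. The per-unit availability figures below are assumptions chosen only for illustration, and the simplified series/parallel topology is mine, not the Institute's; the point is the ordering of the three sites, not the absolute numbers.

```python
# Hypothetical per-unit availabilities; the ordering, not the values, matters.
UPS, GEN, CRAC = 0.999, 0.995, 0.998

def parallel(a, n=2):
    """n redundant units: the function fails only if all n units fail."""
    return 1 - (1 - a)**n

# Tier I: single UPS, single generator, single CRAC unit, all in series.
tier1 = UPS * GEN * CRAC

# "Tier I+": redundant UPS and cooling, but still only one generator.
tier1_plus = parallel(UPS) * GEN * parallel(CRAC)

# Tier II: redundant components throughout.
tier2 = parallel(UPS) * parallel(GEN) * parallel(CRAC)

print(f"Tier I  ~ {tier1:.5f}")
print(f"Tier I+ ~ {tier1_plus:.5f}")
print(f"Tier II ~ {tier2:.5f}")
```

Under any reasonable component figures, the partial site lands strictly between Tier I and Tier II, which is exactly the distinction a single definitive rating erases.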
It is interesting that most data center consulting, design, and engineering companies nationwide utilize elements of the white paper as a communications bridge to the non-facility engineering community, but not as part of their design process. In fact, most have developed and utilize their own internal rating guides.
While I will continue to utilize their indisputable expertise as a part of my own interpretation in directing PTS’ clients with their data center investment decisions, I suggest that clients would be wise not to put all of their eggs in the Institute’s basket at this point in time.
What is your outlook on the Uptime Institute’s Tier Performance Standards? Is the four-tier perspective outdated or is it still a meaningful industry standard?
When working to design an effective data center cooling system, there are a number of commonly deployed data center cooling techniques that should not be implemented. They are:
Lowering the supply air temperature

Simply making the air colder will not solve a data center cooling problem. The root of the problem is either insufficient cold-air volume at the equipment inlet or insufficient removal of hot return air from the equipment outlet. All things being equal, any piece of equipment with internal fans will cool itself. Typically, equipment manufacturers do not even specify an inlet temperature; they usually specify only the amount of clear space that must be maintained at the front and rear of the equipment to ensure adequate convection.
Adding roof-mounted fans or under-cabinet cut-outs

CFD analysis conclusively shows that roof-mounted fans and under-cabinet air cut-outs will not sufficiently cool a cabinet unless air baffles are used to isolate the cold-air and hot-air sections. Without baffles, a roof-mounted fan draws not only the desired hot air at the rear of the cabinet, but also a volume of cold air from the front before the IT load can use it. This serves only to cool the hot air, which we have already established is a bad strategy. Similarly, a cut-out in the access floor directly beneath the cabinet does deliver cold air to the inlet of the IT loads, but it also leaks cold air into the hot aisle. Again, this only cools the hot air.
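Why volume, not temperature, is the binding constraint can be checked with the standard sensible-heat relation for air, Q[BTU/hr] = 1.08 × CFM × ΔT[°F]. The 5 kW cabinet load and 20 °F temperature rise in this sketch are illustrative assumptions, not figures from the text.

```python
# Cold-air volume required per cabinet, from the sensible-heat relation
# Q[BTU/hr] = 1.08 * CFM * dT[°F]. Load and delta-T below are assumed.

WATTS_TO_BTU_HR = 3.412

def required_cfm(load_watts, delta_t_f):
    """Airflow (cubic feet per minute) needed to carry away load_watts
    with an inlet-to-outlet temperature rise of delta_t_f degrees F."""
    return load_watts * WATTS_TO_BTU_HR / (1.08 * delta_t_f)

cfm = required_cfm(5000, 20)  # assumed 5 kW cabinet, 20 F rise
print(f"~{cfm:.0f} CFM of cold air needed at the inlet")
```

Note that the required volume depends only on the load and the temperature rise across the equipment; a colder supply that never reaches the inlet in sufficient volume does nothing.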
Isolating high-density equipment
While isolating high-density equipment isn’t always a bad idea, special considerations must be made. Isolating the hot air is, in fact, a good idea. The problem lies in drawing a sufficient volume of cold air from the raised floor. Even when enough perforated floor tiles are dedicated to providing a sufficient air volume, too much hot air recirculates from the back of the equipment to the front air inlet, where it mixes with the cold air.
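The recirculation effect can be sketched with a simple mass-weighted mixing model. The supply/exhaust temperatures and the 40% recirculation fraction below are assumed figures for illustration; the takeaway is that once hot exhaust mixes into the inlet stream, colder supply air buys back only a fraction of the increase.

```python
# Simple mixing model (assumed figures): the effective equipment inlet
# temperature when a fraction r of recirculated hot exhaust mixes with
# the cold supply air.

def inlet_temp(t_supply_f, t_return_f, r):
    """Mass-weighted mix: fraction r of the inlet stream is recirculated
    exhaust, the remaining (1 - r) is cold supply air."""
    return (1 - r) * t_supply_f + r * t_return_f

r = 0.4  # assumed 40% recirculation at the top of the rack
base = inlet_temp(65, 95, r)    # inlet with a 65 F supply
colder = inlet_temp(60, 95, r)  # supply dropped 5 F
print(f"inlet at 65 F supply: {base:.1f} F")
print(f"inlet at 60 F supply: {colder:.1f} F (only {base - colder:.1f} F cooler)")
```

With 40% recirculation, a 5 °F colder supply lowers the inlet by only 3 °F, which is why containment (fixing r), not colder air, is the effective lever.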
For more information on data center cooling, please download my newest White Paper, Data Center Cooling Best Practices, at http://www.ptsdcs.com/white_papers.asp. You can also view additional publications such as the following at our Vendor White Papers page: