Wednesday, September 30, 2009

New York Jets Power Camp 2009

Thank you to everyone who joined PTS Data Center Solutions and the New York Jets last night at Power Camp 2009, hosted at the new Jets Training Facility in Florham Park, NJ.

We kicked off the training event with the Power Players Buffet … after all, if you want to be a pro you have to eat like a pro. There were about 80 people in attendance and it was great getting the opportunity to talk with everyone.

Together with the folks from APC, Avocent and Packet Power, we tackled a range of data center power issues during our Power Drills, including techniques for effective management, monitoring, availability and control.

Mike Petrino, vice president of PTS, gave the crowd a tour of the data center we designed for the NY Jets Training Facility:

All in all, the Power Camp training event was a huge success. Highlights for me included our field goal kicking contest, hanging out with NY Jets legend Bruce Harper and coaching my junior football team, the Franklin Lakes War Eagles, during a scrimmage on the Jets practice field under the lights.

Talking with Bruce Harper, the all-time kick returner in New York Jets history, at Power Camp:

Coaching the Franklin Lakes War Eagles on the Jets practice field:

Field goal kicking contest for attendees of PTS' Power Camp:

I hope everyone who attended enjoyed the event as much as I did. If you want to see more photos from this year’s Power Camp, please visit our Facebook Page at

Sunday, September 27, 2009

Inflection point: Build for Higher Density or Plan for Efficient IT?

Over the last decade, the focus of the data center industry has been to plan and renovate feverishly to support higher densities. That's not much of a surprise: there was actually an uptick in the pace of Moore's Law over the last decade, as processing power, processing density, and power consumption per rack unit all rose faster than the industry had ever experienced.

Over the last few years, server manufacturers started to pay attention to power consumption, as many of their clients couldn't deploy new technology, or had to wait until renovations or new facilities became available, before upgrading to newer servers that consumed more power in a smaller footprint. You are starting to see products on the market that reverse the decade-long trend and use less power, from innovations in operating systems that fine-tune power usage, as shown in this recent article by IBM:

To Intel, whose new Xeon 5500 series processors deliver up to 2.25x better performance and up to 3.5x improved system bandwidth in the same power envelope as the Intel Xeon 5400 series. The processor also uses up to 50% lower idle power during low-utilization periods.

What is this forward thinking leading to? I believe that in the next couple of years we will cross an inflection point where the high-density environments we have built, or are constructing, will outpace the power consumption demands of the new processors and servers we need to deploy. It is difficult to say exactly when the big power-saving breakthrough will happen at the chip level, but I think we all know it will happen. You don't want to be the last guy who built a 1 MW facility at 300 watts per square foot that now only needs 500 kW at 150 watts per square foot. We often consider modular solutions that can scale up our density and capacity, but keep in mind that someday soon we may need to consume less power and cooling, so we should make sure that our design is also efficient at 50% or 30% of its design capacity. This matters not just because of the inflection point Julius Neudorfer describes in the article below, where server power consumption drops below data center power demand, but because our business requirements can also change to where we won't need as much processing power to run our business.
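A rough way to see why efficiency at partial load matters: UPS and cooling plants carry fixed losses, so a system that looks efficient at full design load can be much worse at 30% load. The loss coefficients in this sketch are illustrative assumptions, not measurements from any real product:

```python
def ups_efficiency(load_fraction, fixed_loss=0.03, proportional_loss=0.03):
    """Toy UPS model: a fixed loss (as a fraction of rated capacity) plus
    a loss proportional to load. Both coefficients are assumed values."""
    output = load_fraction
    losses = fixed_loss + proportional_loss * load_fraction
    return output / (output + losses)

for frac in (1.0, 0.5, 0.3):
    print(f"{frac:.0%} load -> {ups_efficiency(frac):.1%} efficient")
# 100% load -> 94.3% efficient
# 50% load -> 91.7% efficient
# 30% load -> 88.5% efficient
```

The fixed-loss term is what drags efficiency down as the facility runs further below its design point, which is exactly the scenario described above.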

Friday, September 18, 2009

PTS & The New York Jets Invite You to Power Camp '09

PTS, in collaboration with the New York Jets, is excited to invite you to Power Camp ’09. Tackle power issues before they result in a defensive meltdown and make sure that your Data Center is powered up for many more winning seasons!

The three hour Power Camp includes a buffet dinner and 3 intense drills that teach the latest techniques and solutions for effective power monitoring and control, followed by a tour of the state-of-the-art data center PTS engineered and built for the New York Jets. Be sure to stay for the field goal kicking contest and to meet famous NY Jet, Bruce Harper!

For more information and to view the agenda, please visit our website at

If you’d like to attend Power Camp ’09, please RSVP by 9/23/2009 to Amy Yencer, 201-337-3833 x128.

Wednesday, September 02, 2009

Role of the CIO in Business Continuity, Disaster Recovery

Ralph DeFrangesco at ITBusinessEdge posted the following discussion question in their forums recently.
Corporations often confuse business continuity and disaster recovery. They also tend to put the CIO in charge of both. Should the CIO be the point person for both BC and DR? If so, why? If not why and who should it be?
It resulted in an interesting debate on the role of the CIO, so I reposted it on LinkedIn so that the members of our Computer Room Design Group could weigh in. Here are some of the insights they had to share...

Ken Cameron, IT Infrastructure & Outsourcing Executive:
The CIO should own Disaster Recovery. The business side (someone in Risk Management, Corporate Security, etc.) should own Business Continuity. The IT group should be represented on the Business Continuity council. IT plays a major role in Business Continuity, but does NOT own it.

IF the CIO gets Business Continuity, it needs to be made clear that his BCP responsibility is NOT part of his IT responsibility.

Christopher Furey, Managing Partner at Imaginamics:
This is one of those issues where it's a bit like asking the fox to watch the hen house. Only very small or inexperienced management teams put IT in charge of BC. The scope of the risk analysis is usually way beyond the skills of an IT Director or CIO, and even when it's not, business risk oversight is critical.

Ken is spot on. The CIO must be in charge of DR and IT Systems Continuity but not BC. Any CIO who wants to keep their job will work in tandem with Risk Management and key stakeholders on the business side to ensure critical business functions and the systems that support them are well considered.

BC is in the realm of Ops and is best handled with strong leadership (or at least advocacy) from the CFO, COO or GM - or the partners and owners in smaller firms. Management inadequately funds and supports BC unless it understands the risk and process in total beyond simply recovering IT systems or data.

Though it's often mentioned in the same breath with DR, BC is not an IT role, but ensuring the operational assurance of the key IT systems is.

K.M. Sreekumar, Consultant & Project engineer at Schnabel DC Consultants India Pvt Ltd:
IT is only an enabler of the business; business continuity, though very critical, is not the business itself. The business overall is, and should be, the responsibility of the CEO, so we are back to square one: the CIO and CTO will only aid the BC plan and are fully responsible for the IT and technology part. For example, the CIO should not be responsible even for analyzing the business impact of an IT blackout. Secondly, threats to the business vary in nature (pandemics, supplier lockouts, financial instability) and very few are IT-related.

Another perspective would be to treat IT as a business and CIO be responsible for Business continuity of IT. Similar to what Christopher Furey wrote.

What are your thoughts on the role of the CIO and IT in relation to business continuity? Please share your experience by posting a comment here, or by continuing the discussion in the
Computer Room Design Group on LinkedIn.

Thursday, August 27, 2009

A Closer Look at PTS’ Data Center Education Series

Thanks to everyone who’s expressed interest in participating in our upcoming Data Center Education Series! The response has been very positive and we’re looking forward to the first session which will be held at our headquarters in Franklin Lakes, NJ from September 15 to 17, 2009.

A few of you have emailed me to ask for more information on what will be covered during the training sessions, so I’m posting the course descriptions here for your convenience:

Data Center Planning: Establishing a Floor Plan (Time: 2-3 hours) - A floor plan strongly affects the power density capability and electrical efficiency of a data center, yet many floor plans are established through incremental deployment without a central plan. Once a poor floor plan has been deployed, it is often difficult or impossible to recover the resulting loss of performance. This course provides structured floor plan guidelines for defining room layouts and for establishing IT equipment layouts within existing rooms.

Fire Protection Methods in the Data Center (Time: 1 hour) - Fire in any area of a business can result in millions of dollars of losses and even business failure, but fire in the data center represents one of the greatest risks to any company or institution. This foundational course introduces the basic theory, prevention, detection and suppression of fire specific to data centers. At the completion of this course you will have a better understanding of the safeguarding methods used to protect a data center's hottest commodity: information.

Fundamentals of Cooling (Time: 3-4 hours) - In every data center excess heat has the potential to create downtime. In addition, the performance and lifespan of IT equipment is directly related to the efficiency of cooling equipment. If you’re involved with the operation of computing equipment it's critical that you understand the importance of cooling in the data center environment. This foundational course explains the fundamentals of air conditioning systems, covering such topics as the refrigeration cycle, ideal gas law, condensation, convection and radiation, heat generation and transfer, and precision vs. comfort cooling.

Fundamentals of Power (Time: 3-4 hours) - Before you can understand the power needs of the data center, you must first understand the basic concepts and terms related to power measurement, electric power forms, and its generation. This elementary level course explains these power elements and some of today's power problems.

Fundamentals of Physical Security (Time: 1 hour) - Today's Data Centers must consider not only network security, but also physical security. This course defines what physical security means for mission critical facilities and identifies what assets it needs to protect. Also discussed are the different means to control facility access, common physical security methods, security devices, and budget considerations related to physical security.

Cabling Strategies for the Data Center (Time: 2 hours) - From a cost perspective, building and operating a data center represents a significant piece of any Information Technology (IT) budget. The key to the success of any data center is the proper design and implementation of core critical infrastructure components. Cabling infrastructure, in particular, is an important area to consider when designing and managing any data center. The cabling infrastructure encompasses all data cables that are part of the data center, as well as all of the power cables necessary to ensure power to all of the loads. It is important to note that cable trays and cable management devices are critical to the support of IT infrastructure as they help to reduce the likelihood of downtime due to human error and overheating. This course will address the basics of cabling infrastructure and will discuss cabling installation practices, cable management strategies and cable maintenance practices. We will take an in-depth look at both data cabling and power cabling.

Data Center Management (Time: 2 hours) - There are a number of management tools currently available to help manage the data center from a number of perspectives - network, availability, asset management, infrastructure monitoring and control. Which of these tools are applicable to your data center? Which tools will best meet your needs?

Data Center Maintenance (Time: 2 hours) - Whether you own, rent or co-locate, whether your data center is 1,000 square feet or 100,000 square feet, whether you are dealing with legacy equipment or the latest high density configurations, you face the same issues with managing the maintenance of your equipment. Data center maintenance is essential to properly maintain and extend the life of your valuable data center infrastructure and prevent unplanned downtime, yet it is often relegated to spreadsheets and paper-based systems. All too often, critical maintenance is overlooked because someone didn’t remember to schedule it or have the right spare parts, tools or personnel available to properly perform the tasks required. This course will discuss the growing use of computerized maintenance management systems (CMMS), including those designed specifically for the data center, and how the use of these systems can improve maintenance management in your data center.

Data Center Energy Efficiency (Time: 2 hours) - Is the concept of "greening" the data center hype or reality? This course will discuss practical and effective methods to make your data center more efficient to yield immediate cost savings.

Our instructors will tie in case studies and real world situations to provide concrete examples of how to apply the information learned in the course. Time each day will be spent on open discussion, allowing sharing of industry experience with your peers.

If you haven’t signed up already, please visit to reserve your seat. Priced at only $999 per student, the vendor-neutral, module based training includes all course materials in addition to a continental breakfast and lunch each day. SPECIAL OFFER: If you attend with other colleagues from work, you'll all receive a 10% discount.

Our goal is to create a training series that presents the topics of most interest and value to the student. That being said, we welcome suggestions for how we can continue to improve the series. Is a three (3) day training program a good fit for your schedule? Is there a course you'd like to see added? What type of lunch should we serve? Feel free to post a comment to tell us what you think.

Tuesday, August 18, 2009

Data Center Education Series Sept. Training - IMPORTANT UPDATES

The Data Center Education Series training event on September 15-17 has been moved from NYC to the PTS Headquarters in Franklin Lakes, New Jersey.

The event cost has also been changed and is now just $999 per attendee.

For more details and the full agenda, visit our Data Center Education Series page. Hope to see you there!

Expert Data Center Education & Training In NYC

Just a quick reminder for all our readers: PTS' Data Center Education Series is coming to midtown NYC from September 15-17.

UPDATE 08/20/2009: The PTS Data Center Education Series for September 15-17 has been relocated to our headquarters in Franklin Lakes, NJ.

The three (3) day class provides students with comprehensive, vendor-neutral, module based training that covers the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge you need to understand, operate, manage, and improve your data center. The training includes all course materials in addition to a continental breakfast and lunch each day. (Best of all, if you attend with other colleagues from work, you all receive a 10% discount.)

To view the agenda and reserve your seat, please visit our website at

While I’m at it, I’d also like to take a moment to thank everyone who’s joined our Computer Room Design Group on LinkedIn. Your support and participation has helped the group get off to a great start, with over 300 data center and IT pros joining in the first month alone!

Here’s a quick snapshot of some of the recent discussions that have been posted:

  • Will the cloud kill the data center?
  • Hot & Cold Aisle Containment. How do you implement it when you have different cabs, heights and gaps?
  • Who really cares most about Enterprise Data Center Efficiency? CIO? CFO? IT?
  • TIA standard TIA-942: Tier 2 takes 3 to 6 months to implement, Tier 3 takes 15 to 20 months to implement. Is this because of record keeping to demonstrate uptime?

Everyone is welcome to join! It’s a great opportunity to share news, ask questions, offer advice, and connect with your peers. Check it out at

Wednesday, July 29, 2009

Google Cools Data Center Without Chillers; Data Center Pros Weigh-in

Google’s chiller-less data center in Belgium has received a lot of buzz. The facility relies upon free air cooling to keep its servers cool and will shift the computing load to other data centers when the weather gets too hot.

It's an approach that stands to greatly improve energy efficiency. However, as e-shelter explained to Techworld, there are some risks. For instance, it's possible that airborne particulates could cause havoc with hard disk drives and dampness from heavy humidity could cause electrical problems. To see what other data center professionals think of this cooling strategy, I posed the following question to the Computer Room Design Group on LinkedIn:

Is Google's Chiller-less Data Center the wave of the future, or is this approach too risky for most businesses to accept?

Here’s what some of our group members had to say…

Mark Schwedel, Senior Project Manager at Commonwealth of Massachusetts:

Please note that Google is doing many things that are not available in current data centers. They do not have a UPS; instead, they do battery backup on each server with a 12-volt battery. So will this be the future? Only when the rest of the world can deliver the same capabilities as Google.

Sean Conner, Datacenter Professional Services Consultant:

Google's design is well suited for an expansion of their cloud environment. However, it's clear that the facility in question does not run at the same level of criticality as most dedicated or hardened sites. This works well in an environment that can tolerate minor equipment loss and failure.

However, most dedicated sites host applications and data that would suffer, should similar equipment loss occur. So, the two approaches cannot be truly compared. It's like trying to compare the heart to the left hand. Both are useful. But if the left hand fails, you probably don't die.

Perhaps a larger question to ask is: What applications, data, or entire enterprises could migrate to a cloud environment? Those that can stand to gain huge savings from Google's approach.

Dennis Cronin, Principal at Gilbane Mission Critical:

This entire dialog is moot because the way of the future is back to DIRECT WATER COOLED PROCESSORS. All these sites chasing the elusive "FREE" cooling will soon find out that they cannot support the next generation of technology. I suspect that there will be a lot of finger pointing when that occurs, along with even more ad hoc solutions. We need to stick to quality solutions that will support today's AND tomorrow's technology requirements.

David Ibarra, Project Director at DPR Construction:

There is tremendous pressure on large enterprise customers (social, search, etc.) to use the same fleet of servers for all of their applications. The IT architects behind the scenes are now being asked to stop being "geeks" who change hardware every 3 years, and instead to make use of what we have or improve with lower-cost systems. The recession is also amplifying this trend. A lot of the water-cooled servers and demonstrations held last year have gone silent due to cost and to standardization on hardware for the next 5 years. A lot of large DC customers understand water cooling technology and are early adopters; however, realities have driven the effort elsewhere within their organizations. Customers are pushing high densities (+300 W/sqft) using best-of-class techniques: containment, free cooling, etc. Plus, large-scale operators are understanding that the building needs to suit the servers' needs, so there is a shift in how a building is configured. Chiller-less data centers have existed since 2006 in countries such as Canada, Ireland, Germany and Norway. Data centers that are chiller-less and cooling-tower-less, with an extraordinary reduction in air-moving equipment, will be coming online in the US at the end of this year.

Nitin Bhatt, Sr. Engineer at (n)Code Solutions:

Every single data center is unique in its own setup. Adopting a technology just because it suits one geographical location may not be a wise decision. It is wiser to be "orthodox" than to lose the business. If someone can afford the outage, or the shifting of the workload to a DR site or other sites as a result of thermal events, then yes, they can look into FREE COOLING without chillers. We can save the energy used by chillers by using VFDs and room-temperature-based chiller response. It is good to have chillers as backup to the free cooling.

So what do you think? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Wednesday, July 22, 2009

LinkedIn Discussion on Power Usage Effectiveness (PUE)

Last week I posted the following discussion question in our Computer Room Design networking group on LinkedIn. I’m really impressed with the response from group members, so I’d like to share their thoughts with you here:

How can the industry address problems with the reporting of Power Usage Effectiveness (PUE) without undermining the usefulness of the metric?

In a recent post in Data Center Knowledge, Rich Miller points out that the value of Power Usage Effectiveness (PUE) as the leading 'green data center' metric "has become fuzzy due to a disconnect between companies’ desire to market their energy efficiency and the industry’s historic caution about disclosure." [Source:]

What are your thoughts on redefining PUE? Are additional refinements the answer? Or does increasing the complexity of PUE undermine the usefulness of the metric?


• Gordon Lane, Facilities Coordinator at Petro Canada, explained:
I don't see a real value in PUE.

If you leave unused servers powered on you can keep your PUE low.

Assume you have a PUE of 2: 2 MW total power consumption gives you 1 MW for servers.
If you can reduce your server consumption to 0.75 MW by turning off comatose servers, total consumption reduces to 1.75 MW and gives you a PUE of 2.33.

I know there would be some reduction in a/c power usage due to less heat output from the turned off servers but if you are using legacy a/c units with no VFD style control then you will not get a corresponding electrical consumption reduction.

• Scot Heath, Data Center Specialist, weighed in with:
PUE is difficult to measure in mixed facilities, is muddied by configurations such as the Google every-server-has-a-battery and varies widely with Tier level. A universal measurement that combines both IT capability (total Specmarks for example) and availability with respect to energy consumption would be most useful. PUE does have the advantage of being quite easily understood and for controlled comparisons (like tier level, etc.) is very useful.

• Dave Cole, Manager of Data Center Maintenance Management and Education Services at PTS, responded:
Gordon and Scot bring up very good points. I have mixed feelings about PUE. The concept is easily understood - we want to maximize the power that is actually used for IT work. The interpretation of the value is easy to understand - lower is better (or higher is better in the case of DCiE). The problem I see is that it's almost been made too simplistic. You still have to know your data center and the impact of the decisions you make in regards to design and operation. You can actually raise your PUE by virtualizing or by turning off ghost servers as Gordon pointed out. What needs to be understood is that when you lower the demand side, you should also be making corresponding changes to the supply side. At the end of the day, PUE can be valuable as long as you are also looking at what impacts the value. You need to be able to answer the question of WHY your PUE is changing.

What are your thoughts on the value of Power Usage Effectiveness (PUE) as a metric? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Monday, July 20, 2009

LinkedIn Discussion on Eliminating the Battery String

Thanks to everyone who’s participated in our Computer Room Design networking group so far! We’re off to a great start, with over 200 members joining in the first two weeks. I’d like to share highlights from one of our recent discussions…

Kevin Woods, Director of Business Development and Sales at i2i, asked:

Eliminating the Battery String? Does anyone have experience/opinion on the viability of the UPS/CPS systems? They incorporate a flywheel in between the generator and engine and in cases of power interruption, the flywheel uses kinetic energy to power the generator for up to 30 seconds while the engine is engaged.


• Mark Schwedel, business partner at EMC and advisor for Green Rack Systems, recommended taking a look at the patent for an improved UPS/CPS system, which employs a high-efficiency uninterrupted power supply function integrated with an engine-generator set that combines both short term protection against momentary power interruptions with longer term power generation.

• Gordon Lane, Facilities Coordinator at Petro Canada, shared his experience:
Not a direct comparison to gen/engine set up but I have a flywheel UPS system that has been in service for 23 years. Very reliable, change the bearings every 50000 hours - about 6 years - and we have just about completed a program of taking the MGs out for cleaning and re-insulation.

Obviously coming to end of life, 20 yrs was estimated life, but the serviceability has been phenomenal.

Certainly looking to replace with a similar system and I believe Caterpillar has a flywheel UPS solution that they integrate into their diesel offerings.

• Jason Schafer, Senior Analyst at Tier1 Research, explained in part:
My personal issue with flywheel solutions, aside from the reliability that both sides will argue, is that 30 seconds simply isn't enough time when you are talking about the criticality most datacenters need. The most common argument relates to allowing time to manually start a generator; and flywheel advocates will say "if a generator doesn't start in 30 seconds it's not very likely that it's going to start in 20 minutes" - I disagree with this. I've seen, on more than one occasion, where generator maintenance was being performed and through human error the EPO switch on the generator was mistakenly left pushed in. There's no way anyone is going to identify the problem and fix it in 30 seconds - I'd be surprised if anyone even got to the generator house in 30 seconds after a power outage. Minutes, however, are a different story.

I'm not saying that flywheels and CPSs don't have their place - I think they do, or rather will in large scale in datacenters, but we're not quite there yet. When virtualization plays a part in the redundancy and fault tolerance of a datacenter, where ride-through in the event of a power outage is more of a convenience than a necessity (a-la Google's datacenters - they can lose an entire facility and continue on for the most part), you'll see flywheels gain more traction.

What are your thoughts on the viability of the UPS/CPS systems? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.