Wednesday, December 16, 2009

The Devils in the Details - Enhanced SAN & Switching Solutions for Next Gen Data Centers

PTS is pleased to announce a new educational event, The Devils in the Details - Enhanced SAN & Switching Solutions for Next Generation Data Centers, at which we will introduce several new technology-based solutions to enhance data center optimization, consolidation, virtualization, and disaster recovery.

Prior to an upcoming New Jersey Devils versus Philadelphia Flyers hockey game, we will leverage our understanding of the synergies between facility and IT infrastructure and introduce two highly efficient and cost-effective solutions. These solutions can dramatically reduce the cost and complexity of your IT environment while increasing your ability to adapt, manage, and grow your storage and computing infrastructure. Learn about flexible, scalable solutions that will meet your business and security challenges, and better understand how IT requirements drive new needs for your data center support infrastructure.

February 10, 2010
NJ Devils vs. Philadelphia Flyers
Prudential Center, Newark
Presentation with dinner/drinks starting at 5:00 PM
Game time 7:00 PM

Join us for an informative discussion and learn about:
  • PTS Data Center Solutions' strategic data center design approach combining both IT and support infrastructure expertise to design, manage and operate a superior data center.
  • Dell EqualLogic PS Series SANs designed to cost-effectively integrate advanced data and disaster protection features directly with VMware virtual infrastructure to help provide seamless data protection and disaster recovery management.
  • Enterasys S-Series® enterprise switching and routing solutions specifically designed for high speed core and SAN deployments.

Please RSVP by 1/5/2010. Tickets are limited and available on a first-come basis.

Data Center World, Spring 2010

PTS Data Center Solutions will be presenting and exhibiting at this spring’s Data Center World Event, held in Nashville from March 7-11. Data Center World is the largest global event of its kind and has been named one of the 50 fastest growing tradeshows in the U.S. It is the leading educational conference for data center professionals.

Our team will host a roundtable discussion on Information Technology Infrastructure Library (ITIL) & ITSM metrics programs for the data center. This presentation will take a nuts-and-bolts approach to setting up an ITSM metrics program and will discuss how this process will allow IT to present data to senior management.

We’re also hosting a product information session, titled “Data Center Maintenance Management Software - Computerized Maintenance Management for the Data Center”, during which we’ll demonstrate how you can use best-in-class solutions to more effectively manage support infrastructure. The presentation will discuss Computerized Maintenance Management Systems (CMMS) and present our new Data Center Maintenance Management Software (DCMMS) Solution. This innovative software application from PTS Data Center Solutions allows the user to manage assets and parts, estimate and manage maintenance costs, track recurring problems to pinpoint those that may lead to more critical issues, and generate work orders with the details needed to properly perform preventative maintenance.

In addition, I’d like to invite you to visit us at booth #739, where you can get a first-hand look at our specially designed DCMMS solution. To learn more, please contact Amy Yencer at 201-337-3833 x128.

To register for the event, please visit the Data Center World website. See you in Nashville!

Monday, December 14, 2009

PTS Announces a Strategic Distribution Relationship with Dell Corporation

I’m excited to announce that PTS has launched a strategic distribution relationship with Dell Corporation, which includes the full breadth of Dell products targeted at the small to mid-size business segment.

As a leading data center design and turnkey solutions provider, we’ve been approached by many clients asking us to help them reduce overall data center operational costs through power efficiency analysis and improvements. The relationship with Dell allows us to provide consultative support by focusing upon key technology energy drains in the data center, namely routing, server processing, storage and security-based infrastructure products.

By partnering with Dell, we see ourselves as partnering with a best-of-breed solutions provider for our mid-market clients. Depending upon client applications, a host of solutions such as the Dell EqualLogic iSCSI storage family and PowerEdge blade and rack servers can improve power efficiencies, support growth within the data center and provide superior price / performance returns.

To learn more, please contact us today.

Tuesday, December 08, 2009

‘Lights Out’ Data Center Management

In a recent post at The Data Center Journal, titled “Save some money – work with outsiders,” Rakesh Dogra discusses the new trend of minimizing power bills through ‘lights out’ data centers and remote management. [As a side note, way back in 2006 we blogged about how “dim” data center designs are a realistic goal for most companies. You can read that post here.]

Dogra explains that the use of these tactics can lead to major cost savings. He suggests that, looking at your IT, security and facilities staff, it is unwise to cut back on security personnel but it may be prudent to use remote management to replace portions of the IT staff. Additional benefits may include:
  • A lower likelihood of accidents and security breaches, since fewer people have physical access to the computer room.
  • Faster response times, thanks to remote BIOS-level access to a data center’s servers.
  • Geographical independence.

A potential downside of this system is that a “data center will need people within its premises too to fire fight something going wrong like outages. Also, a data center manager may not find someone with the required amount of experience and expertise to fend off crisis when it happens.”

As the author suggests, it is surely a best practice to operate as close to a ‘lights out’ data center as possible.

For PTS, the real secret to realizing operational cost savings from reduced energy consumption has less to do with facility-based solutions than it does with IT. Our position is that there is far more operational cost savings potential in virtualizing servers and storage.

To prove the point, in 2010, PTS will perform a network re-design effort of our own operations and provide detailed documentation and analysis of the before and after conditions of our data center energy usage. So, stay tuned...

Friday, November 20, 2009

Data Center Education Series Expands to More Dates, Cities

I'm pleased to announce the expansion of our Data Center Education Series to include more dates and cities.

If you're not already familiar with the program, our Data Center Education Series provides students with comprehensive, vendor-neutral, module based training led by the data center design experts from PTS. The training series discusses the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge IT professionals need to understand, operate, manage, and improve their data centers – ultimately reducing operating costs and improving service delivery to users.

For instance, the Data Center Infrastructure Management course will show attendees:
  • Power and cooling infrastructure in the data center and how hardware and configuration impact energy efficiency and availability
  • Methods to improve data center energy efficiency
  • Management tools available to help you optimize data center performance and availability
  • Practical steps to implement ITIL
  • How to measure the IT Service Management metrics that really matter
  • How to monitor your data center to optimize performance and availability
  • What impacts data center availability and how you can improve it
The course schedule for the first half of 2010 is as follows:
  • Jan 17 - 19, 2010 in San Francisco, CA
  • Jan 25 - 27, 2010 in Washington, DC
  • Feb 8 - 10, 2010 in Chicago (Schaumburg), IL
  • Feb 22 - 24, 2010 in Dallas, TX
  • Mar 15 - 17, 2010 in Ottawa, ON
  • Mar 22 - 24, 2010 in San Jose, CA
  • Apr 19 - 21, 2010 in Washington, DC
  • Apr 26 - 28, 2010 in New York, NY
  • May 3 - 5, 2010 in Chicago (Schaumburg), IL
  • May 10 - 12, 2010 in Atlanta, GA
  • May 17 - 19, 2010 in Dallas, TX
To learn more about the Data Center Infrastructure Management course and to register, visit our Data Center Education website.

Related courses, taught by experts in each field, are also available and include:
  • How to Get Started with ITIL (Information Technology Infrastructure Library)
  • ITIL Service Capability: Planning, Protection and Optimization
  • ITIL Service Capability: Service Offerings and Agreements
  • ITIL Service Catalog
  • ITIL Service Lifecycle: Service Strategy
  • ITIL v3 Foundation
  • Understanding Networking Fundamentals
  • TCP/IP Networking
  • Telecommunications Fundamentals
  • Voice over IP Foundations
For more information regarding each of the courses including costs and the dates and cities where they are available, visit our Data Center Education website.

Friday, November 06, 2009

PTS Data Center Solutions Showcase

PTS' portfolio of solutions to design, build, and manage the data center has never been stronger. This post showcases two industry-leading solutions that you may want to consider for your own data center.

Energy Monitoring Systems

Would device-level power consumption monitoring help you manage costs more effectively?

Working in conjunction with Packet Power, PTS is pleased to announce a cost-effective, per-device energy monitoring system that is easy to deploy and highly accurate. The system provides device-level monitoring and trending without having to change out power supplies or PDUs.

Features include:
  • All billing-quality power monitoring hardware is built into standard equipment power supply cables.
  • All standard cable connector configurations (C13/C14, C19/C20, etc.) as well as voltages and current loads are available.
  • Data collection network automatically supports thousands of devices in a single facility, is configuration-free, entirely wireless, secure and operates independently of any Wi-Fi or other networking infrastructure.
  • All information gathered by our system and all advanced monitoring, billing and management functions are accessible via the web & e-mail.
  • All available without any additional hardware or software.
  • All information generated by the system can be integrated with your existing operations management and billing systems.
Learn More...

Air Curtains - A Green Alternative

Reduce data center cooling costs by directing cold air where it is needed most - through the computer racks! Air Curtains and strip doors separate cold air and warm air aisles, maximizing the dynamics of air flow to cool your data center. A system can pay for itself in months!

  • Save energy on both air conditioning and fan systems - 15% and 67% respectively (according to a study by the Lawrence Berkeley National Laboratory)
  • The Air Curtain product line includes transparent curtains, strip doors, panels and patented hardware; create a solution specific to your needs.
  • Specially formulated vinyls are low-outgassing and anti-static while meeting ASTM and NFPA fire retardancy requirements.
  • Hardware is also designed so curtains fall away in the case of fire, allowing fire sprinklers full operating range.
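The "pays for itself in months" claim above can be sanity-checked with a back-of-the-envelope payback calculation using the quoted LBNL savings rates (15% on air conditioning, 67% on fan energy). In the sketch below, the baseline loads, electricity rate, and installed curtain cost are all hypothetical inputs for illustration, not PTS or LBNL figures:

```python
# Rough payback estimate for an aisle-containment curtain system.
# Savings rates (15% AC, 67% fan) come from the LBNL study cited above;
# every other number here is an assumed example input.

def annual_cost(kw: float, rate_per_kwh: float, hours: float = 8760.0) -> float:
    """Annual electricity cost for a constant load, in dollars."""
    return kw * hours * rate_per_kwh

ac_kw = 100.0          # assumed baseline air-conditioning load
fan_kw = 20.0          # assumed baseline fan load
rate = 0.12            # assumed electricity rate, $/kWh
curtain_cost = 15000.0 # assumed installed cost of the curtain system

annual_savings = (annual_cost(ac_kw, rate) * 0.15    # 15% AC savings
                  + annual_cost(fan_kw, rate) * 0.67)  # 67% fan savings
payback_months = curtain_cost / (annual_savings / 12.0)

print(f"Annual savings: ${annual_savings:,.0f}")   # ~$29,854
print(f"Payback: {payback_months:.1f} months")     # ~6.0 months
```

With these example inputs the system pays for itself in roughly six months, consistent with the claim; your own numbers will of course vary with load and utility rates.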
Learn More...

And remember, PTS typically designs these solutions and performs Computational Fluid Dynamics (CFD) modeling prior to deployment to guarantee the results, savings & performance. To learn more, please contact us today.

Wednesday, November 04, 2009

Introducing PTS' Information Technology Solutions Group

For years our team has provided exceptional service to analyze, survey, design, plan, commission and manage Data Centers for our clients. We are now pleased to leverage our expertise in All Things Data Center to launch an exciting new division, PTS Information Technology Solutions Group (ITSG).

ITSG provides information technology based consulting, design, implementation/integration, and ongoing support services as well as IT infrastructure solutions to companies nationwide. ITSG extends PTS' data center expertise beyond facility planning, design, engineering, construction, and maintenance to include service and solutions pertaining to:
  • LAN/WAN Networking
  • Information/Network Security
  • Servers & Systems
  • Virtualization Technologies
  • Enterprise Storage
  • Unified Communications
  • Software
  • Application Development

ITSG's services and solutions are tailored specifically to our clients' project needs, including:
  • Technology Roadmaps
  • Data Center Relocation
  • Consolidation
  • Technology Refresh

ITSG follows our proven project delivery process.

PTS' goal is to provide our clients with 100% turnkey, people, process, and technology solutions from data center facility to IT operations.

ITSG will be led by Rich Horowitz, an industry veteran who has been involved in all facets of the technology industry for more than 20 years. Rich is actively involved in business development, operations, channel partner development, mergers & acquisitions, and services delivery, and has been involved in approximately $700 million in technology hardware sales, software sales, and technical services engagements. He will be responsible for establishing and strengthening the PTS IT Solutions Group brand and for working with our clients to understand their needs and how we can provide value to them.

To learn more, please contact us today.

Monday, November 02, 2009

Does Intel's Active Management Technology (AMT) provide KVM and console access, eliminating the need for external KVMs or console servers?

There has been a lot of talk in the industry about whether Intel's new onboard AMT could replace service processors such as iLO, DRAC, RSA, and ILOM.


According to the blog post, the local user has to allow the remote user in, so I’m not sure this is a valid KVM or iLO replacement so much as a replacement for desktop tools like pcAnywhere and GoToMyPC.

If Intel does have a strategy to lead server remote access and control with AMT, I don't believe it will work. First of all, I'd like to point out that AMT is an Intel product, so it isn't an open standard for a management console. What about those who are buying AMD Opteron processors and/or Sun UltraSPARC?

Secondly, an open standard for server management has already been underway since 1998 with IPMI, and I think we need to look at what has transpired with IPMI to see what support, if any, will be given to AMT at the server level. IPMI was originally proposed in 1998 and driven by market leaders Intel, Dell, HP, and NEC. Since then, IPMI has been adopted by more than 150 other companies, including IBM, Sun, and every major server platform vendor, and is now on its third major release. A significant percentage of rack-optimized servers and most blade computing platforms now include some form of built-in service processor technology that can work with IPMI.

Obviously, IPMI data from across the enterprise can only be useful if management teams can view it from a common console. Otherwise, it would offer no advantages over a fragmented, vendor-specific management architecture. Thus, to take full advantage of IPMI, management teams need a solution that 1) delivers aggregated IPMI data to a single application, and 2) supports the IPMI implementations of different vendors.

This second point is critical. While most server vendors include the IPMI protocol in their platforms, they often hide it behind proprietary software/firmware extensions and/or bundled management solutions. An effective server management solution must be able to handle these variations in IPMI implementation in order to provide a unified view into the computing environment. My point is that if AMT is to be successful like IPMI, the server OEMs are going to build their own management tools around it to differentiate themselves. There will also be third-party vendors that build central management tools to centralize access to the different server OEMs' tools that leverage AMT, just as there were for IPMI. However, I'm not sure I see all of this happening for AMT, because it is proprietary to Intel. IPMI is already included on most systems for these system management and diagnostic purposes, and the server OEMs have invested heavily in tools like iLO, DRAC, RSA, and ILOM to take advantage of the IPMI chipset. Unlike AMT, IPMI is independent of the CPU, and thus independent of a CPU chip failure, and can be run on most systems out of band on a separate NIC. Although a few years old, this whitepaper is a good overview of IPMI's development: $FILE/IPMI+WP_5+Reasons+to+Cap_0406.pdf
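To illustrate the "common console" requirement in code, here is a minimal Python sketch that normalizes sensor records from several hosts into one aggregated view. The record format mimics typical `ipmitool sdr` output ("name | reading | status"), but real output varies by vendor and firmware, so treat both the format and the hostnames below as assumptions:

```python
# Sketch of aggregating IPMI sensor data from multiple hosts into a
# single table. Input lines follow an assumed "name | reading | status"
# layout similar to `ipmitool sdr`; real vendor output differs.

def parse_sdr_line(line: str):
    """Parse one 'name | reading | status' record into a dict, or None."""
    parts = [p.strip() for p in line.split("|")]
    if len(parts) != 3:
        return None
    name, reading, status = parts
    return {"sensor": name, "reading": reading, "status": status}

def aggregate(host_outputs: dict) -> list:
    """Flatten per-host sensor records into one list for a unified view."""
    rows = []
    for host, text in host_outputs.items():
        for line in text.strip().splitlines():
            rec = parse_sdr_line(line)
            if rec:
                rec["host"] = host
                rows.append(rec)
    return rows

# Hypothetical hosts and sample readings (assumed format):
outputs = {
    "web01": "CPU Temp | 45 degrees C | ok\nFan1 | 5400 RPM | ok",
    "db01":  "CPU Temp | 61 degrees C | nc",
}

for row in aggregate(outputs):
    print(f"{row['host']:6} {row['sensor']:10} {row['reading']:14} {row['status']}")
```

A real solution would also speak RMCP+/IPMI over the LAN and cope with each OEM's firmware quirks; the point of the sketch is simply that the value comes from the single, vendor-neutral view.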

Monday, October 19, 2009

The Devils in the Details - Data Center Management Event

Managing a data center is tough. With all its complexity, just keeping track of your assets can be a full time job, not to mention finding opportunities to run the data center more efficiently.

To help you do your job more efficiently, PTS Data Center Solutions and Raritan are teaming up to host a Data Center Education seminar on November 4th, starting at 5pm, at the Prudential Center in Newark. And, since all work and no play makes for a dull evening, after the seminar we’ll head to a private box at the rink for dinner, drinks, and an evening of fun watching the NJ Devils play the Washington Capitals.

I’ll kick off the night with a presentation on leading edge solutions that are available to improve data center availability and management. Khaled Nassoura, General Manager of the Green Data Center Initiative at Raritan, will also give a presentation on how to optimize data center operations with dcTrack™.

We’ll cover the latest trends in data center management, including new approaches to asset management, tracking and maintenance. Plan to learn about DC Infrastructure Management (DCIM) and DC Monitoring Systems (DCMS) which offer you broad and deep visibility into your operations in real time, as well as allow you to plan for growth and change by optimizing your current operations, assets and infrastructure.

Please RSVP by 10/23/2009. Tickets are limited and available on a first-come basis. To learn more, please visit the Data Center Management event page or contact Amy Yencer at 201-337-3833 x128. See you at the rink!

Wednesday, September 30, 2009

New York Jets Power Camp 2009

Thank you to everyone who joined PTS Data Center Solutions and the New York Jets last night at Power Camp 2009, hosted at the new Jets Training Facility in Florham Park, NJ.

We kicked off the training event with the Power Players Buffet … after all, if you want to be a pro you have to eat like a pro. There were about 80 people in attendance and it was great getting the opportunity to talk with everyone.

Together with the folks from APC, Avocent and Packet Power, we tackled a range of data center power issues during our Power Drills, including techniques for effective management, monitoring, availability and control.

Mike Petrino, vice president of PTS, gave the crowd a tour of the data center we designed for the NY Jets Training Facility:

All in all, the Power Camp training event was a huge success. Highlights for me included our field goal kicking contest, hanging out with NY Jets legend Bruce Harper and coaching my junior football team, the Franklin Lakes War Eagles, during a scrimmage on the Jets practice field under the lights.

Talking with Bruce Harper, the all-time kick returner in New York Jets history, at Power Camp:

Coaching the Franklin Lakes War Eagles on the Jets practice field:

Field goal kicking contest for attendees of PTS' Power Camp:

I hope everyone who attended enjoyed the event as much as I did. If you want to see more photos from this year’s Power Camp, please visit our Facebook Page.

Sunday, September 27, 2009

Inflection point: Build for Higher Density or Plan for Efficient IT?

Over the last decade, the focus of the data center industry has been to plan and renovate feverishly to support higher densities. That is not too much of a surprise, because there was actually an uptick in the pace of Moore's Law over the last decade, as processing power, processing density, and power consumption per rack unit all rose faster than the industry had ever experienced.

Over the last few years, the server manufacturers have started to pay attention to power consumption, as many of their clients couldn't deploy the new technology, or had to wait until renovations or new facilities became available before upgrading to newer servers that consumed more power in a smaller footprint. You are starting to see some products on the market that reverse the decade-long trend and use less power. These range from innovations in operating systems that fine-tune power usage, as shown in a recent article by IBM, to Intel's new Xeon 5500 series processors, which deliver up to 2.25x better performance and up to 3.5x improved system bandwidth in the same power envelope as the Xeon 5400 series, along with up to 50% lower idle power consumption during low-utilization periods.

What is this forward thinking leading to? I believe we are going to cross an inflection point in the next couple of years, where the high-density environments we have built or are constructing will outpace the power consumption demand of the new processors and servers we will need to deploy. It is difficult to say exactly when the big power-saving breakthrough will happen at the chip level, but I think we all know it will happen. You don't want to be the last guy who built a 1 MW facility at 300 watts per square foot that now only needs 500 kW at 150 watts per square foot. We often consider modular solutions that can scale up our density and capacity, but keep in mind that someday soon we may need to consume less power and cooling, so we should make sure that our design is efficient at 50% or even 30% of its rated capacity as well. That is not just because of the inflection point where server power consumption will drop below data center power demand, which Julius Neudorfer describes in the article below, but because our business requirements can also change such that we won't need as much processing power to run our business.
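To make the partial-load concern concrete, here is a small Python sketch using the hypothetical numbers above (a 1 MW design later running at only 500 kW). The fixed/variable overhead split is an illustrative assumption, not a measurement, but it shows why a facility that looks efficient at design load can look much worse at half load:

```python
# Illustrative model of facility efficiency at partial load.
# Design and actual loads come from the hypothetical scenario in the
# post; the overhead coefficients below are assumed for illustration.

design_kw = 1000.0   # 1 MW design capacity
actual_kw = 500.0    # later demand after servers get more efficient

fixed_overhead_kw = 200.0       # assumed: transformers, lighting, base fans
variable_per_it_kw = 0.3        # assumed: cooling power per IT watt

def pue(it_kw: float) -> float:
    """Total facility power divided by IT power (simple overhead model)."""
    overhead = fixed_overhead_kw + variable_per_it_kw * it_kw
    return (it_kw + overhead) / it_kw

stranded = 1.0 - actual_kw / design_kw

print(f"Load fraction:     {actual_kw / design_kw:.0%}")  # 50%
print(f"Stranded capacity: {stranded:.0%}")               # 50%
print(f"PUE at design load: {pue(design_kw):.2f}")        # 1.50
print(f"PUE at half load:   {pue(actual_kw):.2f}")        # 1.70
```

Because the fixed overhead doesn't shrink with the IT load, the same facility that achieves a 1.50 PUE at design load drifts to 1.70 at half load in this model, which is exactly why a design should be evaluated at 30-50% of capacity, not just at 100%.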

Friday, September 18, 2009

PTS & The New York Jets Invite You to Power Camp '09

PTS, in collaboration with the New York Jets, is excited to invite you to Power Camp ’09. Tackle power issues before they result in a defensive meltdown and make sure that your Data Center is powered up for many more winning seasons!

The three hour Power Camp includes a buffet dinner and 3 intense drills that teach the latest techniques and solutions for effective power monitoring and control, followed by a tour of the state-of-the-art data center PTS engineered and built for the New York Jets. Be sure to stay for the field goal kicking contest and to meet famous NY Jet, Bruce Harper!

For more information and to view the agenda, please visit our website.

If you’d like to attend Power Camp ’09, please RSVP by 9/23/2009 to Amy Yencer, 201-337-3833 x128.

Wednesday, September 02, 2009

Role of the CIO in Business Continuity, Disaster Recovery

Ralph DeFrangesco at ITBusinessEdge posted the following discussion question in their forums recently.
Corporations often confuse business continuity and disaster recovery. They also tend to put the CIO in charge of both. Should the CIO be the point person for both BC and DR? If so, why? If not why and who should it be?
It resulted in an interesting debate on the role of the CIO, so I reposted it on LinkedIn so that the members of our Computer Room Design Group could weigh in. Here are some of the insights they had to share...

Ken Cameron, IT Infrastructure & Outsourcing Executive:
The CIO should own Disaster Recovery. The business side (someone in Risk Management, Corporate Security, etc.) should own Business Continuity. The IT group should be represented on the Business Continuity council. IT plays a major role in Business Continuity, but does NOT own it.

IF the CIO gets Business Continuity, it needs to be made clear that his BCP responsibility is NOT part of his IT responsibility.

Christopher Furey, Managing Partner at Imaginamics:
This is one of those issues where it's a bit like asking the fox to watch the hen house. Only very small or inexperienced management teams put IT in charge of BC. The scope of the risk analysis is usually way beyond the skills of an IT Director or CIO, and even when it's not, business risk oversight is critical.

Ken is spot on. The CIO must be in charge of DR and IT Systems Continuity but not BC. Any CIO who wants to keep their job will work in tandem with Risk Management and key stakeholders on the business side to ensure critical business functions and the systems that support them are well considered.

BC is in the realm of Ops and is best handled with strong leadership (or at least advocacy) from the CFO, COO or GM - or the partners and owners in smaller firms. Management inadequately funds and supports BC unless it understands the risk and process in total beyond simply recovering IT systems or data.

Though it's often mentioned in the same breath with DR, BC is not an IT role, but ensuring the operational assurance of the key IT systems is.

K.M. Sreekumar, Consultant & Project engineer at Schnabel DC Consultants India Pvt Ltd:
IT is only an enabler to the business, and business continuity, though very critical, is not the business itself. The business overall is, and should be, the responsibility of the CEO, so we are back to square one: the CIO and CTO will only aid the BC plan and are fully responsible for the IT and technology part. For example, the CIO should not be responsible even for analysing the business impact of an IT blackout. Secondly, threats to the business vary in nature, like pandemics, supplier lockouts, and financial instability, and very few are IT in nature.

Another perspective would be to treat IT as a business and CIO be responsible for Business continuity of IT. Similar to what Christopher Furey wrote.

What are your thoughts on the role of the CIO and IT in relation to business continuity? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Thursday, August 27, 2009

A Closer Look at PTS’ Data Center Education Series

Thanks to everyone who’s expressed interest in participating in our upcoming Data Center Education Series! The response has been very positive and we’re looking forward to the first session which will be held at our headquarters in Franklin Lakes, NJ from September 15 to 17, 2009.

A few of you have emailed me to ask for more information on what will be covered during the training sessions, so I’m posting the course descriptions here for your convenience:

Data Center Planning: Establishing a Floor Plan (Time: 2-3 hours) - A floor plan strongly affects the power density capability and electrical efficiency of a data center, yet many floor plans are established through incremental deployment without a central plan. Once a poor floor plan has been deployed, it is often difficult or impossible to recover the resulting loss of performance. This course provides structured floor plan guidelines for defining room layouts and for establishing IT equipment layouts within existing rooms.

Fire Protection Methods in the Data Center (Time: 1 hour) - Fire in any area of a business can result in millions of dollars of losses and even business failure, but fire in the data center represents one of the greatest risks to any company or institution. This is a foundational course which will introduce the basic theory, prevention, detection and suppression of fire specific to data centers. At the completion of this course you will have a better understanding of the safeguarding methods that are used to protect a data center's hottest commodity: information.

Fundamentals of Cooling (Time: 3-4 hours) - In every data center excess heat has the potential to create downtime. In addition, the performance and lifespan of IT equipment is directly related to the efficiency of cooling equipment. If you’re involved with the operation of computing equipment it's critical that you understand the importance of cooling in the data center environment. This foundational course explains the fundamentals of air conditioning systems, covering such topics as the refrigeration cycle, ideal gas law, condensation, convection and radiation, heat generation and transfer, and precision vs. comfort cooling.

Fundamentals of Power (Time: 3-4 hours) - Before you can understand the power needs of the data center, you must first understand the basic concepts and terms related to power measurement, electric power forms, and power generation. This elementary-level course explains these power elements and some of today's power problems.

Fundamentals of Physical Security (Time: 1 hour) - Today's Data Centers must consider not only network security, but also physical security. This course defines what physical security means for mission critical facilities and identifies what assets it needs to protect. Also discussed are the different means to control facility access, common physical security methods, security devices, and budget considerations related to physical security.

Cabling Strategies for the Data Center (Time: 2 hours) - From a cost perspective, building and operating a data center represents a significant piece of any Information Technology (IT) budget. The key to the success of any data center is the proper design and implementation of core critical infrastructure components. Cabling infrastructure, in particular, is an important area to consider when designing and managing any data center. The cabling infrastructure encompasses all data cables that are part of the data center, as well as all of the power cables necessary to ensure power to all of the loads. It is important to note that cable trays and cable management devices are critical to the support of IT infrastructure as they help to reduce the likelihood of downtime due to human error and overheating. This course will address the basics of cabling infrastructure and will discuss cabling installation practices, cable management strategies and cable maintenance practices. We will take an in-depth look at both data cabling and power cabling.

Data Center Management (Time: 2 hours) - There are a number of management tools currently available to help manage the data center from a number of perspectives - network, availability, asset management, infrastructure monitoring and control. Which of these tools are applicable to your data center? Which tools will best meet your needs?

Data Center Maintenance (Time: 2 hours) - Whether you own, rent or co-locate, whether your data center is 1,000 square feet or 100,000 square feet, whether you are dealing with legacy equipment or the latest high density configurations, you face the same issues with managing the maintenance of your equipment. Data center maintenance is essential to properly maintain and extend the life of your valuable data center infrastructure and prevent unplanned downtime, yet it is often relegated to spreadsheets and paper-based systems. All too often, critical maintenance is overlooked because someone didn’t remember to schedule it or have the right spare parts, tools or personnel available to properly perform the tasks required. This course will discuss the growing use of computerized maintenance management systems (CMMS), including those designed specifically for the data center, and how the use of these systems can improve maintenance management in your data center.

Data Center Energy Efficiency (Time: 2 hours) - Is the concept of "greening" the data center hype or reality? This course will discuss practical and effective methods to make your data center more efficient to yield immediate cost savings.

Our instructors will tie in case studies and real world situations to provide concrete examples of how to apply the information learned in the course. Time each day will be spent on open discussion, allowing you to share industry experience with your peers.

If you haven’t signed up already, please visit our website to reserve your seat. Priced at only $999 per student, the vendor-neutral, module-based training includes all course materials in addition to a continental breakfast and lunch each day. SPECIAL OFFER: If you attend with other colleagues from work, you'll all receive a 10% discount.

Our goal is to create a training series that presents the topics of most interest and value to the student. That being said, we welcome suggestions for how we can continue to improve the series. Is a three (3) day training program a good fit for your schedule? Is there a course you'd like to see added? What type of lunch should we serve? Feel free to post a comment to tell us what you think.

Tuesday, August 18, 2009

Data Center Education Series Sept. Training - IMPORTANT UPDATES

The Data Center Education Series training event on September 15-17 has been moved from NYC to the PTS Headquarters in Franklin Lakes, New Jersey.

The event cost has also been changed and is now just $999 per attendee.

For more details and the full agenda, visit our Data Center Education Series page. Hope to see you there!

Expert Data Center Education & Training In NYC

Just a quick reminder for all our readers: PTS' Data Center Education Series is coming to midtown NYC from September 15-17.

UPDATE 08/20/2009: The PTS Data Center Education Series for September 15-17 has been relocated to our headquarters in Franklin Lakes, NJ.

The three (3) day class provides students with comprehensive, vendor-neutral, module based training that covers the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge you need to understand, operate, manage, and improve your data center. The training includes all course materials in addition to a continental breakfast and lunch each day. (Best of all, if you attend with other colleagues from work, you all receive a 10% discount.)

To view the agenda and reserve your seat, please visit our website.

While I’m at it, I’d also like to take a moment to thank everyone who’s joined our Computer Room Design Group on LinkedIn. Your support and participation have helped the group get off to a great start, with over 300 data center and IT pros joining in the first month alone!

Here’s a quick snapshot of some of the recent discussions that have been posted:

  • Will the cloud kill the data center?
  • Hot & Cold Aisle Containment. How do you implement it when you have different cabs, heights and gaps?
  • Who really cares most about Enterprise Data Center Efficiency? CIO? CFO? IT?
  • TIA standard TIA-942: Tier 2 takes 3-6 months to implement, Tier 3 takes 15-20 months to implement. Is this because of record keeping to demonstrate uptime?

Everyone is welcome to join! It’s a great opportunity to share news, ask questions, offer advice, and connect with your peers. Check it out on LinkedIn.

Wednesday, July 29, 2009

Google Cools Data Center Without Chillers; Data Center Pros Weigh-in

Google’s chiller-less data center in Belgium has received a lot of buzz. The facility relies upon free air cooling to keep its servers cool and will shift the computing load to other data centers when the weather gets too hot.

It's an approach that stands to greatly improve energy efficiency. However, as e-shelter explained to Techworld, there are some risks. For instance, it's possible that airborne particulates could cause havoc with hard disk drives and dampness from heavy humidity could cause electrical problems. To see what other data center professionals think of this cooling strategy, I posed the following question to the Computer Room Design Group on LinkedIn:

Is Google's Chiller-less Data Center the wave of the future, or is this approach too risky for most businesses to accept?

Here’s what some of our group members had to say…

Mark Schwedel, Senior Project Manager at Commonwealth of Massachusetts:

Please note that Google is doing many things that are not available in current data centers. They do not have a UPS; they are doing battery backup on each server with a 12-volt battery. So will this be the future? Only when the rest of the world can deliver the same capabilities as Google.

Sean Conner, Datacenter Professional Services Consultant:

Google's design is well suited for an expansion of their cloud environment. However, it's clear that the facility in question does not run at the same level of criticality as most dedicated or hardened sites. This works well in an environment that can tolerate minor equipment loss and failure.

However, most dedicated sites host applications and data that would suffer, should similar equipment loss occur. So, the two approaches cannot be truly compared. It's like trying to compare the heart to the left hand. Both are useful. But if the left hand fails, you probably don't die.

Perhaps a larger question to ask is: What applications, data, or entire enterprises could migrate to a cloud environment? Those that can stand to gain huge savings from Google's approach.

Dennis Cronin, Principal at Gilbane Mission Critical:

This entire dialog is moot because the way of the future is back to DIRECT WATER COOLED PROCESSORS. All these sites chasing the elusive "FREE" cooling will soon find out that they cannot support the next generation of technology. I suspect that there will be a lot of finger pointing when that occurs, with even more ad hoc solutions. We need to stick to quality solutions that will support today's AND tomorrow's technology requirements.

David Ibarra, Project Director at DPR Construction:

There is tremendous pressure on large enterprise customers (social, search, etc.) to use the same fleet of servers for all of their applications. The IT architects behind the scenes are now being asked to stop being "geeks" changing hardware every 3 years and instead make use of what we have, or improve with systems that are lower cost. The recession is also amplifying this trend. A lot of water cooled servers and demonstrations held last year have gone silent due to cost and also standardization of hardware for the next 5 years. A lot of large DC customers understand water cooling technology and are early adopters; however, realities have driven the effort elsewhere within their organizations. Customers are pushing high densities (+300 W/sq ft) using best-of-class techniques: containment, free cooling, etc. Plus, large scale operators are understanding that the building needs to suit the server needs, so there is a shift in how a building is configured. Chiller-less data centers have existed since 2006 in countries such as Canada, Ireland, Germany, and Norway. Data centers will be coming online at the end of this year in the US that are chiller-less and cooling-tower-less, with an extraordinary reduction of air moving equipment.

Nitin Bhatt, Sr. Engineer at (n)Code Solutions:

Every single Data Center is unique in its own set-up. Adopting a technology suited to one geographical location may not be a wise decision elsewhere. It is wiser to be "orthodox" than to lose the business. If someone can afford the outage / shifting of the work load to a DR site or to other sites as a result of thermal events, yes, they can look into FREE COOLING w/o chillers. We can also save energy by equipping chillers with VFDs and room-temperature-based control. It is good to have chillers as backup to the Free Cooling.

So what do you think? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Wednesday, July 22, 2009

LinkedIn Discussion on Power Usage Effectiveness (PUE)

Last week I posted the following discussion question in our Computer Room Design networking group on LinkedIn. I’m really impressed with the response from group members, so I’d like to share their thoughts with you here:

How can the industry address problems with the reporting of Power Usage Effectiveness (PUE) without undermining the usefulness of the metric?

In a recent post in Data Center Knowledge, Rich Miller points out that the value of Power Usage Effectiveness (PUE) as the leading 'green data center' metric "has become fuzzy due to a disconnect between companies’ desire to market their energy efficiency and the industry’s historic caution about disclosure."

What are your thoughts on redefining PUE? Are additional refinements the answer? Or does increasing the complexity of PUE undermine the usefulness of the metric?


• Gordon Lane, Facilities Coordinator at Petro Canada, explained:
I don't see a real value in PUE.

If you leave unused servers powered on, you can keep your PUE low.

Assume you have a PUE of 2: 2 MW total power consumption gives you 1 MW for servers. If you reduce server consumption to 0.75 MW by turning off comatose servers, total consumption drops to 1.75 MW and gives you a PUE of 2.33.

I know there would be some reduction in a/c power usage due to less heat output from the turned-off servers, but if you are using legacy a/c units with no VFD-style control, you will not get a corresponding electrical consumption reduction.

• Scot Heath, Data Center Specialist, weighed in with:
PUE is difficult to measure in mixed facilities, is muddied by configurations such as the Google every-server-has-a-battery and varies widely with Tier level. A universal measurement that combines both IT capability (total Specmarks for example) and availability with respect to energy consumption would be most useful. PUE does have the advantage of being quite easily understood and for controlled comparisons (like tier level, etc.) is very useful.

• Dave Cole, Manager of Data Center Maintenance Management and Education Services at PTS, responded:
Gordon and Scot bring up very good points. I have mixed feelings about PUE. The concept is easily understood - we want to maximize the power that is actually used for IT work. The interpretation of the value is easy to understand - lower is better (or higher is better in the case of DCiE). The problem I see is that it's almost been made too simplistic. You still have to know your data center and the impact of the decisions you make in regards to design and operation. You can actually raise your PUE by virtualizing or by turning off ghost servers as Gordon pointed out. What needs to be understood is that when you lower the demand side, you should also be making corresponding changes to the supply side. At the end of the day, PUE can be valuable as long as you are also looking at what impacts the value. You need to be able to answer the question of WHY your PUE is changing.
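Gordon's arithmetic is easy to verify. As a quick sketch (names and figures here are illustrative, not from any particular tool), the following applies the standard definitions — PUE = total facility power ÷ IT equipment power, and DCiE = 1 ÷ PUE — under his assumption that facility overhead stays fixed at 1 MW while the IT load drops:

```python
def pue(total_mw, it_mw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_mw / it_mw

overhead_mw = 1.0                  # cooling, UPS losses, lighting (assumed constant)
it_before, it_after = 1.0, 0.75    # IT load in MW, before/after shutting off comatose servers

before = pue(overhead_mw + it_before, it_before)  # 2.0 / 1.0  -> 2.0
after = pue(overhead_mw + it_after, it_after)     # 1.75 / 0.75 -> ~2.33

print(f"PUE before: {before:.2f}, after: {after:.2f}")
print(f"DCiE before: {1 / before:.0%}, after: {1 / after:.0%}")
```

The numbers bear out Dave's point: total consumption fell from 2 MW to 1.75 MW, yet PUE got worse, because the supply side wasn't adjusted along with the demand side.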

What are your thoughts on the value of Power Usage Effectiveness (PUE) as a metric? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Monday, July 20, 2009

LinkedIn Discussion on Eliminating the Battery String

Thanks to everyone who’s participated in our Computer Room Design networking group on LinkedIn so far! We’re off to a great start, with more than 200 members joining in the first two weeks. I’d like to share highlights from one of our recent discussions…

Kevin Woods, Director of Business Development and Sales at i2i, asked:

Eliminating the Battery String? Does anyone have experience/opinion on the viability of UPS/CPS systems? They incorporate a flywheel between the engine and the generator; in the case of a power interruption, the flywheel uses its stored kinetic energy to power the generator for up to 30 seconds while the engine is engaged.


• Mark Schwedel, business partner at EMC and advisor for Green Rack Systems, recommended taking a look at the patent for an improved UPS/CPS system, which employs a high-efficiency uninterrupted power supply function integrated with an engine-generator set that combines both short term protection against momentary power interruptions with longer term power generation.

• Gordon Lane, Facilities Coordinator at Petro Canada, shared his experience:
Not a direct comparison to a gen/engine setup, but I have a flywheel UPS system that has been in service for 23 years. Very reliable; we change the bearings every 50,000 hours (about 6 years), and we have just about completed a program of taking the MGs out for cleaning and re-insulation.

It is obviously coming to end of life (20 years was the estimated life), but the serviceability has been phenomenal.

Certainly looking to replace with a similar system and I believe Caterpillar has a flywheel UPS solution that they integrate into their diesel offerings.

• Jason Schafer, Senior Analyst at Tier1 Research, explained in part:
My personal issue with flywheel solutions, aside from the reliability that both sides will argue, is that 30 seconds simply isn't enough time when you are talking about the criticality most datacenters need. The most common argument relates to allowing time to manually start a generator; and flywheel advocates will say "if a generator doesn't start in 30 seconds it's not very likely that it's going to start in 20 minutes" - I disagree with this. I've seen, on more than one occasion, where generator maintenance was being performed and through human error the EPO switch on the generator was mistakenly left pushed in. There's no way anyone is going to identify the problem and fix it in 30 seconds - I'd be surprised if anyone even got to the generator house in 30 seconds after a power outage. Minutes, however, are a different story.

I'm not saying that flywheels and CPSs don't have their place - I think they do, or rather will at large scale in datacenters, but we're not quite there yet. When virtualization plays a part in the redundancy and fault tolerance of a datacenter, where ride-through in the event of a power outage is more of a convenience than a necessity (à la Google's datacenters - they can lose an entire facility and continue on for the most part), you'll see flywheels gain more traction.

What are your thoughts on the viability of the UPS/CPS systems? Please share your experience by posting a comment here, or by continuing the discussion in the Computer Room Design Group on LinkedIn.

Thursday, July 16, 2009

Introducing PTS’ Data Center Education Series

How extensive is your knowledge about all aspects of your data center? With our newly launched Data Center Education Series, you will never look at your IT and support infrastructure the same way again.

PTS’ Data Center Education Series will help you better assess problems in your data center by providing you with substantive knowledge that you can take back to your data center to improve operations, availability, and efficiency - ultimately reducing operating cost and improving service delivery to your users.

The education series provides students with comprehensive, vendor-neutral, module based training led by the data center design experts from PTS. We discuss the most pertinent topics in the data center industry, tying in case studies and real world situations to provide the knowledge you need to understand, operate, manage, and improve your data center.

The Standard Training Series is a three (3) day class held multiple times per year at major cities across the United States, Canada, and Europe. Our next session will take place in Midtown NYC from September 15-17th -- visit our site to view the agenda. Can’t make it to NYC? We'll also be coming to Chicago (October 21-23) and Dallas (December 7-9). I encourage you to reserve your seat today, as space is limited.

The education series will cover the following topics:

• Fundamentals of Data Center Cooling
• Fundamentals of Data Center Management
• Fundamentals of Physical Security
• Fundamentals of Fire Protection
• Fundamentals of Data Center Power
• Fundamentals of Data Center Maintenance
• Fundamentals of Designing a Floor Plan
• Fundamentals of Data Center Cabling
• Fundamentals of Energy Efficiency

Priced at only $1,795 per student, the training includes all course materials in addition to a continental breakfast and lunch each day. Additionally, if you attend with other colleagues from work, you'll all receive a 10% discount. You'll quickly realize an ROI from this invaluable knowledge, delivered straight from data center experts in this in-depth, intimate training series.

Data Center Education Series – Customized for your needs!

We also offer education programs customized to your IT team’s needs. If you have a large group and need training, we can come to you and present those topics of most interest to you! Choose your desired location (typically your own facility). Choose the topics you want to see, including any or all of the available topics from the standard 3-day training class.

In addition, if you have a topic in mind you don't see currently listed in our offerings, we'll build it for you for only a nominal fee to cover time and material costs.

The Customized Training Series is priced at $15,000 for 2 days or $20,000 for 3 days, plus travel expenses. In addition to the training, you have the option to purchase a one-day data center site assessment for $5,000. This assessment will be performed prior to the training in order to allow the training to address issues found in the assessment.

Please join us on LinkedIn & Twitter

PTS is excited to provide our peers with a new online forum in which to discuss the planning, design, engineering, and construction of data centers and computer rooms.

If you’ve been reading our blog for a while, you may already be aware of our Facebook Page. (A big ‘thank you’ to everyone who’s added themselves as fans!) Today, I’m happy to announce that PTS is further expanding our online presence with the goal of facilitating the open exchange of ideas among small-to-medium sized data center and computer room operators.

At the forefront of this effort is the newly created Computer Room Design networking group on LinkedIn. Hosted by the consultants and engineers at PTS Data Center Solutions, the group is an open forum in which professionals can share industry-related news, ideas, issues and experiences.

Membership is free and open to all professionals and vendors in the computer room and data center industry. We hope that industry leaders will look at this as an opportunity to share knowledge, discover new services and opportunities, and expand their networks.

So far, our networking group on LinkedIn has attracted broad interest, gaining more than 100 members in the first week alone. Featured discussions include best practices for consolidation strategies, how to combat downtime in the data center, and industry concerns regarding the Power Usage Effectiveness (PUE) metric.

This thought leadership is further supported on PTS’ Twitter profile, which features the latest industry news, highlights from the LinkedIn networking group, and insights from our engineers. If you’re on Twitter, please send us a message and we’ll be sure to follow you back!

Monday, June 29, 2009

Energy Efficiency Remains Priority In Spite of Economic Troubles

In lean times, data centers are learning to do more with less. The Aperture Research Institute of Emerson Network Power just released the results of a study showing that, despite the global economic downturn, energy-efficiency is still a top-of-mind objective for many data centers. In fact, data center managers are concentrating on resolving efficiency issues as a way to balance increasing demand for IT services with stagnant budgets.

The report reveals that:

Data center managers will look at ways to squeeze more from their existing resources, with 80 percent of those surveyed saying they can create at least 10 percent additional capacity through better management of existing assets. Thirty percent of those surveyed said they could find an additional 20 percent. There is likely to be a revitalized focus on tools that provide insight into resource allocation and use.

Data centers will also look to green initiatives to help manage their operating expenses, with 87 percent of those surveyed having a green initiative in place and the majority expecting to continue or intensify these efforts.

The survey data also suggests that the downturn will have "little effect on the demand for IT services" – a positive indicator for economic recovery. I recommend downloading the full Research Note as a PDF at the Aperture Research Institute’s website. It’s an interesting read.

Wednesday, June 24, 2009

Investing in Energy-Efficient Equipment

In "Taking Control of the Power Bill", Bruce Gain takes a look at how many data center admins are retooling their IT infrastructures’ power needs to accommodate growth and slash costs. He notes that although many admins struggle with having to pay additional costs associated with switching to more eco-efficient server room cooling, airflow designs, and other related equipment, paying for more expensive yet efficient equipment is a smart investment when you look at the big picture.

In order to justify that investment, admins should calculate the ROI offered by different scenarios. By creating models to outline the costs of ownership for different configurations and doing a full cost-benefit analysis, you can ease the decision-making process. Once you begin making the switch to a more energy-efficient approach, it’s recommended that your organization phase in new equipment as part of the natural growth and evolution of your IT systems.

Michael Petrino, vice president of PTS, also offers his thoughts on the subject, providing a concrete example of cheaper yet less efficient components vs. more power-efficient but costly alternatives. I encourage you to check out the full article in Vol.31, Issue 17 of PROCESSOR.

Tuesday, June 09, 2009

Drug Companies Put Cloud Computing to the Test

Traditionally characterized as "late adopters" when it comes to their use of information technology (IT), major pharmaceutical companies are now setting their sights on cloud computing.

Rick Mullin at Chemical & Engineering News (C&EN) explores how Pfizer, Eli Lilly & Co., Johnson & Johnson, Genentech and other big drug firms are now starting to push data storage and processing onto the Internet to be managed for them by companies such as Amazon, Google, and Microsoft on computers in undisclosed locations. In the cover story, “The New Computing Pioneers”, Mullin explains:

“The advantages of cloud computing to drug companies include storage of large amounts of data as well as lower cost, faster processing of those data. Users are able to employ almost any type of Web-based computing application. Researchers at the Biotechnology & Bioengineering Center at the Medical College of Wisconsin, for example, recently published a paper on the viability of using Amazon's cloud-computing service for low-cost, scalable proteomics data processing in the Journal of Proteome Research (DOI: 10.1021/pr800970z).”

While the savings in terms of cost and time are significant (particularly in terms of accelerated research), this is still new territory. Data security and a lack of standards for distributed storage and processing are issues when you consider the amount of sensitive data that the pharmaceutical sector must manage. Drug makers are left to decide whether it’s smarter to build the necessary infrastructure in-house or to shift their increasing computing burdens to the cloud.

Friday, June 05, 2009

Data Center Professionals Network

The other day I stumbled across the Data Center Professionals Network, a free online community for professionals from around the world who represent a cross section of the industry. Members include data center executives, engineering specialists, equipment suppliers, training companies, real-estate and building companies, colocation and wholesale businesses, and industry analysts. The recently launched networking site enables key players in the industry to easily connect, interact, and develop business opportunities.

According to Maike Mehlert, Director of Communications:

The Data Center Professionals Network has been set up to be a facilitator for doing business. It acts as a one-stop-shop for all aspects of the data center industry, from large corporations looking for co-location or real estate, or data centers looking for equipment suppliers or services, to engineers looking for advice or training.

Features of the social network include a personalized user profile, as well as access to job boards, business directories, press releases, classified ads, white papers, photos, videos and events.

I haven’t had a chance to join yet, but if you want to check it out, visit their site (you can sign in using a Ning ID if you already have one). If you do, post a comment and let me know what you think.

Wednesday, May 20, 2009

How Big Should Large Screen Displays Be In Your Command & Control Room?

Many A/V planners are challenged with how big large screen displays should be in their command & control rooms. There are actually some fairly complicated calculations that can be done which will help you determine what the minimum character size (sometimes referred to as 'x' size) should be under a given circumstance. This 'x-size' is defined as being the height of the smallest coherent element within the presented material. Think of this in terms of a lower-case letter x.

This lower case 'x' - which really is the same height as the smallest of the lower case letters - should subtend not less than 10 arc minutes on a viewer's retina to be recognized at any viewing distance. This becomes more complicated when viewers are located off axis to the center of the screen, as this requires a larger subtended angle, and there is some effect as a result of colored symbols, the amount of time the image is on screen, etc. As you can imagine, if you were sizing a screen for projection of a spreadsheet in order to review your Data Center metrics, you might want to use these calculations (which can be found in this ICIA publication).
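The basic geometry behind the 10-arc-minute rule is straightforward: the required 'x' height grows linearly with the distance to the farthest viewer. As a minimal sketch (ignoring the off-axis, color, and dwell-time factors mentioned above, and using an illustrative function name):

```python
import math

def min_x_height_mm(viewing_distance_m, arc_minutes=10.0):
    """Smallest lower-case 'x' height (in mm) that subtends the given
    visual angle at the farthest viewer's eye, for an on-axis viewer."""
    theta = math.radians(arc_minutes / 60.0)  # arc minutes -> degrees -> radians
    # Height of a chord subtending angle theta at distance d: 2 * d * tan(theta/2)
    return 2 * viewing_distance_m * math.tan(theta / 2) * 1000  # metres -> mm

for d in (3, 6, 10):  # farthest-viewer distances in metres
    print(f"{d} m viewer: x-height of at least {min_x_height_mm(d):.1f} mm")
```

At 10 metres this works out to roughly 29 mm of character height, which you would then scale up against the character-to-screen-height ratio of the material being shown (a dense spreadsheet needs a far larger screen than a four-line dashboard).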

Alternatively, a good free presentation on this subject can be found online.

Tuesday, May 05, 2009

Should New York Stock Exchange be hiding the location of its new Data Center?

I find it interesting that major financial institutions & government agencies attempt to hide the locations of their Data Centers. How effective can this non-disclosure aspect of security really be in today's media-frenzied world? Obviously not too effective, since NYSE's new Data Center build is already being talked about in Data Center Knowledge & The Bergen Record.

Even if details of this site location go unpublished, word from employees & vendors who support the site will certainly spread. I'm not saying that we should broadcast the location of this Data Center in neon lights, but if a new Data Center is constructed covering all 4 disciplines of security (Physical, Operational, Logical & Structural, or POLS), will it matter if the public knows where the Data Center is, so long as security is thoroughly covered? It isn’t likely that the NYSE can really hide the whereabouts of its ~400K square foot Data Center anyway. Most Data Center designers cover Physical & Logical security systems thoroughly, as those disciplines are maturing. What is often not covered thoroughly is Structural Security: organizations become so focused on getting a CO and getting the new Data Center live that they often don’t cover themselves from the structural threats of fire, water, theft & wind.

How many Data Centers are built with only a 20 minute fire rated door? How many Data Centers are built with more than a 10-15 minute Class 125 rating? The really interesting aspect is that there are new building materials that can provide Structural Security and eliminate these unnecessary exposures while actually constructing the facility & obtaining the CO faster.

Thursday, April 30, 2009

Free Data Center Assessment Tools from the Department of Energy.

It certainly shows where we are at in this country when the Government is creating free tools to help us assess our efficiency and giving us guidance on how to improve our Data Center efficiency. What choice does the DoE have, with the rising demand for power from our Data Centers expected to be 10% of the total US demand for power by 2011, while we have a growing need to reduce our carbon footprint & demand on fossil fuels?

In my opinion, a couple of areas of caution are warranted in the use of these free tools. First, the tool is free, but you still have to have the means to collect the data to enter into it: details about the power consumption of your equipment & whether the equipment can be controlled, utility bills, temperature readings at the rack inlet & on the supply and return, airflow readings, etc. Second, the presentation & guidance suggest that we can use air-side & water-side economizers, decrease our airflow, and raise our water temperature & supply-side air set points without even discussing the impacts this could have on availability. The guidance for use of the tools discusses the use of thermography or CFD, but treats it as merely a suggested option in our analysis of improving DCiE while we are raising temperatures & decreasing airflow. These tools do present value & they are free. I just wish our Government had stressed the tools' limitations & cautioned users on the other considerations that must be factored in, such as the availability requirements of your Data Center.

Wednesday, April 15, 2009

How important is it to consider the Grid for my back-up data center & DR Plan?

It has been several years since the August 2003 Blackout, but I can't help thinking that we are all being lulled to sleep about the next major grid issue. There are only 3 main power grids in the US, so if my primary Data Center is on the Eastern Interconnect, should my DR requirement be to locate my back-up site in TX on the ERCOT grid, or in the west on the WSCC grid? Or is there any benefit to locating in a different NERC region, of which there are 10 in the US? Can that benefit be equivalent to being on a separate grid? I doubt it, since the 2003 Blackout crossed multiple NERC regions in the Eastern Grid.

Should I not be concerned with this at all, and just choose or build a site with a higher level of redundant & back-up power? Is it more important to have the DR site in a location easily accessible to our technical experts than it is to have it on a different grid? Remember, 9/11 grounded flights, so if we had another event of that magnitude it would take days for my technical experts to get to our DR site, if they could get there at all. Of course, we can introduce many tools for full remote control & power control, where access to our physical environment becomes less important, so should I make it best practice to get that DR site on a separate grid? If I put locating my DR site into my key design criteria, where should it fall on my priority list?

Monday, April 13, 2009

Going Green with Data Center Storage

Just saw an interesting article in Enterprise Strategies about the use of magnetic tape as an energy-efficient storage solution. In “Tape’s Role in the Green Data Center,” Mark Ferelli discusses how tape technology is making a comeback by helping to keep the data center green as utility bills rise. He explains:

The efficient use of disk can help with data center greening when a user reads and writes to the densest possible disk array to ensure capacity is maximized and more disk is not bought unnecessarily.

In archiving, on the other hand, the greenest option is tape, which uses less power and produces a lower heat output. This not only eases the bite of the utility bill but places less strain on HVAC systems. In contrast, the case can be made that using disk for archiving does more harm since disks that spin constantly use much more power and generate more heat.
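To see why the archiving argument favors tape, here is a rough back-of-the-envelope calculation. The wattage figures, archive size, and utility rate below are illustrative assumptions of mine, not numbers from Ferelli's article:

```python
# Illustrative comparison of annual utility cost for a disk archive
# (constantly spinning) versus a tape archive (media mostly idle).
# All figures are assumptions for the sketch, not measurements.

DISK_WATTS_PER_TB = 8.0   # assumed: always-spinning archive disk
TAPE_WATTS_PER_TB = 0.5   # assumed: tape library, media idle most of the time
ARCHIVE_TB = 500          # assumed archive size
HOURS_PER_YEAR = 8760
COST_PER_KWH = 0.10       # assumed utility rate, USD

def annual_cost(watts_per_tb: float) -> float:
    """Annual utility cost for the archive at the given power density."""
    kwh = watts_per_tb * ARCHIVE_TB * HOURS_PER_YEAR / 1000.0
    return kwh * COST_PER_KWH

disk_cost = annual_cost(DISK_WATTS_PER_TB)
tape_cost = annual_cost(TAPE_WATTS_PER_TB)
print(f"Disk archive: ${disk_cost:,.0f}/yr, tape archive: ${tape_cost:,.0f}/yr")
```

And that is before counting the HVAC load needed to reject the extra heat the spinning disks generate, which roughly compounds the difference.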

Ferelli also takes a look at alternative power and cooling solutions, such as MAID (Massive Array of Idle Disks) storage arrays, in comparison with tape-based storage.

What’s been your experience with energy-efficient storage technology? Do tape-based systems offer better power savings versus disk-based solutions?

Friday, April 03, 2009

Google Unveils Server with Built-in Battery Design

For the first time on Wednesday, Google opened up about the innovative design of its custom-built servers.

The timing of the reveal, which coincided with April Fools' Day, left some wondering if the earth-shattering news was a prank. If it sounds too good to be true, it probably is, right? Not so in this case. In the interest of furthering energy efficiency in the industry, Google divulged that each of its servers has a built-in battery design. This means that, rather than relying on uninterruptible power supplies (UPS) for backup power, each of Google's servers has its own 12-volt battery. The server-mounted batteries have proven to be cheaper than conventional UPS and provide greater efficiency.
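To see why a battery on the server's 12-volt rail can beat a centralized UPS, consider the conversion stages involved. The efficiency figures below are illustrative assumptions of mine, not Google's published numbers:

```python
# Illustrative sketch: a centralized double-conversion UPS converts
# AC -> DC (rectifier) -> AC (inverter) before the server's own power
# supply converts AC -> DC again, losing a few percent at each stage.
# An on-board 12 V battery floats on the server's DC rail, so the
# round trip through the UPS disappears. All figures are assumptions.

ups_rectifier_eff = 0.95   # assumed AC -> DC stage of the UPS
ups_inverter_eff = 0.95    # assumed DC -> AC stage of the UPS
psu_eff = 0.90             # assumed server power supply

centralized_path = ups_rectifier_eff * ups_inverter_eff * psu_eff
onboard_path = psu_eff     # battery sits on the DC side; no extra stages

print(f"centralized UPS path: {centralized_path:.1%}, "
      f"on-board battery path: {onboard_path:.1%}")
```

Even with generous assumptions for the UPS stages, the on-board design avoids roughly a double-digit percentage of conversion loss per watt delivered.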

Google offered additional insights into its server architecture, its advancements in the area of energy efficiency, and the company’s use of modular data centers. For the full details, I recommend reading Stephen Shankland’s coverage of the event at CNET News. It’s fascinating stuff. Plus, Google plans to launch a site in a few days with more info.

Thursday, April 02, 2009

Can The Container Approach Fit Your Data Center Plans?

Conventional data center facilities have a long history of struggling to keep up with the increasing demands of new server and network hardware. Organizations are therefore looking for solutions that upgrade the facility along with the technology, rather than continuing to invest millions in engineering and construction to support higher densities, or bearing the expense of building or moving to new facilities that can handle those densities. Containers offer a repeatable, standard building block. Technology has long advanced faster than facilities architecture, and containerized solutions at least tie a large portion of the facility's advance to the technology's advance.

So why haven't we all already moved into containerized data center facilities, and why are so many new facilities underway with no plans for containers? Hold on: Google just revealed for the first time that since 2005, its data centers have been composed of standard shipping containers, each with 1,160 servers and a power consumption that can reach 250 kilowatts. First Google showed us all how to better use the Internet; now have they shown us how to build an efficient server and data center? The container reduces real estate cost substantially, but kW cost only marginally; Google really focused its attention on efficiency savings at the server level. Bravo!

The weak link in every data center project will always remain the site's ability to provide adequate redundant capacity, emergency power, and heat rejection. These issues do not go away with the container ideology. In fact, it could be argued that the net project cost of the container model could be greater, since the UPSs and CRAC units are often located within the container, which increases their overall count. Just as in any data center project, right-sizing the utility power, support infrastructure, and back-up power to meet the short- and long-term goals of your key design criteria is the most important consideration in any containerized project.

What containers do accomplish is creating a repeatable standard and footprint for the IT load and for how power, air, and communications are distributed to it. Organizations spend billions of dollars planning and engineering those aspects, in many cases only to find their solution is dated by the time they install their IT load. With containers, when you upgrade your servers you upgrade your power, air, and communications simultaneously, keeping them aligned with your IT load.
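As a quick sanity check, here is the per-server arithmetic implied by Google's disclosed container figures. Note that 250 kW is a stated peak, so typical draw will be lower:

```python
# Per-server power implied by Google's disclosed figures:
# 1,160 servers per container, up to 250 kW per container.
SERVERS_PER_CONTAINER = 1160
CONTAINER_PEAK_KW = 250.0

watts_per_server = CONTAINER_PEAK_KW * 1000.0 / SERVERS_PER_CONTAINER
print(f"~{watts_per_server:.0f} W per server at the container's peak draw")
```

Call it a bit over 200 W per server at peak, a figure that underscores how much of the efficiency story lives at the server level rather than in the container shell itself.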

What about the small and medium business market? Yes, the containerized approach is a very viable alternative to a 100,000+ square foot conventional build, but what about smaller applications? A container provides an all-encompassing building block for technology and facility architecture, but in a fairly large footprint. Not everyone needs 1,400 U of space or 22,400 processing cores, or has the wherewithal to invest over $500K per modular component. Unless SMBs want to colocate, or sign off to a managed service provider running their IT in a cloud in a new containerized data center, the container approach doesn't have a play for the SMB. Or does it? There are certainly solutions in the market to help an SMB build its own smaller-footprint, high-density movable enclosure or mini-container; it's surprising there has been so little focus on that much larger market. We are exploring containerized approaches to the SMB market that would also address branch and division applications for large organizations, where today's container offerings likely present too large a building block to be practical.

For more information about Containerized Data Centers & some of the methodologies for deployment I recommend Dennis Cronin's article in Mission Critical Magazine.

And certainly the details on CNET about Google's Containers & Servers.

Wednesday, April 01, 2009

Data Center Power Drain [Video]

Click here to watch a recent news report on what's being done to make San Francisco's data centers more energy efficient.

In the "On the Greenbeat" segment, reporter Jeffrey Schaub talks to Mark Breamfitt of PG&E and Miles Kelley of 365 Main about how utility companies and the IT industry are working to reduce overall energy consumption. According to the report, each of 365 Main's three Bay Area data centers uses as much power as a 150-story skyscraper, with 40 percent of that power used to cool the computers.

Wednesday, March 25, 2009

Spending on Data Centers to Increase in Coming Year

An independent survey of the U.S. data center industry commissioned by Digital Realty Trust indicates that spending on data centers will increase throughout 2009 and 2010.

Based on Web-based surveys of 300 IT decision makers at large corporations in North America, the study reveals that more than 80% of the surveyed companies are planning data center expansions in the next one to two years, with more than half of those companies planning to expand in two or more locations.

In addition, the surveyed companies plan to increase data center spending by an average of nearly 7% in the coming year. “This is a reflection of how companies view their datacenters as critical assets for increasing productivity while reducing costs," noted Chris Crosby, Senior Vice President of Digital Realty Trust.

To view the rest of the study findings, visit the Investor Relations section of Digital Realty Trust's website.

Thursday, March 19, 2009

Top 3 Data Center Trends for 2009

Enterprise Systems just published the “Top Three Data Center Trends for 2009” by Duncan Campbell, vice president of worldwide marketing for adaptive infrastructure at HP. In the article, Campbell discusses how companies need to get the most out of their technology assets and, in the coming year, data centers will be pressured to "maintain high levels of efficiency while managing costs". In addition, companies will need to make an up-front investment in their data center assets in order to meet complex business demands.

Campbell predicts:
  • “There will be no shortage of cost-cutting initiatives for enterprise technology this year.”
  • “As virtualization continues to enable technology organizations to bring new levels of efficiency to the data center, the line between clients, servers, networks and storage devices will continue to blur.”
  • “Blade offerings will continue to mature in 2009. Server, storage, and networking blades will continue to improve their energy efficiency and reduce data center footprints. Vendors are also now developing specialty blades, finely tuned to run a specific application.”

Efficiency, agility, and scalability will remain priorities for companies. By taking advantage of innovative data center technologies, companies can further reduce costs while increasing productivity – a goal that is of particular importance during challenging economic times.

Wednesday, March 11, 2009

It’s Nap Time for Data Centers

Yesterday at the International Conference on Architectural Support for Programming Languages and Operating Systems in Washington, D.C., researchers from the University of Michigan presented a paper, titled “PowerNap: Eliminating Server Idle Power”.

“One of the largest sources of energy-inefficiency is the substantial energy used by idle equipment that is powered on, but not performing useful work,” says Thomas Wenisch, assistant professor in the department of Electrical Engineering and Computer Science. In response to this problem, Wenisch's team has developed a technique to eliminate server idle-power waste.

Their paper addresses the energy efficiency of data center computer systems and outlines a plan for cutting data center energy consumption by as much as 75 percent. This would be accomplished through the concurrent use of PowerNap and the Redundant Array for Inexpensive Load Sharing (RAILS). PowerNap is an energy-conservation approach that enables the entire system to transition rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load, essentially putting servers to sleep as you would an ordinary laptop. RAILS is a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap's power demands.

The paper concludes:

PowerNap yields a striking reduction in average power relative to Blade of nearly 70% for Web 2.0 servers. Improving the power system with RAILS shaves another 26%. Our total power cost estimates demonstrate the true value of PowerNap with RAILS: our solution provides power cost reductions of nearly 80% for Web 2.0 servers and 70% for Enterprise IT.
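Note that those headline numbers compose multiplicatively, not additively: RAILS shaves its ~26% off what remains after PowerNap's ~70% cut. A quick sketch of the arithmetic behind "nearly 80%":

```python
# How the paper's two savings figures combine for Web 2.0 servers:
# PowerNap cuts average power ~70% versus the blade baseline, then
# RAILS removes a further ~26% of the power that remains.
powernap_reduction = 0.70
rails_reduction = 0.26

remaining_fraction = (1 - powernap_reduction) * (1 - rails_reduction)
total_reduction = 1 - remaining_fraction

print(f"Combined reduction: {total_reduction:.1%}")
```

That works out to roughly 78%, which matches the paper's "nearly 80%" power cost claim for Web 2.0 servers.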

To read the full text, visit Wenisch's site to download a PDF of the paper.

Monday, March 09, 2009

Finding the Silver Lining During an Economic Downturn

It seems, no matter which way you look these days, there’s more bad news. Job losses are up. The stock market is down. But not every business is focusing on the negative. In fact, there’s even a growing list of companies refusing to take part in the recession. As Jamie Turner at the 60 Second Marketer writes:

To be sure, times are tough. They’re downright B-A-D. But the world isn’t ending. The sky is not falling. In fact, you and your business will be here tomorrow and the next day — if you stop focusing on the negative and start focusing on the positive.

In light of this, I’d like to highlight one company who sees data center opportunity despite the poor economy: Juniper Networks. According to this article in Network World, Juniper has “launched an aggressive campaign to expand its enterprise business with a targeted assault on the data center.” They’ve announced a project, called Stratus, which their blog describes as an attempt to “create a single data center fabric with the flexibility and performance to scale to super data centers, while continuing to drive down the cost and complexity of managing the data center information infrastructure.”

And why announce Stratus now? Tom Nolle, president of consultancy CIMI Corp, explains: “Juniper cannot hope to match Cisco in breadth so it is making that an asset instead of a liability. Juniper is timing its success with Stratus to the economy's recovery and to developing symbioses with partners.”

That’s the kind of strategic, fighting spirit that helps a company come out on top, wouldn’t you say?

Friday, February 20, 2009

Improving Mobile Applications in the Enterprise

Look for Michael Petrino, vice president of PTS Data Center Solutions, in the latest issue of PROCESSOR (Vol.31, Issue 8).

In "Essential Mobile Tools: Maximize Your Mobile Toolset to Better Unlock Wireless’ Potential", Petrino shares his thoughts on the importance of establishing the right power infrastructure in order to improve the broadcast range of on-campus wireless connections.

The article discusses several easy-to-implement ways that enterprises can make better use of mobile applications so that they can support mobile employees without placing an unnecessary burden on the data center or IT support teams. It features insights from Robert Enderle, an analyst for the Enderle Group, and Joel Young, CTO and senior vice president of R&D at Digi International.

To read the full article, please visit

Tuesday, January 27, 2009

Acquisition of NTA’s Technology Consulting Assets

I’m pleased to announce that PTS has officially acquired critical components of Nassoura Technology Associates, LLC (NTA), including all of its technology consulting assets. If you are not already familiar with NTA, they were a leading technology consulting and engineering firm based in Warren, New Jersey, that developed in-house the widely acclaimed software product dcTrack3.0. Recently, Raritan, Inc. purchased NTA’s dcTrack3.0 product in a separate transaction.

NTA’s assets will enable us to expand our existing technology consulting service offerings including network, structured cabling, security, and audio/visual design. Furthermore, this acquisition enables us to enhance our existing library of technical drawings, specifications, and request for proposal (RFP) documentation. Also included in the acquisition was the transfer of documents for all NTA’s completed client projects across a broad spectrum of industries.

If you are a previous client of NTA, we will continue to maintain your design documents and provide you with the expert level of service you had become accustomed to as an NTA client. We are extremely excited to expand our customer base and to have this opportunity to improve our client deliverables by acquiring the assets of one of the most influential design firms serving the data center industry.

In addition to the acquisition of NTA’s technology consulting assets, we are also pleased to announce the addition of six (6) new employees to our growing family of data center experts. We are sure they will contribute substantially to PTS’ continued growth in 2009. The new employees include data center solutions professionals, Andrew Graham, Peter Graham, and Michael Piazza as well as architect, Michael Relton, and senior electrical engineer, Alex Polsky, P.E.

The latest new employee is data center software development pioneer Dave Cole. Dave has a storied history of developing software and hardware products for System Enhancement Corporation, later purchased by APC, and for Hewlett-Packard. Most notably, however, Dave founded and then sold his company, The Advantage Group, along with his industry-leading data center support infrastructure device monitoring product, to Aperture, later purchased by Emerson. Stay on the lookout for further announcements as to what Dave and I are up to.

Monday, January 19, 2009

Data Centers Understaffed and Underutilized?

The following news snippet caught my eye and I couldn’t resist sharing it here:

Symantec Corp.'s State of the Data Center 2008 report paints a picture of understaffed data centers and underutilized storage systems.

The report, based on a survey of 1,600 enterprise data center managers and executives, found storage utilization at 50%. The survey also discovered that staffing remains a crucial issue, with 36% of respondents saying their firms are understaffed. Only 4% say they are overstaffed. Furthermore, 43% state that finding qualified applicants is a problem.

Really interesting numbers, particularly when it comes to staffing issues. With so many layoffs and other cutbacks happening, it’s not so surprising that firms feel understaffed. However, with the national unemployment rate reaching 7.2 percent for December, I don’t think finding qualified applicants will be as much of a problem in 2009. As for the underutilization of storage systems, this is a major contributor to high data center costs. If corporate budgets continue to get slashed, I can guarantee that virtualization is going to stay right at the top of most data center managers’ to-do lists for the foreseeable future.

(By the way, if you’re an unemployed techie, you might want to check out this article: Socialtext is offering its social networking tools free to laid-off workers who want to form alumni networks and share job leads.)