Tuesday, December 23, 2008

Data Center Energy Efficiency in 2009

In my last post, I talked about how it will be more important than ever for data centers to increase their operating efficiencies in the coming year. But, as I’m sure you know, this isn’t a new issue. Boosting energy efficiency in data centers has been a major concern for the past few years, in both the public and private sectors. Doing so will help to produce large energy savings, enhance data center reliability, and cut carbon emissions by reducing the load on the electric grid.

To reach these goals, equipment suppliers are introducing more energy efficient technologies, data center operators are stepping up efforts to reduce energy consumption in their buildings, and the U.S. Department of Energy (DOE) and the U.S. Environmental Protection Agency (EPA) have moved to address the issue by initiating a joint national data center energy efficiency information program. PTS helps our clients reach their efficiency goals through our power and cooling systems analysis, as well as via our CFD modeling services.

Now, with the recession in full swing, the financial impact of high data center energy consumption is becoming an even more pressing issue for corporations. In light of this, I'd like to share an article I came across in the latest issue of Wall Street & Technology, titled “5 Tips to Cut Data Center Energy Use”. It talks about how Gartner, a leading IT research and consulting firm, has outlined 11 best practices for cooling that can help dramatically improve data center energy efficiency.

While you have to buy the full Gartner report to get all 11 practices, the WS&T article gives you the top 5 for free. Here’s a quick rundown:

1. Plug Holes in the Raised Floor.
2. Install Blanking Panels.
3. Coordinate CRAC Units.
4. Improve Underfloor Airflow.
5. Implement Hot Aisles and Cold Aisles.

For more info on data center energy efficiency, I invite you to download our newest white paper, titled “Power Moves: Understanding what you know - and don't know - about power usage in your data center”.

Until next time, happy holidays and best wishes for a prosperous new year!

Thursday, December 04, 2008

Get the Most Out of Your Data Center in 2009

With the economy in turmoil and fears of recession keeping corporate budgets tight, it’s important that organizations get the most bang-for-the-buck with their IT resources. With that in mind, I’d like to recommend another article that looks to the coming year with a proactive mindset.

Utility Automation & Engineering T&D and Electric Light & Power online recently published the “Top 10 Ways to Get More from Your Data Center in 2009”, as outlined by Chuck Spears of Emerson Network Power. The suggestions include:

1. Cover your bases.
2. Look inside before outside.
3. Assess before action.
4. Go from room to rack.
5. Cap the cold aisle.
6. Check the weather forecast.
7. Watch often, if not always.
8. Improve energy utilization.
9. Avoid cutting corners.
10. Don’t stop thinking about tomorrow.

While the article acknowledges that “[t]he coming year will undoubtedly require data center and IT managers to get maximum value from their facility without making significant enhancements”, it urges data center managers to bear in mind that “numerous opportunities exist throughout the data center to do more with less.”

I like to frame it in the following terms: “Sometimes adversity is what you need to face in order to become successful.” Lean times can help trim the fat from your operations and can encourage your business to make the most of what it has. In doing so, your organization may emerge stronger than ever before.

Tuesday, November 18, 2008

Top 10 IT Trends for 2009

Baseline, a magazine for technology leaders and business executives, just published a great slideshow on the IT trends to watch in 2009.

Knowledge of the top industry trends can make your company more agile when it comes to implementing information technology and “can give your company the advantage it needs to do business in this challenging economic environment.”

The highlighted trends include:
  • Software as a service (cloud computing),
  • Continued virtualization of data center technologies, and
  • The move toward energy-efficient data centers.
(They’ve also posted a slideshow on the 50 most influential people in business IT. Are you one of them?)

Wednesday, October 29, 2008

PTS to Provide Services for Verari’s FOREST Container Data Center

I’m happy to announce that PTS Data Center Solutions has partnered with Verari Systems to provide design and construction services for Verari’s FOREST Container data center consolidation solution. The combination of Verari’s modular, portable data center that is deployable virtually anywhere, with the broad project expertise of PTS, provides organizations with a complete and secure “ready to go” alternative to traditional brick-and-mortar data centers.

Unlike the traditional bid-build process, which costs more, takes longer to execute, and requires more resources to manage, the Verari/PTS FOREST Container construction strategy condenses the exercise into a clear and concise process. The combination of a portable data center with our turnkey services addresses consolidation needs with minimal disruption to business operations.

Recently named as a finalist in the “Clean Technology Category” of the 15th Annual AeA San Diego Council's High Tech Awards, the FOREST container is designed to house over 2,000 blade-based compute servers or nearly 12 petabytes of blade-based storage by utilizing Verari’s BladeRack® 2 X-Series platforms in a modular unit. The container’s ultra-efficient power subsystems and patented Vertical Cooling Technology™ dramatically reduce energy spending while boosting reliability, performance, and availability.

Click here to learn more.

Tuesday, October 21, 2008

Data Center Decisions Conference in Chicago

This Thursday, October 23rd, I’ll be a featured speaker at the 2008 Data Center Decisions Conference in Chicago, IL. This year’s conference will focus on four major topics: data center design, systems management, virtualization, and disaster recovery.

While the pace of data center and facility design improvements has continually lagged that of IT systems demand, in the last two years vendors have been rolling out various tools to help engineers design appropriate power and cooling infrastructure as well as help data center managers plan for IT capacity growth. My presentation delves into the capacity planning and modeling tools available for data center design and management.

In addition to providing an overview of the data center facility design life-cycle, I’ll discuss the tools available, review their functionality, and advise on which data center products are worth using. This will cover new capacity planning tools from APC, Emerson/Aperture, and Rackwise, as well as the role of CFD modeling software and services. I’ll also be available for questions during the “Ask the Expert” segment of the daily exhibit hall receptions.

For complete conference information and to register for free admission, visit the Data Center Decisions website at http://datacenterdecisions.techtarget.com/. I hope to see you there!

Tuesday, September 02, 2008

Ongoing Maintenance & Monitoring

It is an obvious truism that, given enough time, everything will fail. The only tool we have at our disposal to delay this eventuality is maintenance service. Unfortunately, it’s another truism that in most small computer room operations this vital step is not performed. All too often, we avoid a short-term inconvenience now in exchange for unplanned and unpredictable grief later.

Whatever the circumstance, there are plenty of tools to simplify the organization, planning, scheduling, and performance of field preventative maintenance. Real-time monitoring systems can serve as the front line of defense against unplanned outages.

PTS emphasizes using IP and Web technologies to oversee and control critical support systems from just about anywhere. For power monitoring, we prefer to take advantage of the growing trend among power strip manufacturers and have clients deploy power strips that can measure at the receptacle, and therefore at the device level. For example, simple alarming as critical load values approach predetermined thresholds will protect against failures due to overload conditions and thereby curtail availability-stripping outages.
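
For readers who like to see the mechanics, here’s a minimal sketch of that kind of threshold alarming in Python. The readings, thresholds, and notify() hook are all hypothetical and not any particular vendor’s API; a real deployment would pull values over SNMP or the strip’s Web interface and page the on-call staff.

    # Minimal sketch of receptacle-level threshold alarming.
    # All values and the notify() hook are hypothetical.

    WARN_THRESHOLD = 0.80   # warn at 80% of rated capacity
    CRIT_THRESHOLD = 0.90   # alarm at 90% of rated capacity

    def notify(level, message):
        """Placeholder for email/SNMP trap/paging integration."""
        print(f"[{level}] {message}")

    def check_receptacle_loads(readings, rated_watts):
        """Compare each receptacle's measured load against its rating."""
        for receptacle, watts in readings.items():
            utilization = watts / rated_watts[receptacle]
            if utilization >= CRIT_THRESHOLD:
                notify("CRITICAL", f"{receptacle} at {utilization:.0%} of rated load")
            elif utilization >= WARN_THRESHOLD:
                notify("WARNING", f"{receptacle} at {utilization:.0%} of rated load")

    # Example run with made-up values:
    check_receptacle_loads(
        readings={"rack3-outlet12": 1140, "rack3-outlet13": 610},
        rated_watts={"rack3-outlet12": 1200, "rack3-outlet13": 1200},
    )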

In any case, monitoring systems for both IT and physical attributes should provide proactive management, enable a quick assessment of your present situation, and notify the appropriate personnel should a situation arise that threatens availability.

Is preventative maintenance high enough on your data center to-do list? What technologies do you rely on to monitor your critical systems?

Monday, August 11, 2008

Receptacle Level Load Monitoring & Control

Power monitoring and control at the receptacle or rack level is a hot topic lately. Part of the interest can be attributed to the lure of the unknown – that feeling of “I’m not sure why I want it, but I’ll probably need it!” But there are some really solid reasons for data center managers to consider receptacle-level power monitoring/control solutions.

The ability to track wattage at the power strip level gives a much clearer picture of how much power a data center consumes. If I have an under-performing asset, it’s easy to earmark for replacement if the problem can be measured down to the receptacle level. If an asset is under-utilized, it can be easily targeted for virtualization.
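
As a rough illustration (with made-up wattage figures, not data from any real site), per-receptacle readings make this kind of screening almost trivial:

    # Rough sketch: use per-receptacle wattage history to flag candidates
    # for consolidation or virtualization. Numbers are hypothetical.

    def flag_underutilized(avg_watts, peak_watts, threshold=0.30):
        """Return devices whose average draw is a small fraction of their peak."""
        candidates = []
        for device, avg in avg_watts.items():
            if avg / peak_watts[device] < threshold:
                candidates.append(device)
        return candidates

    print(flag_underutilized(
        avg_watts={"web-01": 95, "db-02": 410, "app-07": 120},
        peak_watts={"web-01": 450, "db-02": 500, "app-07": 450},
    ))
    # -> ['web-01', 'app-07']  (made-up figures)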

There are a number of products that can be used for receptacle-level power monitoring and management. Take, for instance, the RPC series of power management solutions from Baytech. These units let you manage power more efficiently by remotely turning on/off receptacles or rebooting unresponsive equipment. (You can read more about Baytech’s products in “Better Monitor & Control Power” at Processor.com.)

Raritan offers Remote Power Control (RPC) units that allow you to control power usage at the socket level. The units have individual LED indicators for each receptacle and, in the case of an outage, offer receptacle status retention so that power is restored only to those assets that were previously on.

There are also the Synaptix™ power distribution units from Epicenter. These products come in a variety of receptacle configurations, offer the ability to measure consumption at each individual receptacle, and can be accessed remotely.

It will be interesting to measure the true impact of these units on data center power efficiency. Don’t be surprised to find me writing a white paper on the use of receptacle-level power solutions in the coming months.

Monday, August 04, 2008

New White Paper on Power Usage

Managing data center power usage is critical due to rising energy costs and diminishing supplies. But where do you start?

To help answer this question, PTS Data Center Solutions in collaboration with Raritan, a leading manufacturer of power management products, developed a new white paper that examines the myths and realities of power usage in the data center.

Entitled “Power Moves: Understanding what you know - and don't know - about power usage in your data center”, the white paper shows IT professionals how to calculate data center power efficiency and set standards to align with the Green Grid. Its findings are based on a series of tests which examined the effects of heat, airflow and power usage in a working server environment using 3-D CFD software and intelligent power distribution units (iPDUs), among other devices.

The hypothesis was that, by knowing more about their real-time operational environment, data center managers would be empowered to manage smarter. Key findings include:
  • Additional servers are not necessarily needed to add computational power;
  • Operating expenses (OpEx) can be reduced without putting required computing at risk; and
  • Running servers at 80 to 100 percent utilization can be more beneficial than running at the industry average of 60 to 80 percent.
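
The white paper walks through the full methodology, but the underlying Green Grid arithmetic is simple enough to sketch; the meter readings below are purely illustrative:

    # Illustrative only: the two Green Grid efficiency ratios, computed from
    # hypothetical meter readings. Total facility power includes UPS, cooling,
    # lighting, and distribution losses on top of the IT load itself.

    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    def dcie(total_facility_kw, it_equipment_kw):
        """Data Center infrastructure Efficiency: the reciprocal of PUE."""
        return it_equipment_kw / total_facility_kw

    total_kw, it_kw = 500.0, 290.0   # made-up readings
    print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # ~1.72
    print(f"DCiE = {dcie(total_kw, it_kw):.0%}")   # ~58%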

To download the free white paper, please visit the PTS Media Library.

Thursday, July 17, 2008

Tips for Handling Data Center Moves and Shortages of Space

Look for PTS Data Center Solutions in the July 11th issue of Processor magazine (Vol.30, Issue 28).

Kurt Marko interviewed me for the feature article, “Need More Data Center Space?: IT Managers Are Faced With Options Ranging From Simple Housekeeping To Major Construction”. Adding data center space can be a complex and costly issue. If your data center runs out of room, the basic options are to 1) reorganize and consolidate to get the most out of your existing space, 2) upgrade your technology to increase density, 3) call in a contractor to renovate and expand your current facility, 4) add on a “data center in a box”, or 5) build a bigger, better data center. Marko’s article discusses these options and gives a rundown of the pros and cons of each.

Michael Petrino, vice president at PTS Data Center Solutions, also appears in this issue of Processor. In Bruce Gain’s article, “Data Center Moving Day: There Is No Such Thing As ‘Over Planning’”, Michael shares his thoughts on how to prepare for a data center relocation project. Topics covered include the overall planning process, what to look for when hiring professional movers, the costs of up-time and down-time, transport options, and other complications.

Click on the links above to read the articles, or view the entire issue as a PDF.

Tuesday, July 01, 2008

Data Center Energy Summit 2008

On June 26th, the Silicon Valley Leadership Group (SVLG) held its first Data Center Energy Summit in Santa Clara, CA. The industry event focused on issues involving data center sustainability, energy efficiency and green computing.

In conjunction with Accenture and the Lawrence Berkeley National Laboratory (LBNL), the SVLG also unveiled a report containing real world case studies from its Energy Efficient Data Center Demonstration Project. You can download the report here: http://accenture.com/SVLGreport. Put together in response to the Environmental Protection Agency (EPA)’s report to Congress on data center energy efficiency, the report examines a number of innovative energy-saving initiatives.

Ken Oestreich from Cassatt points out in his blog that the bulk of the projects focused on improving infrastructure. He raises the following point:

My take is that the industry is addressing the things it knows and feels comfortable with: wires, pipes, ducts, water, freon, etc. Indeed, these are the ‘low-hanging fruit’ of opportunities to reduce data center power. But why aren't IT equipment vendors addressing the other side of the problem: Compute equipment and how it's operated?


I agree with Oestreich that methods for reducing the energy consumption of IT equipment definitely need to be explored further, but I think this report is a great step forward for the industry in terms of validating the EPA’s research and providing actionable data. I’m sure we’ll see more regarding IT equipment operations in future research.

As a side note, Data Center Knowledge has set up a calendar to help data center professionals keep track of upcoming industry events. Check it out: DataCenterConferences.com.

Thursday, June 12, 2008

PTS Data Center Solutions Turns 10-Years Old!

Time flies like the wind. Fruit flies like bananas.
-- Groucho Marx

All kidding aside, time really does fly! It's hard for me to believe, but it was a decade ago that we founded PTS Data Center Solutions (known way-back-when as Power Technology Sales, Inc.). Our goal then, as it is now, was to provide our clients with unparalleled service and optimal solutions to meet their data center and computer room needs.

As we celebrate the company’s tenth anniversary, I’d like to express my appreciation to our hardworking team of consultants, engineers, designers, field service technicians, IT personnel and business staff, as well as our families, friends, business colleagues and clients for being part of our success.

While founded and headquartered in Franklin Lakes, New Jersey, our firm has experienced significant growth over the years, starting with the opening of our West Coast office in Orange County, California in 2004. Just a few years later, PTS Data Center Solutions completed the expansion and reorganization of our NJ facilities – an accomplishment that doubled the amount of useable office and warehouse space available to our team. We also upgraded our computer room, which hosts PTS' live environment and operates as a demonstration center for potential clients to see our work first-hand.

Over the course of the last decade, we’ve had the pleasure of working with small and medium-sized companies as well as large enterprise organizations across a broad spectrum of industry verticals. We’ve grown to become a multi-faceted turnkey solutions provider, offering services for consulting, engineering, design, maintenance, construction, monitoring and more. One of the more recent additions to our business offerings is our Computational Fluid Dynamic (CFD) Services, which use powerful 3-D CFD software for the design, operational analysis, and maintenance of data centers and computer rooms of all types and sizes.

Our online presence has also grown. We’ve expanded our corporate website several times to provide new resources for our visitors. To help provide our clients and other IT professionals with insights on common data center issues, we began blogging in 2006. (I’d like to thank all of our readers for your comments and ongoing support!) Just a few months ago, we launched our own Facebook Page to help you stay up-to-date with the latest blog posts, our speaking engagements and other upcoming events. And, coming soon, look for me to be a guest blogger for the “World’s Worst Data Centers” contest, sponsored by TechTarget and APC.

This really is an exciting time for everyone at PTS Data Center Solutions. Reaching this milestone is a great achievement for our company and we’re looking forward to what the next ten years have to offer. Here's to the decades ahead!

Friday, May 23, 2008

Article: “Changing The Oil In Your Data Center”

This is just a quick update before everyone heads out for the holiday weekend.

If you haven't already done so, I encourage you to check out the May 16, 2008 issue of Processor magazine (Vol.30, Issue 20). Drew Robb interviewed me for his latest article, entitled “Changing The Oil In Your Data Center.”

Maintenance neglect is an all-too-frequent cause of unplanned data center downtime. This people-related problem stems from improper documentation of the maintenance process, failure to adhere to a set maintenance schedule, and the overlooking of critical systems. In the article, Robb talks with me about the value of implementing a scheduled maintenance plan to ensure reliable data center operations.

He also includes insights from several other data center professionals, including Steven Harris, director of data center planning at Forsythe Solutions Group (www.forsythe.com), and James Rankin, a CDW technology specialist (www.cdw.com). To read the full article, please visit the Processor website at http://www.processor.com/.

Have a safe and happy Memorial Day weekend!

Wednesday, May 07, 2008

The National Data Center Energy Efficiency Information Program

Matt Stansberry, editor of SearchDataCenter.com, posted a blog entry on the National Data Center Energy Efficiency Information Program. I'm echoing his post here because energy efficiency is such a critical issue for the industry.

The U.S. Department of Energy (DOE) and U.S. Environmental Protection Agency (EPA) have teamed up on a project that aims to help reduce energy consumption in data centers. In addition to providing information and resources that promote energy efficiency, the National Data Center Energy Efficiency Information Program is reaching out to data center operators and owners to collect data on total energy use.

In the words of the EPA’s Andrew Fanara:

We've put out an information request to anyone who has a data center to ask if you would measure the energy consumption of your data center in a standardized way and provide that to us. That will help us get a better handle on what's going on nationally in terms of data center energy consumption.


Hear the EPA’s Andrew Fanara talk about the program in this video from the Uptime Institute Symposium:



If you’d like to get your data center involved, more information can be found at the EPA's ENERGY STAR data center website and the DOE's Save Energy Now data center website.

Friday, April 25, 2008

CFD Modeling for Data Center Cooling

Computational fluid dynamics (CFD) modeling is a valuable tool for understanding the movement of air through a data center, particularly as air-cooling infrastructure grows more complex. By using CFD analysis to eliminate hot spots, companies can lower energy consumption and reduce data center cooling costs.

Mark Fontecchio at SearchDataCenter.com has written a great new article on the subject, entitled “Can CFD modeling save your data center?”. Fontecchio examines the use of CFD analysis as a tool for analyzing both internal and external data center airflow.

In the article, Carl Pappalardo, IT systems engineer for Northeast Utilities, provides a first-hand account of how CFD analysis helped in optimizing their data center’s cooling efficiency. Allan Warn, a data center manager at ABN AMRO bank, also shares his thoughts on the value of renting vs. buying CFD modeling software. Fontecchio also includes insights from industry experts, including Ernesto Ferrer, a CFD engineer at Hewlett-Packard Co., and from yours truly, Pete Sacco.

For more information on data center cooling, download my White Paper, “Data Center Cooling Best Practices”, at http://www.pts-media.com (PDF format). You can also download additional publications, like Vendor White Papers, from the PTS Media Library.

To learn more about how PTS uses CFD modeling in the data center design process, please visit: http://www.ptsdcs.com/cfdservices.asp.
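
Production CFD packages solve the full 3-D airflow and heat-transport equations over detailed room geometry, which is well beyond a blog post. Purely as a toy illustration of the kind of numerical modeling involved, here is a tiny 2-D steady-state heat-diffusion sketch; the grid, boundary temperatures, and “rack” hot spot are all invented.

    # Toy illustration only: a 2-D steady-state heat-diffusion grid solved
    # by Jacobi iteration. Real CFD tools model 3-D airflow, turbulence,
    # and equipment geometry; this sketch just hints at the numerics.
    import numpy as np

    N = 20                      # 20 x 20 grid "floor plan"
    T = np.full((N, N), 22.0)   # start at 22 C everywhere
    T[:, 0] = 15.0              # left wall: cold-air supply
    T[8:12, 14:16] = 45.0       # a block of "hot" rack cells

    for _ in range(2000):       # relax toward steady state
        interior = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])
        T[1:-1, 1:-1] = interior
        T[:, 0] = 15.0          # re-apply fixed boundary and heat-source cells
        T[8:12, 14:16] = 45.0

    print(f"Hottest cell: {T.max():.1f} C, coldest: {T.min():.1f} C")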

Tuesday, April 15, 2008

Free White Paper on Relative Sensitivity Fire Detection Systems

Fire detection is a challenge in high-end, mission critical facilities with high-density cooling requirements. This is due primarily to the varying levels of effectiveness of competing detection systems in high-velocity airflow computer room environments.

In a new white paper, PTS Data Center Solutions’ engineers Suresh Soundararaj and David Admirand, P.E. identify and analyze the effectiveness of relative sensitivity-based fire detection systems in a computer room utilizing a high-density, high-velocity, and high-volume cooling system.

In addition to examining the differences between fixed sensitivity and relative sensitivity smoke detection methodologies, Soundararaj and Admirand detail the results of fire detection tests conducted in PTS’ operational computer room and demo center using AirSense Technology’s Stratos-Micra 25® aspirating smoke detector.
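
To illustrate the conceptual difference only (this is not AirSense’s algorithm, and the numbers are invented): a fixed-sensitivity detector alarms at an absolute obscuration level, while a relative-sensitivity detector alarms on a rise above the ambient baseline it has learned for the room, which matters when high-velocity airflow dilutes the smoke.

    # Conceptual sketch only -- not AirSense's algorithm, and the figures
    # are invented. A fixed-sensitivity detector alarms at an absolute
    # smoke-obscuration level; a relative-sensitivity detector alarms on a
    # rise above the learned ambient baseline.

    def fixed_alarm(obscuration, absolute_threshold=0.20):
        return obscuration >= absolute_threshold

    def relative_alarm(obscuration, baseline, rise_threshold=0.05):
        return (obscuration - baseline) >= rise_threshold

    reading = 0.09           # % obscuration per metre (made-up value)
    ambient_baseline = 0.02  # learned background level (made-up value)

    print(fixed_alarm(reading))                       # False -- below absolute limit
    print(relative_alarm(reading, ambient_baseline))  # True  -- well above baseline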

The illustrated 13-page white paper, entitled “Relative Sensitivity-based Fire Detection Systems used in High Density Computer Rooms with In-Row Air Conditioning Units,” is available for download on our website in PDF format.

Tuesday, March 25, 2008

Reflections on the DataCenterDynamics Conference

Earlier this month, I had the honor of speaking in two separate sessions at the DatacenterDynamics Conference & Expo in New York City.

My first presentation, "The Impact of Numerical Modeling Techniques on Computer Room Design and Operations," was well received by its 60 or so attendees. Based on audience feedback provided both during and after the presentation, I think people really appreciated the practical examples and case studies of lessons learned since PTS began utilizing 3-D computational fluid dynamic (CFD) software as a tool for designing cooling solutions.

My second stint, with co-presenter Herman Chan from Raritan Computer, Inc., was equally well received. Our presentation on "Stop Ignoring Rack PDUs" described the research both our companies have undertaken regarding rack-level, IT equipment, real-time power monitoring.

As part of our presentation we displayed the results of our power usage study of PTS’ computer room. The data revealed that 58% of the total power entering PTS’ computer room is consumed, and dissipated as heat, by the IT critical load. This is far better than much of the published industry data. In the coming months, both Raritan and PTS hope to release a co-written white paper documenting the results of our study.
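
In Green Grid terms, that 58% figure works out to a DCiE of 0.58, or a PUE of roughly 1.72; here is the back-of-the-envelope conversion:

    # Back-of-the-envelope conversion of the 58% figure above into the
    # Green Grid metrics (illustrative arithmetic only).
    it_share = 0.58     # fraction of total power reaching the IT load
    dcie = it_share     # DCiE = IT power / total facility power
    pue = 1 / it_share  # PUE  = total facility power / IT power
    print(f"DCiE = {dcie:.2f}, PUE = {pue:.2f}")   # DCiE = 0.58, PUE = 1.72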

Overall, the DatacenterDynamics show was even better attended and better sponsored than it was in 2007. I estimate there were some 500-700 people in attendance for the event. If last year’s trend held true, about 50% of them were data center operators; the balance was made up of consultants, vendors, and others.

This show has become my favorite regional data center industry event because of its unique single-day format and the quality of the content provided by its featured speakers (and I’m not just saying that because I’m a presenter). Many shows of this type turn into a commercial for the vendors that pay good money to sponsor the event.

What sets DataCenterDynamics apart is that the event organizers demand that each presentation be consultative in nature. Additionally, they make every effort to review and comment on each presentation before the event. If you haven’t attended this key data center industry event yet, I hope you’ll get the chance to do so in the near future.

Have you attended DataCenterDynamics? What sessions did you find most valuable? Please leave a comment to share your experience.

Tuesday, February 26, 2008

Are the Uptime Institute's Data Center Rating Tiers Out of Date?

Let me start by saying I have the utmost respect for the Uptime Institute’s Pitt Turner, P.E., John Seader, P.E., and Ken Brill and the work they have done to bring standards to an otherwise standard-less subject like data center design. However, as a data center designer I feel their definitive work, Tier Classifications Define Site Infrastructure Performance, has passed its prime.

The Institute’s systems have been in use since 1995, which is positively ancient in the world of IT.

In its latest revision, the Uptime Institute’s Tier Performance Standards morphed from a tool that helps IT and corporate decision makers weigh the differences between data center investments into a case study for consulting services pushing certification against their standard.

While the data their standard is based upon has been culled from real client experiences, that data has been interpreted by only one expert company, ComputerSite Engineering, which works in close collaboration with the Uptime Institute. Surely the standard could be vastly improved with the outside opinion and influence of the many equally expert data center design firms in the industry.

Case in point, the Uptime Institute has repeatedly defended the notion that there is no such thing as a partially tier-conforming site (Tier I+, almost Tier III, etc.). They argue that the rating is definitive and that to say such things is a misuse of the rating guide. While I understand the argument that a site is only as good as its weakest link, to claim that a site incorporating most, but not all, of the elements of a tier definition is no better than one at the tier below is mathematically and experientially wrong.

PTS’ actual experiences bear this out. Our clients that have all the elements of a Tier II site except for the second generator are clearly better off than those with no UPS and/or air conditioning redundancy (Tier I). Therefore, if not with a Tier I+ designation, how do they suggest we account for the vast difference in real availability between the two sites?
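
To put rough numbers behind that intuition, here is a simplified availability calculation with hypothetical per-component figures. It is not the Institute’s methodology, only an illustration of why redundancy on some subsystems, but not all, still moves the needle:

    # Simplified, hypothetical availability arithmetic -- not the Uptime
    # Institute's methodology. It only illustrates why adding redundancy to
    # some subsystems (UPS, cooling) improves availability even when another
    # subsystem (the generator) remains a single point of failure.

    def series(*availabilities):
        """All components must work: multiply availabilities."""
        result = 1.0
        for a in availabilities:
            result *= a
        return result

    def redundant_pair(a):
        """Two independent units, either can carry the load (N+1)."""
        return 1 - (1 - a) ** 2

    ups, crac, generator = 0.999, 0.998, 0.995   # made-up per-unit figures

    tier1_like = series(ups, crac, generator)                              # no redundancy
    tier1_plus = series(redundant_pair(ups), redundant_pair(crac), generator)

    print(f"No redundancy:            {tier1_like:.5f}")
    print(f"Redundant UPS + cooling:  {tier1_plus:.5f}")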

It is interesting that most data center consulting, design, and engineering companies nationwide utilize elements of the white paper as a communications bridge to the non-facility engineering community, but not as part of their design process. In fact, most have developed and utilize their own internal rating guides.

While I will continue to draw on their indisputable expertise as part of my own interpretation in directing PTS’ clients with their data center investment decisions, I suggest that clients would be wise not to put all of their eggs in the Institute’s basket at this point in time.

What is your outlook on the Uptime Institute’s Tier Performance Standards? Is the four-tier perspective outdated or is it still a meaningful industry standard?

Friday, February 15, 2008

Facebook user? Add yourself as a fan of our blog!

Do you have a Facebook account? If so, you can help spread the word about the Data Center Design blog by joining our newly created Facebook Page. Be among the first to hear about blog updates, speaking engagements and other upcoming events.

Click here to view the Facebook Page for PTS Data Center Solutions and add yourself as a Fan.

Wednesday, February 13, 2008

Are “free” computer room site assessment services worth the money you pay for them?

It has become commonplace for the myriad IT and support infrastructure OEMs to offer free site assessment services in an effort to woo clients into purchasing their equipment.

While it was already difficult enough for small- to mid-size design consulting service providers to build credibility and brand-identity in the ultra-competitive world of computer room design, in the past few years these firms have seen some of their most valuable vendor partners become chief competitors.

This is not just a case of sour grapes. The design services provided by most OEMs do their clients a disservice. Clients are usually given only the part of the picture that suits the manufacturer and are forced to fill in the blanks. Unfortunately, the blanks are often not even identified, which leads to some very unhappy bean counters.

One leading power and cooling system manufacturer’s entire go-to-market strategy is based on allowing inexperienced enthusiasts to represent themselves as capable designers by providing them with access to an online configuration tool. Being an expert in its use myself, I can safely say the information it provides is rudimentary at best. Our team at PTS Data Center Solutions uses this tool only for ordering purposes and never for design. These online tools are being used by the manufacturer’s own systems engineers, reseller partners, or end users themselves to try to simplify the inherently complicated subject of computer room support infrastructure design.

The manufacturer’s configuration tool only provides solution recommendations for the equipment they manufacture. Much of the rest of the complete solution is missing, including the infrastructure they don’t sell, the labor to install any of it, and/or the engineering services to produce the design documentation required to file the necessary permits. Worse, little advice is provided as to the best project delivery methodology. While I would be the first to admit the traditional consulting engineering community has been slow to adapt to the latest design practices, the truth remains that as-a-matter-of-course changes to facilities still require the services of a licensed engineer. This includes the sizing of the power and cooling infrastructure.
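
To be fair, the first-order arithmetic behind cooling sizing is simple enough to sketch (the load figure below is hypothetical); the hard part, and the reason for the licensed engineer, is everything the sketch leaves out:

    # A rough heat-load sizing sketch with hypothetical inputs. It captures
    # only the first-order arithmetic (IT watts -> BTU/hr -> tons of cooling);
    # real designs must also account for lighting, people, envelope gains,
    # redundancy, and applicable codes -- hence the licensed engineer.

    WATTS_TO_BTU_PER_HR = 3.412
    BTU_PER_HR_PER_TON = 12000

    def cooling_tons(it_load_watts, overhead_factor=1.25):
        """Estimate cooling tonnage for an IT load plus a rough overhead margin."""
        total_watts = it_load_watts * overhead_factor
        btu_per_hr = total_watts * WATTS_TO_BTU_PER_HR
        return btu_per_hr / BTU_PER_HR_PER_TON

    print(f"{cooling_tons(40000):.1f} tons for a hypothetical 40 kW IT load")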

That’s not to say the use of tools doesn’t have its place. Any consultant-recommended solutions should always be based on sound engineering using the latest technologies, such as computational fluid dynamic (CFD) modeling.

Individuals seeking computer room solutions are better served by hiring experienced, licensed, capable design engineers that are well versed in all of the major infrastructure solutions. This ensures that for a moderate amount of money spent in the planning stage you come away with a properly designed project with a well-defined scope, schedule, and budget.

Tuesday, February 05, 2008

PTS’ 2008 Predictions for the Data Center Industry

I consider myself a veteran of the data center design industry. Additionally, I have the good fortune to visit as many as fifty data centers and computer rooms in the course of a year. And while I have seen good ones and bad ones, they all seem to share certain commonalities. As a result of my experiences and research covering a broad scope of concerns, I have compiled a list of the challenges the data center industry as a whole will face over the next few years.

The talent pool of senior-level experts is disappearing. Worse yet, as a nation we have not educated tomorrow’s engineers and technicians. This severe lack of experts will be an ever-present obstacle to sustainable corporate growth driven by technology evolution. In turn, this threatens the nation’s overall economic growth and will cause the United States to lose its place as the world’s technical leader. Our only saving grace will be to embrace the new world order and adapt to global solutions.

The original equipment manufacturers will own the data center design space. This is their best recourse in maintaining an ability to sell their ever improving infrastructure to customers with old, out-dated, ill-prepared facilities. A further prediction is that it will be difficult for these OEMs to provide heterogeneous and not self-serving designs. And even if they can, will clients believe it to be so?

Big surprise: data centers and computer rooms nationwide are running out of power, cooling, and space. Furthermore, due to the high capital cost and the time it takes to undertake a computer room improvement project, many operators will choose not to act. My prediction is that this will lead to business-impacting disruptions for at least 20% of businesses over the next three (3) years.

We will run out of utility power-producing capacity as a nation before the technical revolution is over. Furthermore, no amount of ‘green’ building will prevent this from happening. Like virtualization has been for processing capacity, ‘greenness’ is only an incremental band-aid on the proverbial bullet wound. My prediction is that the U.S. will experience more wide-area outages, such as the one in August of 2003, in the near future.

As the saying goes, ‘necessity is the mother of invention’. We had better hope so. My final prediction is that our technological leadership as a nation will be saved not by a band-aid application, by a grassroots conservation effort, or by sheer will alone. Ultimately, it will be saved by a sweeping improvement in the efficiency of how power is used by IT infrastructure. Materials research within the semiconductor industry will yield a massive reduction in the power dissipation of IT infrastructure. As a result, companies worldwide will take advantage by refreshing their IT equipment, thus allowing them to survive using their existing aging facilities and support infrastructure.

What is your number one prediction for the industry in the coming years? Whether you’re optimistic or foresee doom and gloom, I would love to hear what you think.


Wednesday, January 30, 2008

2008 Data Center Industry Trends

A recent article from Network World points to security as the dominant issue for the data center design industry in 2008. Potential threats identified by experts include:

  • Malware attacks that piggyback on major events such as the ’08 Olympics or the U.S. presidential election
  • The opportunity for the first serious security exploit in corporate VoIP networks
  • Additional malware vulnerability for users as participation in Web 2.0 continues to grow

Other important issues for 2008 as identified by Network World staff include:
  • The early adoption of 802.11n WLAN technology
  • A shift in IT’s approach to managing mission critical environments as virtualization and green computing are deployed more broadly
  • The growing acceptance of open source technology at the corporate level
  • Tightly controlled budgets as IT spending growth drops (particularly in response to news of economic recession)
  • Increased demand for “IT hybrids” – professionals with both business acumen and technical know-how – as the most sought-after hires

Source: Security dominates 2008 IT agenda